WorldWideScience

Sample records for model predicts critical

  1. Critical conceptualism in environmental modeling and prediction.

    Science.gov (United States)

    Christakos, G

    2003-10-15

    Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.

  2. Predictability of Critical Transitions

    CERN Document Server

    Zhang, Xiaozhu; Hallerberg, Sarah

    2015-01-01

    Critical transitions in multistable systems have been discussed as models for a variety of phenomena, ranging from the extinction of species to socio-economic changes and climate transitions between ice ages and warm ages. From bifurcation theory we can expect certain critical transitions to be preceded by a decreased recovery from external perturbations. The consequences of this critical slowing down have been observed as an increase in variance and autocorrelation prior to the transition. However, especially in the presence of noise, it is not clear whether these changes in observation variables are statistically relevant, such that they could be used as indicators of critical transitions. In this contribution we investigate the predictability of critical transitions in conceptual models. We study the quadratic integrate-and-fire model and the van der Pol model under the influence of external noise. We focus especially on the statistical analysis of the success of predictions and the overall predictabil...

  3. Critical appraisal and data extraction for systematic reviews of prediction modelling studies: the CHARMS checklist.

    Directory of Open Access Journals (Sweden)

    Karel G M Moons

    2014-10-01

    Carl Moons and colleagues provide a checklist and background explanation for critically appraising and extracting data from systematic reviews of prognostic and diagnostic prediction modelling studies. Please see later in the article for the Editors' Summary.

  4. A Critical Plane-energy Model for Multiaxial Fatigue Life Prediction of Homogeneous and Heterogeneous Materials

    Science.gov (United States)

    Wei, Haoyang

    A new critical plane-energy model is proposed in this thesis for multiaxial fatigue life prediction of homogeneous and heterogeneous materials. A brief review of existing methods, especially critical plane-based and energy-based methods, is given first. Special focus is on one critical plane approach which has been shown to work for both brittle and ductile metals. The key idea is to automatically change the critical plane orientation with respect to different materials and stress states. One potential drawback of the developed model is that it needs an empirical calibration parameter for non-proportional multiaxial loadings, since only the strain terms are used and the out-of-phase hardening cannot be considered. The energy-based model using the critical plane concept is proposed with the help of the Mroz-Garud hardening rule to explicitly include the effect of non-proportional hardening under fatigue cyclic loadings. Thus, the empirical calibration for non-proportional loading is not needed, since the out-of-phase hardening is naturally included in the stress calculation. The model predictions are compared with experimental data from the open literature, and it is shown that the proposed model can work for both proportional and non-proportional loadings without the empirical calibration. Next, the model is extended to the fatigue analysis of heterogeneous materials by integration with the finite element method. Fatigue crack initiation of representative volumes of heterogeneous materials is analyzed using the developed critical plane-energy model, with special focus on the microstructure effect on the multiaxial fatigue life predictions. Several conclusions are drawn and future work is outlined based on the proposed study.

  5. Prediction of Critical Power and W′ in Hypoxia: Application to Work-Balance Modelling

    Science.gov (United States)

    Townsend, Nathan E.; Nichols, David S.; Skiba, Philip F.; Racinais, Sebastien; Périard, Julien D.

    2017-01-01

    Purpose: Develop a prediction equation for critical power (CP) and work above CP (W′) in hypoxia for use in the work-balance (W′BAL) model. Methods: Nine trained male cyclists completed cycling time trials (TT; 12, 7, and 3 min) to determine CP and W′ at five altitudes (250, 1,250, 2,250, 3,250, and 4,250 m). Least squares regression was used to predict CP and W′ at altitude. A high-intensity intermittent test (HIIT) was performed at 250 and 2,250 m. Actual and predicted CP and W′ were used to compute W′ balance during HIIT using differential (W′BAL-DIFF) and integral (W′BAL-INT) forms of the W′BAL model. Results: CP decreased at altitude (P < 0.05). Conclusion: The prediction equations for CP and W′ developed in this study are suitable for use with the W′BAL model in acute hypoxia. This enables the application of W′BAL modelling to training prescription and competition analysis at altitude. PMID:28386237

  6. Criticality Model

    Energy Technology Data Exchange (ETDEWEB)

    A. Alsaed

    2004-09-14

    The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations and to calculate lower-bound tolerance limit (LBTL) values and determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of

  7. Development and validation of a seizure prediction model in critically ill children.

    Science.gov (United States)

    Yang, Amy; Arndt, Daniel H; Berg, Robert A; Carpenter, Jessica L; Chapman, Kevin E; Dlugos, Dennis J; Gallentine, William B; Giza, Christopher C; Goldstein, Joshua L; Hahn, Cecil D; Lerner, Jason T; Loddenkemper, Tobias; Matsumoto, Joyce H; Nash, Kendall B; Payne, Eric T; Sánchez Fernández, Iván; Shults, Justine; Topjian, Alexis A; Williams, Korwyn; Wusthoff, Courtney J; Abend, Nicholas S

    2015-02-01

    Electrographic seizures are common in encephalopathic critically ill children, but identification requires continuous EEG monitoring (CEEG). Development of a seizure prediction model would enable more efficient use of limited CEEG resources. We aimed to develop and validate a seizure prediction model for use among encephalopathic critically ill children. We developed a seizure prediction model using a retrospectively acquired multi-center database of children with acute encephalopathy without an epilepsy diagnosis, who underwent clinically indicated CEEG. We performed model validation using a separate prospectively acquired single center database. Predictor variables were chosen to be readily available to clinicians prior to the onset of CEEG and included: age, etiology category, clinical seizures prior to CEEG, initial EEG background category, and inter-ictal discharge category. The model has fair to good discrimination ability and overall performance. At the optimal cut-off point in the validation dataset, the model has a sensitivity of 59% and a specificity of 81%. Varied cut-off points could be chosen to optimize sensitivity or specificity depending on available CEEG resources. Despite inherent variability between centers, a model developed using multi-center CEEG data and few readily available variables could guide the use of limited CEEG resources when applied at a single center. Depending on CEEG resources, centers could choose lower cut-off points to maximize identification of all patients with seizures (but with more patients monitored) or higher cut-off points to reduce resource utilization by reducing monitoring of lower risk patients (but with failure to identify some patients with seizures). Copyright © 2014 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
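
    The trade-off described above, where a lower or higher cut-off on the model's predicted risk either maximizes identification of patients with seizures or reduces the number of patients monitored, can be illustrated with a small sketch. Everything below (cohort, risk scores, cut-off values) is hypothetical; the published model's variables and coefficients are not reproduced.

    # Illustrative sketch (not the published model): how moving the cut-off on a
    # risk score trades sensitivity against specificity, as described above.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical cohort: 1 = had electrographic seizures, 0 = did not.
    y = rng.integers(0, 2, size=500)
    # Hypothetical predicted seizure risk, loosely correlated with the outcome.
    risk = np.clip(0.3 * y + rng.normal(0.35, 0.2, size=500), 0, 1)

    for cutoff in (0.3, 0.5, 0.7):
        predicted_pos = risk >= cutoff
        tp = np.sum(predicted_pos & (y == 1))
        fn = np.sum(~predicted_pos & (y == 1))
        tn = np.sum(~predicted_pos & (y == 0))
        fp = np.sum(predicted_pos & (y == 0))
        sens = tp / (tp + fn)      # fraction of seizure patients flagged for CEEG
        spec = tn / (tn + fp)      # fraction of non-seizure patients spared monitoring
        monitored = predicted_pos.mean()
        print(f"cutoff={cutoff:.1f}  sensitivity={sens:.2f}  "
              f"specificity={spec:.2f}  proportion monitored={monitored:.2f}")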

  8. Monte Carlo tests of renormalization-group predictions for critical phenomena in Ising models

    Science.gov (United States)

    Binder, Kurt; Luijten, Erik

    2001-04-01

    A critical review is given of status and perspectives of Monte Carlo simulations that address bulk and interfacial phase transitions of ferromagnetic Ising models. First, some basic methodological aspects of these simulations are briefly summarized (single-spin flip vs. cluster algorithms, finite-size scaling concepts), and then the application of these techniques to the nearest-neighbor Ising model in d=3 and 5 dimensions is described, and a detailed comparison to theoretical predictions is made. In addition, the case of Ising models with a large but finite range of interaction and the crossover scaling from mean-field behavior to the Ising universality class are treated. If one considers instead a long-range interaction described by a power-law decay, new classes of critical behavior depending on the exponent of this power law become accessible, and a stringent test of the ε-expansion becomes possible. As a final type of crossover from mean-field type behavior to two-dimensional Ising behavior, the interface localization-delocalization transition of Ising films confined between “competing” walls is considered. This problem is still hampered by questions regarding the appropriate coarse-grained model for the fluctuating interface near a wall, which is the starting point for both this problem and the theory of critical wetting.
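
    As a concrete illustration of the single-spin-flip technique named above, the following is a minimal Metropolis sketch for the two-dimensional nearest-neighbour Ising model. It is only a toy: the review concerns d = 3 and d = 5, long-range interactions and cluster algorithms as well, and the lattice size, temperature and sweep count below are arbitrary choices.

    # Minimal single-spin-flip Metropolis sketch for the 2D nearest-neighbour
    # Ising model (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)
    L, T, sweeps = 32, 2.27, 200          # T near the 2D critical temperature ~2.269
    spins = rng.choice([-1, 1], size=(L, L))

    def local_field(s, i, j):
        # Sum of the four nearest neighbours with periodic boundaries.
        return (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                + s[i, (j + 1) % L] + s[i, (j - 1) % L])

    for _ in range(sweeps):
        for _ in range(L * L):            # one sweep = L*L attempted flips
            i, j = rng.integers(0, L, size=2)
            dE = 2.0 * spins[i, j] * local_field(spins, i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1

    m = abs(spins.mean())
    print(f"|magnetisation| per spin after {sweeps} sweeps: {m:.3f}")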

  9. Prediction of Critical Power and W' in Hypoxia: Application to Work-Balance Modelling.

    Science.gov (United States)

    Townsend, Nathan E; Nichols, David S; Skiba, Philip F; Racinais, Sebastien; Périard, Julien D

    2017-01-01

    Purpose: Develop a prediction equation for critical power (CP) and work above CP (W') in hypoxia for use in the work-balance (W'BAL) model. Methods: Nine trained male cyclists completed cycling time trials (TT; 12, 7, and 3 min) to determine CP and W' at five altitudes (250, 1,250, 2,250, 3,250, and 4,250 m). Least squares regression was used to predict CP and W' at altitude. A high-intensity intermittent test (HIIT) was performed at 250 and 2,250 m. Actual and predicted CP and W' were used to compute W' balance during HIIT using differential (W'BAL-DIFF) and integral (W'BAL-INT) forms of the W'BAL model. Results: CP decreased at altitude (P < 0.05), and there was no significant difference between using actual and predicted CP and W' for modelled W'BAL at 2,250 m (P = 0.24). The differential and integral forms returned significantly different values throughout HIIT (P < 0.05). Conclusion: The prediction equations for CP and W' developed in this study are suitable for use with the W'BAL model in acute hypoxia. This enables the application of W'BAL modelling to training prescription and competition analysis at altitude.
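
    The differential and integral forms of the W'BAL model referred to in this record are usually written as simple bookkeeping rules on the expended W'. The sketch below follows the formulations commonly attributed to Skiba and colleagues; the CP, W' and power values are invented, and the exact constants and forms used in this particular study may differ.

    # Sketch of the two W'BAL bookkeeping schemes mentioned above, in the forms
    # commonly attributed to Skiba and colleagues. Values are invented.
    import numpy as np

    CP, W0 = 250.0, 20000.0                      # hypothetical CP (W) and W' (J)
    power = np.array([300.0] * 120 + [180.0] * 120 + [320.0] * 60)   # 1-s samples

    def wbal_differential(power, cp, w0):
        """Differential form: linear depletion above CP, proportional refill below."""
        wbal = np.empty_like(power)
        w = w0
        for k, p in enumerate(power):
            if p >= cp:
                w -= (p - cp)                    # deplete joule-for-joule above CP
            else:
                w += (cp - p) * (w0 - w) / w0    # recover towards W'0 below CP
            wbal[k] = w
        return wbal

    def wbal_integral(power, cp, w0):
        """Integral form: expended W' recovers exponentially with time constant tau."""
        below = power[power < cp]
        d_cp = cp - below.mean() if below.size else 0.0
        tau = 546.0 * np.exp(-0.01 * d_cp) + 316.0
        expended = np.maximum(power - cp, 0.0)
        t = np.arange(power.size, dtype=float)
        wbal = np.empty_like(power)
        for k in range(power.size):
            decay = np.exp(-(t[k] - t[: k + 1]) / tau)
            wbal[k] = w0 - np.sum(expended[: k + 1] * decay)
        return wbal

    print("minimum W'BAL (differential):", wbal_differential(power, CP, W0).min())
    print("minimum W'BAL (integral):   ", wbal_integral(power, CP, W0).min())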

  10. A critical review of statistical calibration/prediction models handling data inconsistency and model inadequacy

    CERN Document Server

    Pernot, Pascal

    2016-01-01

    Inference of physical parameters from reference data is a well-studied problem with many intricacies (inconsistent sets of data due to experimental systematic errors, approximate physical models...). The complexity is further increased when the inferred parameters are used to make predictions (virtual measurements), because parameter uncertainty has to be estimated in addition to the parameters' best values. The literature is rich in statistical models for the calibration/prediction problem, each having benefits and limitations. We review and evaluate standard and state-of-the-art statistical models in a common Bayesian framework, and test them on synthetic and real datasets of temperature-dependent viscosity for the calibration of the Lennard-Jones parameters of a Chapman-Enskog model.
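
    A minimal example of the calibration/prediction workflow discussed above is sketched below: a single parameter of a toy model is calibrated against synthetic data on a grid posterior, and the parameter uncertainty is propagated into a predictive "virtual measurement". The model, prior and data are invented stand-ins, not the Lennard-Jones/Chapman-Enskog case from the paper.

    # Minimal sketch of Bayesian calibration followed by prediction.
    import numpy as np

    rng = np.random.default_rng(2)

    def model(theta, x):
        return theta * np.sqrt(x)                 # toy physical model

    x_obs = np.linspace(1.0, 5.0, 8)
    y_obs = model(2.0, x_obs) + rng.normal(0.0, 0.1, x_obs.size)   # synthetic data
    sigma = 0.1                                    # assumed measurement uncertainty

    theta_grid = np.linspace(1.0, 3.0, 2001)       # flat prior over this range
    resid = y_obs[None, :] - model(theta_grid[:, None], x_obs[None, :])
    log_post = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
    post = np.exp(log_post - log_post.max())
    post /= np.trapz(post, theta_grid)

    # Posterior-predictive mean and standard deviation at a new condition x*.
    x_new = 7.0
    pred = model(theta_grid, x_new)
    pred_mean = np.trapz(pred * post, theta_grid)
    pred_var = np.trapz((pred - pred_mean) ** 2 * post, theta_grid) + sigma ** 2
    print(f"prediction at x={x_new}: {pred_mean:.3f} +/- {np.sqrt(pred_var):.3f}")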

  11. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    Science.gov (United States)

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been used widely in medicine and the health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most widely used machine learning methods are explained, and the confusion between a statistical approach and machine learning is clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  12. A mathematical model for predicting glucose levels in critically-ill patients: the PIGnOLI model

    Directory of Open Access Journals (Sweden)

    Zhongheng Zhang

    2015-06-01

    Background and Objectives. Glycemic control is of paramount importance in the intensive care unit. Presently, several BG control algorithms have been developed for clinical trials, but they are mostly based on experts' opinion and consensus. There are no validated models predicting how glucose levels will change after initiation of insulin infusion in critically ill patients. The study aimed to develop an equation for initial insulin dose setting. Methods. A large critical care database was employed for the study. Linear regression model fitting was employed. Retested blood glucose was used as the dependent variable. Insulin rate was forced into the model. Multivariable fractional polynomials and interaction terms were used to explore the complex relationships among covariates. The overall fit of the model was examined by using residuals and adjusted R-squared values. Regression diagnostics were used to explore the influence of outliers on the model. Main Results. A total of 6,487 ICU admissions requiring insulin pump therapy were identified. The dataset was randomly split into two subsets at a 7 to 3 ratio. The initial model comprised fractional polynomials and interaction terms. However, this model was not stable when several outliers were excluded, so a simple linear model without interactions was fitted instead. The selected prediction model (Predicting Glucose Levels in ICU, PIGnOLI) included the variables initial blood glucose, insulin rate, PO volume, total parenteral nutrition, body mass index (BMI), lactate, congestive heart failure, renal failure, liver disease, time interval of BS recheck, and dextrose rate. Insulin rate was significantly associated with blood glucose reduction (coefficient: −0.52, 95% CI [−1.03, −0.01]). The parsimonious model was well validated with the validation subset, with an adjusted R-squared value of 0.8259. Conclusion. The study developed the PIGnOLI model for the initial insulin dose setting. Furthermore, experimental study is
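
    The kind of linear model described above can be sketched in a few lines: retested blood glucose is regressed on the initial glucose, the insulin rate and other covariates. The variables and data below are synthetic and the coefficients are not the published PIGnOLI estimates; the sketch only shows the fitting step.

    # Toy sketch of the kind of linear model described above: retested blood
    # glucose regressed on initial glucose, insulin rate and another covariate.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 500
    initial_bg = rng.normal(12.0, 3.0, n)          # mmol/L
    insulin_rate = rng.uniform(0.5, 6.0, n)        # U/h
    bmi = rng.normal(27.0, 4.0, n)

    # Synthetic outcome: retested glucose falls with insulin rate (cf. the negative
    # coefficient reported in the abstract) and tracks the initial value.
    retested_bg = (2.0 + 0.75 * initial_bg - 0.5 * insulin_rate + 0.02 * bmi
                   + rng.normal(0.0, 0.8, n))

    X = np.column_stack([np.ones(n), initial_bg, insulin_rate, bmi])
    coef, *_ = np.linalg.lstsq(X, retested_bg, rcond=None)
    pred = X @ coef
    ss_res = np.sum((retested_bg - pred) ** 2)
    ss_tot = np.sum((retested_bg - retested_bg.mean()) ** 2)
    print("coefficients [intercept, initial BG, insulin rate, BMI]:", np.round(coef, 3))
    print("R-squared:", round(1 - ss_res / ss_tot, 3))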

  13. Radiation induced dissolution of UO2 based nuclear fuel - A critical review of predictive modelling approaches

    Science.gov (United States)

    Eriksen, Trygve E.; Shoesmith, David W.; Jonsson, Mats

    2012-01-01

    Radiation induced dissolution of uranium dioxide (UO2) nuclear fuel and the consequent release of radionuclides to intruding groundwater are key processes in the safety analysis of future deep geological repositories for spent nuclear fuel. For several decades, these processes have been studied experimentally using both spent fuel and various types of simulated spent fuel. The latter have been employed since it is difficult to draw mechanistic conclusions from experiments on real spent nuclear fuel. Several predictive modelling approaches have been developed over the last two decades. These models are largely based on experimental observations. In this work we have performed a critical review of the modelling approaches developed on the basis of the large body of chemical and electrochemical experimental data. The main conclusions are: (1) the use of measured interfacial rate constants gives results in generally good agreement with experimental results, compared to simulations where homogeneous rate constants are used; (2) the use of spatial dose rate distributions is particularly important when simulating the behaviour over short time periods; and (3) the steady-state approach (the rate of oxidant consumption is equal to the rate of oxidant production) provides a simple but fairly accurate alternative, although errors in the reaction mechanism and in the kinetic parameters used may not be revealed by simple benchmarking. It is essential to use experimentally determined rate constants and verified reaction mechanisms, irrespective of whether the approach is chemical or electrochemical.

  14. Thermal hydraulic test for reactor safety system - Critical heat flux experiment and development of prediction models

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Soon Heung; Baek, Won Pil; Yang, Soo Hyung; No, Chang Hyun [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    2000-04-01

    Research was conducted to acquire CHF data through experiments and to develop prediction models. The final objectives of the research were as follows: 1) production of tube CHF data for low and middle pressure and mass flux, and flow boiling visualization; 2) modification and suggestion of tube CHF prediction models; 3) development of a fuel bundle CHF prediction methodology based on tube CHF prediction models. The major results of the research are as follows: 1) Production of CHF data for low and middle pressure and mass flux - acquisition of 764 CHF data points for low and middle pressure and flow conditions; analysis of CHF trends based on these data; assessment of existing CHF prediction methods against the data. 2) Modification and suggestion of tube CHF prediction models - development of a unified CHF model applicable over a wide parametric range; development of a threshold length correlation; improvement of the CHF look-up table using the threshold length correlation. 3) Development of a fuel bundle CHF prediction methodology based on tube CHF prediction models - development of a bundle CHF prediction methodology using a correction factor. 11 refs., 134 figs., 25 tabs. (Author)

  15. Patient-Specific Predictive Modeling Using Random Forests: An Observational Study for the Critically Ill

    Science.gov (United States)

    2017-01-01

    Background With a large-scale electronic health record repository, it is feasible to build a customized patient outcome prediction model specifically for a given patient. This approach involves identifying past patients who are similar to the present patient and using their data to train a personalized predictive model. Our previous work investigated a cosine-similarity patient similarity metric (PSM) for such patient-specific predictive modeling. Objective The objective of the study is to investigate the random forest (RF) proximity measure as a PSM in the context of personalized mortality prediction for intensive care unit (ICU) patients. Methods A total of 17,152 ICU admissions were extracted from the Multiparameter Intelligent Monitoring in Intensive Care II database. A number of predictor variables were extracted from the first 24 hours in the ICU. Outcome to be predicted was 30-day mortality. A patient-specific predictive model was trained for each ICU admission using an RF PSM inspired by the RF proximity measure. Death counting, logistic regression, decision tree, and RF models were studied with a hard threshold applied to RF PSM values to only include the M most similar patients in model training, where M was varied. In addition, case-specific random forests (CSRFs), which uses RF proximity for weighted bootstrapping, were trained. Results Compared to our previous study that investigated a cosine similarity PSM, the RF PSM resulted in superior or comparable predictive performance. RF and CSRF exhibited the best performances (in terms of mean area under the receiver operating characteristic curve [95% confidence interval], RF: 0.839 [0.835-0.844]; CSRF: 0.832 [0.821-0.843]). RF and CSRF did not benefit from personalization via the use of the RF PSM, while the other models did. Conclusions The RF PSM led to good mortality prediction performance for several predictive models, although it failed to induce improved performance in RF and CSRF. The distinction
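
    The random-forest proximity idea described above can be sketched as follows: two admissions are "similar" in proportion to the number of trees in which they fall into the same leaf, and only the M most similar past patients are used to produce a personalised estimate (here, simple death counting among those patients). The data below are synthetic; the MIMIC-II variables and the paper's exact model choices are not reproduced.

    # Sketch of a random-forest proximity patient-similarity metric (PSM) used to
    # select the M most similar past patients before producing a personal estimate.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(4)
    X = rng.normal(size=(2000, 10))                     # past ICU admissions
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)  # mortality

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    leaves = rf.apply(X)                                # (n_samples, n_trees) leaf ids

    def rf_proximity(x_new):
        """Fraction of trees in which x_new falls in the same leaf as each patient."""
        new_leaves = rf.apply(x_new.reshape(1, -1))[0]
        return (leaves == new_leaves).mean(axis=1)

    x_index = X[0]                                      # the "present" patient
    prox = rf_proximity(x_index)
    M = 300
    similar = np.argsort(prox)[-M:]                     # M most similar past patients

    # "Death counting" among the M most similar patients as the personalised risk.
    personal_risk = y[similar].mean()
    print("personalised 30-day mortality risk:", round(personal_risk, 3))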

  16. Predicting the local impacts of energy development: a critical guide to forecasting methods and models

    Energy Technology Data Exchange (ETDEWEB)

    Sanderson, D.; O'Hare, M.

    1977-05-01

    Models forecasting second-order impacts from energy development vary in their methodology, output, assumptions, and quality. As a rough dichotomy, they either simulate community development over time or combine various submodels providing community snapshots at selected points in time. Using one or more methods - input/output models, gravity models, econometric models, cohort-survival models, or coefficient models - they estimate energy-development-stimulated employment, population, public and private service needs, and government revenues and expenditures at some future time (ranging from annual to average year predictions) and for different governmental jurisdictions (municipal, county, state, etc.). Underlying assumptions often conflict, reflecting their different sources - historical data, comparative data, surveys, and judgments about future conditions. Model quality, measured by special features, tests, exportability and usefulness to policy-makers, reveals careful and thorough work in some cases and hurried operations with insufficient in-depth analysis in others.

  17. Critical velocity and anaerobic paddling capacity determined by different mathematical models and number of predictive trials in canoe slalom.

    Science.gov (United States)

    Messias, Leonardo H D; Ferrari, Homero G; Reis, Ivan G M; Scariot, Pedro P M; Manchado-Gobatto, Fúlvia B

    2015-03-01

    The purpose of this study was to analyze whether different combinations of trials, as well as different mathematical models, can modify the aerobic and anaerobic estimates from the critical velocity protocol applied in canoe slalom. Fourteen male elite slalom kayakers from the Brazilian canoe slalom team (K1) were evaluated. Athletes were submitted to four predictive trials of 150, 300, 450 and 600 meters in a lake, and the time to complete each trial was recorded. Critical velocity (CV, aerobic parameter) and anaerobic paddling capacity (APC, anaerobic parameter) were obtained by three mathematical models (Linear 1 = distance-time; Linear 2 = velocity-1/time; Non-linear = time-velocity). Linear 1 was chosen for comparison of predictive trial combinations. The standard combination (SC) was considered as the four trials (150, 300, 450 and 600 m). High fits of regression were obtained from all mathematical models (R² range = 0.96-1.00). Repeated measures ANOVA pointed out differences among the mathematical models for CV (p = 0.006) and APC (p = 0.016) as well as R² (p = 0.033). Estimates obtained from the first (150 m, lowest) and the fourth (600 m, highest) predictive trials were similar to, and highly correlated with, the SC (r = 0.98 for CV and r = 0.96 for APC). In summary, methodological aspects must be considered in critical velocity application in canoe slalom, since different combinations of trials as well as mathematical models resulted in different aerobic and anaerobic estimates. Key points: Great attention must be given to methodological concerns regarding the critical velocity protocol applied to canoe slalom, since different estimates were obtained depending on the mathematical model and the predictive trials used. Linear 1 showed the best fits of regression; furthermore, to the best of our knowledge and considering practical applications, this model is the easiest one for calculating the estimates from the critical velocity protocol. Considering this, the abyss between science
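
    The three mathematical models named in this record correspond to fitting the same trial data in three algebraic forms: distance against time (Linear 1), velocity against 1/time (Linear 2) and time against velocity (Non-linear). A short sketch of the three fits is given below; the trial times are invented, not the athletes' data.

    # Sketch of the three critical velocity models named above, fitted to
    # hypothetical trial data (times are invented).
    import numpy as np
    from scipy.optimize import curve_fit

    distance = np.array([150.0, 300.0, 450.0, 600.0])      # m
    time = np.array([36.0, 78.0, 122.0, 168.0])            # s (hypothetical)
    velocity = distance / time

    # Linear 1: distance = APC + CV * time
    cv1, apc1 = np.polyfit(time, distance, 1)
    # Linear 2: velocity = CV + APC * (1 / time)
    apc2, cv2 = np.polyfit(1.0 / time, velocity, 1)
    # Non-linear: time = APC / (velocity - CV)
    (apc3, cv3), _ = curve_fit(lambda v, apc, cv: apc / (v - cv),
                               velocity, time, p0=(apc1, cv1))

    for name, cv, apc in [("Linear 1", cv1, apc1), ("Linear 2", cv2, apc2),
                          ("Non-linear", cv3, apc3)]:
        print(f"{name}: CV = {cv:.2f} m/s, APC = {apc:.1f} m")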

  18. Evaluation of cloud prediction and determination of critical relative humidity for a mesoscale numerical weather prediction model

    Energy Technology Data Exchange (ETDEWEB)

    Seaman, N.L.; Guo, Z.; Ackerman, T.P. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

    Predictions of cloud occurrence and vertical location from the Pennsylvania State University/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) were evaluated statistically using cloud observations obtained at Coffeyville, Kansas, as part of the Second International Satellite Cloud Climatology Project Regional Experiment campaign. Seventeen cases were selected for simulation during a November-December 1991 field study. MM5 was used to produce two sets of 36-km simulations, one with and one without four-dimensional data assimilation (FDDA), and a set of 12-km simulations without FDDA, but nested within the 36-km FDDA runs.

  19. A comparison of administrative and physiologic predictive models in determining risk adjusted mortality rates in critically ill patients.

    Directory of Open Access Journals (Sweden)

    Kyle B Enfield

    BACKGROUND: Hospitals are increasingly compared based on clinical outcomes adjusted for severity of illness. Multiple methods exist to adjust for differences between patients. The challenge for consumers of this information, both the public and healthcare providers, is interpreting differences in risk adjustment models, particularly when models differ in their use of administrative and physiologic data. We set out to examine how administrative and physiologic models compare to each other when applied to critically ill patients. METHODS: We prospectively abstracted variables for a physiologic and an administrative model of mortality from two intensive care units in the United States. Predicted mortality was compared through the Pearson product-moment correlation coefficient and Bland-Altman analysis. A subgroup of patients admitted directly from the emergency department was analyzed to remove potential confounding from changes in condition prior to ICU admission. RESULTS: We included 556 patients from two academic medical centers in this analysis. The administrative and physiologic models' predicted mortalities for the combined cohort were 15.3% (95% CI 13.7%, 16.8%) and 24.6% (95% CI 22.7%, 26.5%), respectively (t-test p-value < 0.001). The r² for these models was 0.297. The Bland-Altman plot suggests that at low predicted mortality there was good agreement; however, as mortality increased the models diverged. Similar results were found when analyzing a subgroup of patients admitted directly from the emergency department. When comparing the two hospitals, there was a statistically significant difference when using the administrative model but not the physiologic model. Unexplained mortality, defined as those patients who died who had a predicted mortality less than 10%, was a rare event by either model. CONCLUSIONS: In conclusion, while it has been shown that administrative models provide estimates of mortality that are similar to physiologic models in non-critically ill patients with pneumonia, our results

  20. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic for public planning: without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly seen as necessary in order to identify aggregate level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds of prediction necessary and possible in spatial planning of urban development. Finally, the political implications of positions within theory of science rejecting the possibility of predictions about social phenomena are addressed.

  1. A dry-spot model for the prediction of critical heat flux in water boiling in bubbly flow regime

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Sang Jun; No, Hee Cheon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    This paper presents a prediction of critical heat flux (CHF) in the bubbly flow regime using the dry-spot model recently proposed by the authors for pool and flow boiling CHF, together with existing correlations for the forced convective heat transfer coefficient, active site density and bubble departure diameter in the nucleate boiling region. Without any of the empirical constants that are always present in earlier models, comparisons of the model predictions with experimental data for upward flow of water in vertical, uniformly heated round tubes are performed and show good agreement. The parametric trends of CHF have been explored with respect to variations in pressure, tube diameter and length, mass flux and inlet subcooling. 16 refs., 6 figs., 1 tab. (Author)

  2. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic for public planning: without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly seen as necessary in order to identify aggregate level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds of prediction necessary and possible in spatial planning of urban development. Finally, the political implications of positions within theory of science rejecting the possibility of predictions about social phenomena are addressed.

  3. Critical Velocity and Anaerobic Paddling Capacity Determined by Different Mathematical Models and Number of Predictive Trials in Canoe Slalom

    Directory of Open Access Journals (Sweden)

    Leonardo H. D. Messias, Homero G. Ferrari, Ivan G. M. Reis, Pedro P. M. Scariot, Fúlvia B. Manchado-Gobatto

    2015-03-01

    The purpose of this study was to analyze whether different combinations of trials, as well as different mathematical models, can modify the aerobic and anaerobic estimates from the critical velocity protocol applied in canoe slalom. Fourteen male elite slalom kayakers from the Brazilian canoe slalom team (K1) were evaluated. Athletes were submitted to four predictive trials of 150, 300, 450 and 600 meters in a lake, and the time to complete each trial was recorded. Critical velocity (CV, aerobic parameter) and anaerobic paddling capacity (APC, anaerobic parameter) were obtained by three mathematical models (Linear 1 = distance-time; Linear 2 = velocity-1/time; Non-linear = time-velocity). Linear 1 was chosen for comparison of predictive trial combinations. The standard combination (SC) was considered as the four trials (150, 300, 450 and 600 m). High fits of regression were obtained from all mathematical models (R² range = 0.96-1.00). Repeated measures ANOVA pointed out differences among the mathematical models for CV (p = 0.006) and APC (p = 0.016) as well as R² (p = 0.033). Estimates obtained from the first (150 m, lowest) and the fourth (600 m, highest) predictive trials were similar to, and highly correlated with, the SC (r = 0.98 for CV and r = 0.96 for APC). In summary, methodological aspects must be considered in critical velocity application in canoe slalom, since different combinations of trials as well as mathematical models resulted in different aerobic and anaerobic estimates.

  4. Review: Liquid film dryout model for predicting critical heat flux in annular two-phase flow

    Institute of Scientific and Technical Information of China (English)

    Bo JIAO; Li-min QIU; Jun-liang LU; Zhi-hua GAN

    2009-01-01

    Gas-liquid two-phase flow and heat transfer are encountered in numerous fields, such as chemical engineering, refrigeration, nuclear power reactors, the metallurgical industry and spaceflight. The critical heat flux (CHF) is one of the most important factors for the safety of such engineering systems. Since annular flow is the most common flow pattern in gas-liquid two-phase flow, predicting the CHF of annular two-phase flow is particularly significant. Many studies have shown that the liquid film dryout model is successful for that prediction, and determining the following parameters exerts a predominant effect on the accuracy of this model: onset of annular flow, inception criterion for droplet entrainment, entrainment fraction, and droplet deposition and entrainment rates. The main theoretical results achieved on the above five parameters are reviewed; limitations in the existing studies and problems for further research are also discussed.
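
    At its core, a liquid film dryout calculation integrates a film mass balance along the heated channel, with deposition adding liquid to the film, entrainment and evaporation removing it, and dryout (hence CHF) declared where the film flow rate reaches zero. The sketch below illustrates only that bookkeeping; the deposition and entrainment closures and all numerical values are invented placeholders, not the validated correlations discussed in the review.

    # Bare-bones illustration of liquid-film dryout bookkeeping: integrate the
    # film flow rate along the heated length, then bisect on the heat flux that
    # just dries the film out at the exit. All closures and numbers are invented.
    import numpy as np

    h_fg = 1.5e6          # latent heat of vaporisation, J/kg (illustrative)
    perimeter = 0.03      # heated perimeter, m
    length = 2.0          # heated length, m
    w_film_in = 0.02      # film mass flow rate at onset of annular flow, kg/s

    def film_flow_at_exit(q):                 # q: wall heat flux, W/m^2
        dz = length / 2000
        w = w_film_in
        for _ in range(2000):
            deposition = 0.004                # kg/(m*s), placeholder closure
            entrainment = 0.002               # kg/(m*s), placeholder closure
            evaporation = q * perimeter / h_fg
            w = max(w + dz * (deposition - entrainment - evaporation), 0.0)
        return w

    # Bisection on the heat flux that just dries the film out at the exit (the CHF).
    lo, hi = 1e4, 2e6
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if film_flow_at_exit(mid) > 0.0 else (lo, mid)
    print(f"illustrative dryout heat flux: {0.5 * (lo + hi) / 1e3:.0f} kW/m^2")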

  5. Investigating Predictive Role of Critical Thinking on Metacognition with Structural Equation Modeling

    Science.gov (United States)

    Arslan, Serhat

    2015-01-01

    The purpose of this study is to examine the relationships between critical thinking and metacognition. The sample of study consists of 390 university students who were enrolled in different programs at Sakarya University, in Turkey. In this study, the Critical Thinking Disposition Scale and Metacognitive Thinking Scale were used. The relationships…

  6. Vmax estimate from three-parameter critical velocity models: validity and impact on 800 m running performance prediction.

    Science.gov (United States)

    Bosquet, Laurent; Duchene, Antoine; Lecot, François; Dupont, Grégory; Leger, Luc

    2006-05-01

    The purpose of this study was to evaluate the validity of maximal velocity (Vmax) estimated from three-parameter systems models, and to compare the predictive value of two- and three-parameter models for the 800 m. Seventeen trained male subjects (VO2max = 66.54 ± 7.29 ml min(-1) kg(-1)) performed five randomly ordered constant velocity tests (CVT), a maximal velocity test (mean velocity over the last 10 m portion of a 40 m sprint) and an 800 m time trial (V800m). Five systems models (two three-parameter and three two-parameter) were used to compute Vmax (three-parameter models), critical velocity (CV), anaerobic running capacity (ARC) and V800m from times to exhaustion during the CVT. Vmax estimates were significantly lower than, and poorly associated with, the measured maximal velocity. Critical velocity (CV) alone explained 40-62% of the variance in V800m. Combining CV with the other parameters of each model to produce a calculated V800m resulted in a clear improvement of this relationship, with three-parameter models showing a better association with V800m than two-parameter models. Although three-parameter models appear to have a better predictive value for short-duration events such as the 800 m, the fact that Vmax is not associated with the ability it is supposed to reflect suggests that they are more empirical than systems models.

  7. Comparison between SSF and Critical-Plane models to predict fatigue lives under multiaxial proportional load histories

    Directory of Open Access Journals (Sweden)

    Manuel de Freitas

    2016-10-01

    Materials can be classified as shear or tensile sensitive, depending on the main fatigue microcrack initiation process under multiaxial loadings. The nature of the initiating microcrack can be evaluated from a stress scale factor (SSF), which usually multiplies the hydrostatic or the normal stress term of the adopted multiaxial fatigue damage parameter. Low SSF values are associated with a shear-sensitive material, while a large SSF indicates that a tensile-based multiaxial fatigue damage model should be used instead. For tension-torsion histories, a recently published approach combines the shear and normal stress amplitudes using a SSF polynomial function that depends on the stress amplitude ratio (SAR) between the shear and the normal components. Alternatively, critical-plane models calculate damage on the plane where damage is maximized, adopting a SSF value that is assumed constant for a given material, sometimes varying with the fatigue life (in cycles), but not with the SAR, the stress amplitude level, or the loading path shape. In this work, in-phase proportional tension-torsion tests on 42CrMo4 steel specimens for several values of the SAR are presented. The SSF approach is then compared with critical-plane models, based on their predicted fatigue lives and the observed values for these tension-torsion histories.

  8. Modeling and validation of a mechanistic tool (MEFISTO) for the prediction of critical power in BWR fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Adamsson, Carl, E-mail: carl.adamsson@psi.ch [Westinghouse Electric Sweden, SE-721 63, Vaesteras (Sweden); Le Corre, Jean-Marie, E-mail: lecorrjm@westinghouse.com [Westinghouse Electric Sweden, SE-721 63, Vaesteras (Sweden)

    2011-08-15

    Highlights: > The MEFISTO code efficiently and accurately predicts the dryout event in a BWR fuel bundle, using a mechanistic model. > A hybrid approach between a fast and robust sub-channel analysis and a three-field two-phase analysis is adopted. > MEFISTO modeling approach, calibration, CPU usage, sensitivity, trend analysis and performance evaluation are presented. > The calibration parameters and process were carefully selected to preserve the mechanistic nature of the code. > The code dryout prediction performance is near the level of fuel-specific empirical dryout correlations. - Abstract: Westinghouse is currently developing the MEFISTO code with the main goal to achieve fast, robust, practical and reliable prediction of steady-state dryout Critical Power in Boiling Water Reactor (BWR) fuel bundle based on a mechanistic approach. A computationally efficient simulation scheme was used to achieve this goal, where the code resolves all relevant field (drop, steam and multi-film) mass balance equations, within the annular flow region, at the sub-channel level while relying on a fast and robust two-phase (liquid/steam) sub-channel solution to provide the cross-flow information. The MEFISTO code can hence provide highly detailed solution of the multi-film flow in BWR fuel bundle while enhancing flexibility and reducing the computer time by an order of magnitude as compared to a standard three-field sub-channel analysis approach. Models for the numerical computation of the one-dimensional field flowrate distributions in an open channel (e.g. a sub-channel), including the numerical treatment of field cross-flows, part-length rods, spacers grids and post-dryout conditions are presented in this paper. The MEFISTO code is then applied to dryout prediction in BWR fuel bundle using VIPRE-W as a fast and robust two-phase sub-channel driver code. The dryout power is numerically predicted by iterating on the bundle power so that the minimum film flowrate in the bundle

  9. An Initial Critical Summary of Models for Predicting the Attenuation of Radio Waves by Trees

    Science.gov (United States)

    1982-07-01

    Keywords: propagation models; foliage propagation loss; jungle foliage.

  10. A general unified non-equilibrium model for predicting saturated and subcooled critical two-phase flow rates through short and long tubes

    Energy Technology Data Exchange (ETDEWEB)

    Fraser, D.W.H. [Univ. of British Columbia (Canada); Abdelmessih, A.H. [Univ. of Toronto, Ontario (Canada)

    1995-09-01

    A general unified model is developed to predict one-component critical two-phase pipe flow. Modelling of the two-phase flow is accomplished by describing the evolution of the flow between the location of flashing inception and the exit (critical) plane. The model approximates the nonequilibrium phase change process via thermodynamic equilibrium paths. Included are the relative effects of varying the location of flashing inception, pipe geometry, fluid properties and length to diameter ratio. The model predicts that a range of critical mass fluxes exists, bounded by a maximum and a minimum value for a given thermodynamic state. This range is more pronounced at lower subcooled stagnation states and can be attributed to the variation in the location of flashing inception. The model is based on the results of an experimental study of the critical two-phase flow of saturated and subcooled water through long tubes. In that study, the location of flashing inception was accurately controlled and adjusted through the use of a new device. The data obtained revealed that for fixed stagnation conditions, the maximum critical mass flux occurred with flashing inception located near the pipe exit, while minimum critical mass fluxes occurred with the flashing front located further upstream. Available data since 1970 for both short and long tubes over a wide range of conditions are compared with the model predictions. This includes test section L/D ratios from 25 to 300 and covers a temperature and pressure range of 110 to 280°C and 0.16 to 6.9 MPa, respectively. The predicted maximum and minimum critical mass fluxes show an excellent agreement with the range observed in the experimental data.

  11. Computational modelling of ovine critical-sized tibial defects with implanted scaffolds and prediction of the safety of fixator removal.

    Science.gov (United States)

    Doyle, Heather; Lohfeld, Stefan; Dürselen, Lutz; McHugh, Peter

    2015-04-01

    Computational model geometries of tibial defects with two types of implanted tissue engineering scaffolds, β-tricalcium phosphate (β-TCP) and poly-ε-caprolactone (PCL)/β-TCP, are constructed from µ-CT scan images of the real in vivo defects. Simulations of each defect under four-point bending and under simulated in vivo axial compressive loading are performed. The mechanical stability of each defect is analysed using stress distribution analysis. The results of this analysis highlight the influence of callus volume, and both scaffold volume and stiffness, on the load-bearing abilities of these defects. Clinically-used image-based methods to predict the safety of removing external fixation are evaluated for each defect. Comparison of these measures with the results of computational analyses indicates that care must be taken in the interpretation of these measures.

  12. Modeling Dependencies in Critical Infrastructures

    NARCIS (Netherlands)

    Nieuwenhuijs, A.H.; Luiijf, H.A.M.; Klaver, M.H.A.

    2009-01-01

    This paper describes a model for expressing critical infrastructure dependencies. The model addresses the limitations of existing approaches with respect to clarity of definition, support for quality and the influence of operating states of critical infrastructures and environmental factors.

  13. Modeling the effects of light and temperature on algae growth: state of the art and critical assessment for productivity prediction during outdoor cultivation.

    Science.gov (United States)

    Béchet, Quentin; Shilton, Andy; Guieysse, Benoit

    2013-12-01

    The ability to model algal productivity under transient conditions of light intensity and temperature is critical for assessing the profitability and sustainability of full-scale algae cultivation outdoors. However, a review of over 40 modeling approaches reveals that most of the models hitherto described in the literature have not been validated under conditions relevant to outdoor cultivation. With respect to light intensity, we therefore categorized and assessed these models based on their theoretical ability to account for the light gradients and short light cycles experienced in well-mixed dense outdoor cultures. Type I models were defined as models predicting the rate of photosynthesis of the entire culture as a function of the incident or average light intensity reaching the culture. Type II models were defined as models computing productivity as the sum of local productivities within the cultivation broth (based on the light intensity locally experienced by individual cells) without consideration of short light cycles. Type III models were then defined as models considering the impacts of both light gradients and short light cycles. Whereas Type I models are easy to implement, they are theoretically not applicable to outdoor systems outside the range of experimental conditions used for their development. By contrast, Type III models offer significant refinement but the complexity of the inputs needed currently restricts their practical application. We therefore propose that Type II models currently offer the best compromise between accuracy and practicability for full scale engineering application. With respect to temperature, we defined as "coupled" and "uncoupled" models the approaches which account and do not account for the potential interdependence of light and temperature on the rate of photosynthesis, respectively. Due to the high number of coefficients of coupled models and the associated risk of overfitting, the recommended approach is uncoupled
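
    A Type II calculation in the sense defined above computes productivity as the sum of local productivities over the light gradient in the culture. The sketch below combines Beer-Lambert attenuation with a simple saturating photosynthesis-irradiance response; all parameter values are illustrative placeholders rather than a validated parameterisation.

    # Illustrative Type II productivity calculation: light is attenuated with
    # depth (Beer-Lambert) and a local growth response is integrated over depth.
    import numpy as np

    I0 = 1500.0          # incident light, umol photons m^-2 s^-1
    X = 0.5              # biomass concentration, kg m^-3
    k = 150.0            # biomass-specific attenuation coefficient, m^2 kg^-1
    depth = 0.3          # pond depth, m
    mu_max = 0.06        # maximum specific growth rate, 1/h
    K_I = 120.0          # half-saturation light intensity, umol m^-2 s^-1

    z = np.linspace(0.0, depth, 500)
    I_z = I0 * np.exp(-k * X * z)                 # local light intensity at depth z
    mu_z = mu_max * I_z / (I_z + K_I)             # local specific growth rate

    # Depth-averaged growth rate and areal productivity (sum of local rates).
    mu_avg = np.trapz(mu_z, z) / depth
    areal_productivity = mu_avg * X * depth * 1000  # g m^-2 h^-1
    print(f"depth-averaged growth rate: {mu_avg:.4f} 1/h")
    print(f"areal productivity: {areal_productivity:.2f} g m^-2 h^-1")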

  14. Test of φ² model predictions near the ³He liquid-gas critical point

    Science.gov (United States)

    Barmatz, M.; Zhong, F.; Hahn, I.

    2000-01-01

    NASA is supporting the development of an experiment called MISTE (Microgravity Scaling Theory Experiment) for a future International Space Station mission. The main objective of this flight experiment is to perform in-situ PVT, constant-volume heat capacity (C_v), and susceptibility (χ_T) measurements in the asymptotic region near the ³He liquid-gas critical point.

  15. Predicting Critical Speeds in Rotordynamics: A New Method

    Science.gov (United States)

    Knight, J. D.; Virgin, L. N.; Plaut, R. H.

    2016-09-01

    In rotordynamics, it is often important to be able to predict critical speeds. The passage through resonance is generally difficult to model. Rotating shafts with a disk are analyzed in this study, and experiments are conducted with one and two disks on a shaft. The approach presented here involves the use of a relatively simple prediction technique, and since it is a black-box data-based approach, it is suitable for in-situ applications.

  16. Development and validation of risk-stratification delirium prediction model for critically ill patients: A prospective, observational, single-center study.

    Science.gov (United States)

    Chen, Yu; Du, Hang; Wei, Bao-Hua; Chang, Xue-Ni; Dong, Chen-Ming

    2017-07-01

    The objective was to develop a model based on risk stratification to predict delirium among adult critically ill patients, and to determine whether early intervention could be provided for high-risk patients, which could reduce the incidence of delirium. We designed a prospective, observational, single-center study. We examined 11 factors, including age, APACHE-II score, coma, emergency operation, mechanical ventilation (MV), multiple trauma, metabolic acidosis, history of hypertension, delirium and dementia, and application of Dexmedetomidine Hydrochloride. The confusion assessment method for the intensive care unit (CAM-ICU) was performed to screen patients during their ICU stay. Multivariate logistic regression analysis was used to develop the model, and we assessed the predictive ability of the model by using the area under the receiver operating characteristics curve (AUROC). From May 17, 2016 to September 25, 2016, 681 consecutive patients were screened, 61 of whom were excluded. The most frequent reason for exclusion was sustained coma, 30 (4.4%), followed by an ICU length of stay that was too short and delirium before ICU admission, 13 (1.9%). Among the remaining 620 patients (including 162 nervous system disease patients), 160 patients (25.8%) developed delirium, and 64 (39.5%) had nervous system disease. The mean age was 55 ± 18 years old, the mean APACHE-II score was 16 ± 4, and 49.2% of them were male. Spearman analysis of nervous system disease and incidence of delirium showed a correlation coefficient of 0.186 (P < 0.05). A risk-stratification model was developed to predict delirium in critically ill patients, and the study further determined that prophylaxis with Dexmedetomidine Hydrochloride in delirious ICU patients was beneficial. Patients who suffer from nervous system disease are at a higher incidence of delirium, and corresponding measures should be used for prevention. ChiCTR-OOC-16008535.

  17. Medical image analysis methods in MR/CT-imaged acute-subacute ischemic stroke lesion: Segmentation, prediction and insights into dynamic evolution simulation models. A critical appraisal☆

    Science.gov (United States)

    Rekik, Islem; Allassonnière, Stéphanie; Carpenter, Trevor K.; Wardlaw, Joanna M.

    2012-01-01

    Over the last 15 years, basic thresholding techniques in combination with standard statistical correlation-based data analysis tools have been widely used to investigate different aspects of evolution of acute or subacute to late stage ischemic stroke in both human and animal data. Yet, a wave of biology-dependent and imaging-dependent issues is still untackled pointing towards the key question: “how does an ischemic stroke evolve?” Paving the way for potential answers to this question, both magnetic resonance (MRI) and CT (computed tomography) images have been used to visualize the lesion extent, either with or without spatial distinction between dead and salvageable tissue. Combining diffusion and perfusion imaging modalities may provide the possibility of predicting further tissue recovery or eventual necrosis. Going beyond these basic thresholding techniques, in this critical appraisal, we explore different semi-automatic or fully automatic 2D/3D medical image analysis methods and mathematical models applied to human, animal (rats/rodents) and/or synthetic ischemic stroke to tackle one of the following three problems: (1) segmentation of infarcted and/or salvageable (also called penumbral) tissue, (2) prediction of final ischemic tissue fate (death or recovery) and (3) dynamic simulation of the lesion core and/or penumbra evolution. To highlight the key features in the reviewed segmentation and prediction methods, we propose a common categorization pattern. We also emphasize some key aspects of the methods such as the imaging modalities required to build and test the presented approach, the number of patients/animals or synthetic samples, the use of external user interaction and the methods of assessment (clinical or imaging-based). Furthermore, we investigate how any key difficulties, posed by the evolution of stroke such as swelling or reperfusion, were detected (or not) by each method. In the absence of any imaging-based macroscopic dynamic model

  18. Critical Features of Fragment Libraries for Protein Structure Prediction.

    Science.gov (United States)

    Trevizani, Raphael; Custódio, Fábio Lima; Dos Santos, Karina Baptista; Dardenne, Laurent Emmanuel

    2017-01-01

    The use of fragment libraries is a popular approach among protein structure prediction methods and has proven to substantially improve the quality of predicted structures. However, some vital aspects of a fragment library that influence the accuracy of modeling a native structure remain to be determined. This study investigates some of these features. Particularly, we analyze the effect of using secondary structure prediction guiding fragments selection, different fragments sizes and the effect of structural clustering of fragments within libraries. To have a clearer view of how these factors affect protein structure prediction, we isolated the process of model building by fragment assembly from some common limitations associated with prediction methods, e.g., imprecise energy functions and optimization algorithms, by employing an exact structure-based objective function under a greedy algorithm. Our results indicate that shorter fragments reproduce the native structure more accurately than the longer. Libraries composed of multiple fragment lengths generate even better structures, where longer fragments show to be more useful at the beginning of the simulations. The use of many different fragment sizes shows little improvement when compared to predictions carried out with libraries that comprise only three different fragment sizes. Models obtained from libraries built using only sequence similarity are, on average, better than those built with a secondary structure prediction bias. However, we found that the use of secondary structure prediction allows greater reduction of the search space, which is invaluable for prediction methods. The results of this study can be critical guidelines for the use of fragment libraries in protein structure prediction.

  19. Critical Features of Fragment Libraries for Protein Structure Prediction

    Science.gov (United States)

    dos Santos, Karina Baptista

    2017-01-01

    The use of fragment libraries is a popular approach among protein structure prediction methods and has proven to substantially improve the quality of predicted structures. However, some vital aspects of a fragment library that influence the accuracy of modeling a native structure remain to be determined. This study investigates some of these features. Particularly, we analyze the effect of using secondary structure prediction guiding fragments selection, different fragments sizes and the effect of structural clustering of fragments within libraries. To have a clearer view of how these factors affect protein structure prediction, we isolated the process of model building by fragment assembly from some common limitations associated with prediction methods, e.g., imprecise energy functions and optimization algorithms, by employing an exact structure-based objective function under a greedy algorithm. Our results indicate that shorter fragments reproduce the native structure more accurately than the longer. Libraries composed of multiple fragment lengths generate even better structures, where longer fragments show to be more useful at the beginning of the simulations. The use of many different fragment sizes shows little improvement when compared to predictions carried out with libraries that comprise only three different fragment sizes. Models obtained from libraries built using only sequence similarity are, on average, better than those built with a secondary structure prediction bias. However, we found that the use of secondary structure prediction allows greater reduction of the search space, which is invaluable for prediction methods. The results of this study can be critical guidelines for the use of fragment libraries in protein structure prediction. PMID:28085928

  20. Copeptin Predicts Mortality in Critically Ill Patients

    Science.gov (United States)

    Krychtiuk, Konstantin A.; Honeder, Maria C.; Lenz, Max; Maurer, Gerald; Wojta, Johann; Heinz, Gottfried; Huber, Kurt; Speidl, Walter S.

    2017-01-01

    Background Critically ill patients admitted to a medical intensive care unit exhibit a high mortality rate irrespective of the cause of admission. Besides its role in fluid and electrolyte balance, vasopressin has been described as a stress hormone. Copeptin, the C-terminal portion of provasopressin, mirrors vasopressin levels and has been described as a reliable biomarker for the individual’s stress level and was associated with outcome in various disease entities. The aim of this study was to analyze whether circulating levels of copeptin at ICU admission are associated with 30-day mortality. Methods In this single-center prospective observational study including 225 consecutive patients admitted to a tertiary medical ICU at a university hospital, blood was taken at ICU admission and copeptin levels were measured using a commercially available automated sandwich immunofluorescent assay. Results Median acute physiology and chronic health evaluation II score was 20 and 30-day mortality was 25%. Median copeptin admission levels were significantly higher in non-survivors as compared with survivors (77.6 IQR 30.7–179.3 pmol/L versus 45.6 IQR 19.6–109.6 pmol/L; p = 0.025). Patients with serum levels of copeptin in the third tertile at admission had a 2.4-fold (95% CI 1.2–4.6; p = 0.01) increased mortality risk as compared to patients in the first tertile. When analyzing patients according to cause of admission, copeptin was only predictive of 30-day mortality in patients admitted due to medical causes as opposed to those admitted after cardiac surgery, as medical patients with levels of copeptin in the highest tertile had a 3.3-fold (95% CI 1.6–6.8, p = 0.002) risk of dying independent of APACHE II score, primary diagnosis, vasopressor use and need for mechanical ventilation. Conclusion Circulating levels of copeptin at ICU admission independently predict 30-day mortality in patients admitted to a medical ICU. PMID:28118414
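
As a rough illustration of the tertile analysis reported above, the following sketch splits simulated admission copeptin values into tertiles and computes an unadjusted odds ratio of 30-day mortality for the top versus bottom tertile. The data, the assumed risk model, and the lack of covariate adjustment are all simplifications; the published estimate was additionally adjusted for APACHE II score and other factors.

```python
# Illustrative tertile analysis on simulated data (not the study's cohort).
import numpy as np

rng = np.random.default_rng(1)
n = 225
copeptin = rng.lognormal(mean=4.0, sigma=1.0, size=n)             # pmol/L (simulated)
p_death = 1 / (1 + np.exp(-(-3.5 + 0.6 * np.log(copeptin))))      # assumed risk model
died = rng.random(n) < p_death

t1, t2 = np.quantile(copeptin, [1/3, 2/3])
low, high = copeptin <= t1, copeptin > t2

def odds_ratio(exposed, outcome):
    """Unadjusted OR from a 2x2 table with a Wald-type 95% CI."""
    a = np.sum(exposed & outcome);  b = np.sum(exposed & ~outcome)
    c = np.sum(~exposed & outcome); d = np.sum(~exposed & ~outcome)
    or_ = (a * d) / (b * c)
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)                           # SE of log(OR)
    lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se)
    return or_, lo, hi

mask = low | high                                                  # top vs. bottom tertile only
or_, lo, hi = odds_ratio(high[mask], died[mask])
print(f"mortality OR, 3rd vs. 1st tertile: {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```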

  1. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
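
The two simulators described above (uncorrelated hourly samples that reproduce the speed distribution, and a stochastic model that also reproduces correlations) can be sketched as follows. The Weibull parameters and the AR(1) autocorrelation are assumptions for illustration, not the Goldstone statistics.

```python
# Sketch of an "interim" (uncorrelated) and a stochastic (correlated) wind-speed
# simulator. Distribution and autocorrelation values are assumed for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
k, c = 2.0, 7.0                 # assumed Weibull shape and scale (m/s)
n, rho = 24 * 30, 0.8           # one month of hourly samples, lag-1 autocorrelation

# interim model: independent draws reproducing only the speed distribution
v_uncorrelated = stats.weibull_min.ppf(rng.random(n), k, scale=c)

# stochastic model: AR(1) Gaussian series mapped through the Weibull quantile
z = np.empty(n)
z[0] = rng.normal()
for t in range(1, n):
    z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.normal()
v_correlated = stats.weibull_min.ppf(stats.norm.cdf(z), k, scale=c)

print("mean speeds (m/s):", v_uncorrelated.mean().round(2), v_correlated.mean().round(2))
print("lag-1 autocorrelation of correlated series:",
      np.corrcoef(v_correlated[:-1], v_correlated[1:])[0, 1].round(2))
```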

  2. Critical review of prostate cancer predictive tools.

    Science.gov (United States)

    Shariat, Shahrokh F; Kattan, Michael W; Vickers, Andrew J; Karakiewicz, Pierre I; Scardino, Peter T

    2009-12-01

    Prostate cancer is a very complex disease, and the decision-making process requires the clinician to balance clinical benefits, life expectancy, comorbidities and potential treatment-related side effects. Accurate prediction of clinical outcomes may help in the difficult process of making decisions related to prostate cancer. In this review, we discuss attributes of predictive tools and systematically review those available for prostate cancer. Types of tools include probability formulas, look-up and propensity scoring tables, risk-class stratification prediction tools, classification and regression tree analysis, nomograms and artificial neural networks. Criteria to evaluate tools include discrimination, calibration, generalizability, level of complexity, decision analysis and ability to account for competing risks and conditional probabilities. The available predictive tools and their features, with a focus on nomograms, are described. While some tools are well-calibrated, few have been externally validated or directly compared with other tools. In addition, the clinical consequences of applying predictive tools need thorough assessment. Nevertheless, predictive tools can facilitate medical decision-making by showing patients tailored predictions of their outcomes with various alternatives. Additionally, accurate tools may improve clinical trial design.

  3. Predictive models in urology.

    Science.gov (United States)

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve and therefore how current relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are steadily gaining popularity not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.

  4. A novel modeling to predict the critical current behavior of Nb3Sn PIT strand under transverse load based on a scaling law and Finite Element Analysis

    CERN Document Server

    Wang, Tiening; Takayasu, Makoto; Bordini, Bernardo

    2014-01-01

    Superconducting Nb3Sn Powder-In-Tube (PIT) strands could be used for the superconducting magnets of the next generation Large Hadron Collider. The strands are cabled into the typical flat Rutherford cable configuration. During the assembly of a magnet and its operation the strands experience not only longitudinal but also transverse load due to the pre-compression applied during the assembly and the Lorentz load felt when the magnets are energized. To properly design the magnets and guarantee their safe operation, mechanical load effects on the strand superconducting properties are studied extensively; particularly, many scaling laws based on tensile load experiments have been established to predict the critical current dependence on strain. However, the dependence of the superconducting properties on transverse load has not been extensively studied so far. One of the reasons is that transverse loading experiments are difficult to conduct due to the small diameter of the strand (about 1 mm) and the data curre...

  5. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... paper, we will present an introduction to the theory and application of MPC with Matlab codes written to ... model predictive control, linear systems, discrete-time systems, ... and then compute very rapidly for this open-loop con-.
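
The core receding-horizon idea referred to above (solve an open-loop optimal control problem over a finite horizon, apply only the first input, then repeat from the new state) can be sketched for an unconstrained discrete-time linear system as below. The plant, horizon, and weights are arbitrary, and the cited article works in Matlab, so this Python sketch is only a parallel illustration, not the article's code.

```python
# Minimal receding-horizon (MPC) sketch for a discrete-time linear system.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # double integrator (dt = 0.1), illustrative plant
B = np.array([[0.005], [0.1]])
N, qx, ru = 20, 1.0, 0.1                   # horizon and cost weights (arbitrary)

def predict_matrices(A, B, N):
    """Stack the predictions x_k = F x_0 + G u_{0..k-1} over the horizon."""
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for k in range(N):
        for j in range(k + 1):
            G[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    return F, G

F, G = predict_matrices(A, B, N)
H = qx * G.T @ G + ru * np.eye(N)          # Hessian of the finite-horizon cost

def mpc_step(x):
    """Solve the open-loop problem from state x and return only the first input."""
    f = qx * G.T @ (F @ x)
    return np.linalg.solve(H, -f)[0]       # unconstrained minimizer

x = np.array([1.0, 0.0])                   # initial state: 1 m offset, at rest
for _ in range(100):
    u = mpc_step(x)
    x = A @ x + B.flatten() * u            # apply the first input, advance the plant
print("final state:", np.round(x, 4))
```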

  6. The critical thinking curriculum model

    Science.gov (United States)

    Robertson, William Haviland

    The Critical Thinking Curriculum Model (CTCM) utilizes a multidisciplinary approach that integrates effective learning and teaching practices with computer technology. The model is designed to be flexible within a curriculum, an example for teachers to follow, where they can plug in their own critical issue. This process engages students in collaborative research that can be shared in the classroom, across the country or around the globe. The CTCM features open-ended and collaborative activities that deal with current, real world issues which leaders are attempting to solve. As implemented in the Critical Issues Forum (CIF), an educational program administered by Los Alamos National Laboratory (LANL), the CTCM encompasses the political, social/cultural, economic, and scientific realms in the context of a current global issue. In this way, students realize the importance of their schooling by applying their efforts to an endeavor that ultimately will affect their future. This study measures student attitudes toward science and technology and the changes that result from immersion in the CTCM. It also assesses the differences in student learning in science content and problem solving for students involved in the CTCM. A sample of 24 students participated in classrooms at two separate high schools in New Mexico. The evaluation results were analyzed using SPSS in a MANOVA format in order to determine the significance of the between and within-subjects effects. A comparison ANOVA was done for each two-way MANOVA to see if the comparison groups were equal. Significant findings were validated using the Scheffe test in a Post Hoc analysis. Demographic information for the sample population was recorded and tracked, including self-assessments of computer use and availability. Overall, the results indicated that the CTCM did help to increase science content understanding and problem-solving skills for students, thereby positively affecting critical thinking. No matter if the

  7. Nominal model predictive control

    OpenAIRE

    Grüne, Lars

    2013-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  8. Nominal Model Predictive Control

    OpenAIRE

    Grüne, Lars

    2014-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  9. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  10. Multiaxial fatigue crack path prediction using critical plane concept

    Directory of Open Access Journals (Sweden)

    Jafar Albinmousa

    2016-01-01

    Full Text Available Prediction of fatigue crack orientation can be an essential step for estimating fatigue crack path. Critical plane concept is widely used due to its physical basis that fatigue failure is associated with certain plane(s. However, recent investigations suggest that critical plane concept might need revision. In this paper, fatigue experiments that involve careful measurement of fatigue crack were reviewed. Predictions of fatigue crack orientation using critical plane concept were examined. Projected length and angle were used to characterize fatigue crack. Considering the entire fatigue life, this average representation suggests that it is more reasonable to assume the plane of maximum normal strain as the critical plane even though fundamentally the plane of maximum shear strain is more likely to be the critical one at early initiation stage.
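
For readers unfamiliar with the critical plane concept discussed above, the following sketch scans candidate plane orientations for a simple in-plane strain state and reports both the plane of maximum normal strain and the plane of maximum shear strain. The strain amplitudes are arbitrary example numbers, not data from the reviewed experiments.

```python
# Illustrative critical-plane scan using standard plane-strain transformation equations.
import numpy as np

eps_x, eps_y, gamma_xy = 0.004, -0.001, 0.006    # example strain amplitudes

theta = np.radians(np.arange(0, 180, 0.5))        # candidate plane orientations
eps_n = (eps_x + eps_y) / 2 + (eps_x - eps_y) / 2 * np.cos(2 * theta) \
        + gamma_xy / 2 * np.sin(2 * theta)        # normal strain on each plane
gamma_half = -(eps_x - eps_y) / 2 * np.sin(2 * theta) \
             + gamma_xy / 2 * np.cos(2 * theta)   # shear strain / 2 on each plane

i_n, i_s = np.argmax(eps_n), np.argmax(np.abs(gamma_half))
print(f"max normal strain {eps_n[i_n]:.4f} on plane at {np.degrees(theta[i_n]):.1f} deg")
print(f"max shear strain  {2*np.abs(gamma_half[i_s]):.4f} on plane at {np.degrees(theta[i_s]):.1f} deg")
```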

  11. Critical Zone Observatories (CZOs): Integrating measurements and models of Earth surface processes to improve prediction of landscape structure, function and evolution

    Science.gov (United States)

    Chorover, J.; Anderson, S. P.; Bales, R. C.; Duffy, C.; Scatena, F. N.; Sparks, D. L.; White, T.

    2012-12-01

    The "Critical Zone" - that portion of Earth's land surface that extends from the outer periphery of the vegetation canopy to the lower limit of circulating groundwater - has evolved in response to climatic and tectonic forcing throughout Earth's history, but human activities have recently emerged as a major agent of change as well. With funding from NSF, a network of currently six CZOs is being developed in the U.S. to provide infrastructure, data and models that facilitate understanding the evolution, structure, and function of this zone at watershed to grain scales. Each CZO is motivated by a unique set of hypotheses proposed by a specific investigator team, but coordination of cross-site activities is also leading to integration of a common set of multi-disciplinary tools and approaches for cross-site syntheses. The resulting harmonized four-dimensional datasets are intended to facilitate community-wide exploration of process couplings among hydrology, ecology, soil science, geochemistry and geomorphology across the larger (network-scale) parameter space. Such an approach enables testing of the generalizability of findings at a given site, and also of emergent hypotheses conceived independently of an original CZO investigator team. This two-pronged method for developing a network of individual CZOs across a range of watershed systems is now yielding novel observations and models that resolve mechanisms for Critical Zone change occurring on geological to hydrologic time-scales. For example, recent advances include improved understanding of (i) how mass and energy flux as modulated by ecosystem exchange transforms bedrock to structured, soil-mantled and/or erosive landscapes; (ii) how long-term evolution of landscape structure affects event-based hydrologic and biogeochemical response at pore to catchment scales; (iii) how complementary isotopic measurements can be used to resolve pathways and time scales of water and solute transport from canopy to stream, and

  12. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  13. A method for predicting critical load evaluating adhesion of coatings in scratch testing

    Institute of Scientific and Technical Information of China (English)

    CHEN Xi-fang(陈溪芳); YAN Mi(严密); YANG De-ren(杨德人); HIROSE Yukio

    2003-01-01

    In this paper, based on the experimental principle of evaluating adhesion by scratch testing, the peeling mechanism of thin films is discussed by applying contact theory and surface physics. A mathematical model is proposed for calculating the critical load determined by scratch testing. The factors for correctly evaluating the adhesion of coatings from the experimental data are discussed.

  14. Predictions of the marviken subcooled critical mass flux using the critical flow scaling parameters

    Energy Technology Data Exchange (ETDEWEB)

    Park, Choon Kyung; Chun, Se Young; Cho, Seok; Yang, Sun Ku; Chung, Moon Ki [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    A total of 386 critical flow data points from 19 of the 27 runs in the Marviken Test were selected and compared with the predictions of the correlations based on the critical flow scaling parameters. The results show that the critical mass flux in a very large diameter pipe can also be characterized by two scaling parameters, the discharge coefficient and the dimensionless subcooling (C_d,ref and ΔT*_sub). The agreement between the measured data and the predictions is excellent. 8 refs., 8 figs. 1 tab. (Author)

  15. Accuracy of critical-temperature sensitivity coefficients predicted by multilayered composite plate theories

    Science.gov (United States)

    Noor, Ahmed K.; Burton, Scott

    1992-01-01

    An assessment is made of the accuracy of the critical-temperature sensitivity coefficients of multilayered plates predicted by different modeling approaches, based on two-dimensional shear-deformation theories. The sensitivity coefficients considered measure the sensitivity of the critical temperatures to variations in different lamination and material parameters of the plate. The standard of comparison is taken to be the sensitivity coefficients obtained by the three-dimensional theory of thermoelasticity. Numerical studies are presented showing the effects of variation in the geometric and lamination parameters of the plate on the accuracy of both the sensitivity coefficients and the critical temperatures predicted by the different modeling approaches.

  16. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  17. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborate...... principles, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed...

  18. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Full Text Available Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screenings of the population at risk. Identifying individuals at high risk should allow targeted screenings and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) that underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer and Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (from 1 to 10 dysplastic naevi OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi OR was 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were
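
A melanoma risk score of the kind mentioned above can, in principle, be assembled from logistic-regression odds ratios. The sketch below does this for a hypothetical patient profile using a few of the reported ORs; the intercept and the exact category coding are not given in the abstract and are assumed purely for illustration, so the resulting probability is not the study's MRS.

```python
# Illustrative assembly of a log-odds risk score from published odds ratios.
import numpy as np

# ln(OR) contributions for a hypothetical patient profile (subset of factors)
log_odds_terms = {
    "sometimes used sunbeds":   np.log(4.018),
    "severe solar skin damage": np.log(8.274),
    "light brown/blond hair":   np.log(3.222),
    ">100 common naevi":        np.log(3.57),
}
intercept = -3.0                      # assumed baseline log-odds (not reported)

score = intercept + sum(log_odds_terms.values())
risk = 1 / (1 + np.exp(-score))       # logistic transform to a probability
print(f"risk score {score:.2f}  ->  illustrative predicted probability {risk:.2f}")
```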

  19. In Eating-Disordered Inpatient Adolescents, Self-Criticism Predicts Nonsuicidal Self-Injury.

    Science.gov (United States)

    Itzhaky, Liat; Shahar, Golan; Stein, Daniel; Fennig, Silvana

    2016-08-01

    We examined the role of two depressive traits, self-criticism and dependency, in nonsuicidal self-injury (NSSI) and suicidal ideation among inpatient adolescents with eating disorders. In two studies (N = 103 and 55), inpatients were assessed for depressive traits, suicidal ideation, and NSSI. In Study 2, motivation for carrying out NSSI was also assessed. In both studies, depression predicted suicidal ideation and self-criticism predicted NSSI. In Study 2, depression and suicidal ideation also predicted NSSI. The automatic positive motivation for NSSI was predicted by dependency and depressive symptoms, and by a two-way interaction between self-criticism and dependency. Consistent with the "self-punishment model," self-criticism appears to constitute a dimension of vulnerability for NSSI.

  20. Critical evidence for the prediction error theory in associative learning.

    Science.gov (United States)

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning.
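
The prediction-error logic the study builds on can be illustrated with a minimal Rescorla-Wagner-style simulation of blocking: once one cue fully predicts the reward, the error term is near zero and a newly added cue gains little associative strength. The learning rate and trial counts below are arbitrary, and the sketch is not the authors' neural model of cricket learning.

```python
# Minimal prediction-error (Rescorla-Wagner-style) learning sketch showing blocking.
alpha, reward = 0.3, 1.0
V = {"A": 0.0, "B": 0.0}        # associative strengths of cues A and B

# Phase 1: cue A alone is paired with reward until it predicts it well
for _ in range(30):
    error = reward - V["A"]     # prediction error
    V["A"] += alpha * error

# Phase 2: the compound A+B is paired with the same reward
for _ in range(30):
    error = reward - (V["A"] + V["B"])   # error from the summed prediction
    V["A"] += alpha * error
    V["B"] += alpha * error

print(f"V(A) = {V['A']:.2f}, V(B) = {V['B']:.2f}  (B is 'blocked')")
```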

  1. Improving Agent Based Modeling of Critical Incidents

    Directory of Open Access Journals (Sweden)

    Robert Till

    2010-04-01

    Full Text Available Agent Based Modeling (ABM is a powerful method that has been used to simulate potential critical incidents in the infrastructure and built environments. This paper will discuss the modeling of some critical incidents currently simulated using ABM and how they may be expanded and improved by using better physiological modeling, psychological modeling, modeling the actions of interveners, introducing Geographic Information Systems (GIS and open source models.

  2. Competition-Induced Criticality in a Model of Meme Popularity

    Science.gov (United States)

    Gleeson, James P.; Ward, Jonathan A.; O'Sullivan, Kevin P.; Lee, William T.

    2014-01-01

    Heavy-tailed distributions of meme popularity occur naturally in a model of meme diffusion on social networks. Competition between multiple memes for the limited resource of user attention is identified as the mechanism that poises the system at criticality. The popularity growth of each meme is described by a critical branching process, and asymptotic analysis predicts power-law distributions of popularity with very heavy tails (exponent α <2, unlike preferential-attachment models), similar to those seen in empirical data.
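
The critical-branching-process picture summarized above can be illustrated with a short simulation: with mean offspring number equal to one, the distribution of total cascade sizes is heavy-tailed. The Poisson offspring distribution and the size cap below are assumptions made only to keep the sketch simple and finite; they are not the paper's network model.

```python
# Simulation sketch of cascade sizes in a critical branching process.
import numpy as np

rng = np.random.default_rng(2)

def cascade_size(mean_offspring=1.0, cap=10**5):
    """Total number of 'retweets' in one branching cascade (capped for safety)."""
    total, active = 1, 1
    while active and total < cap:
        offspring = rng.poisson(mean_offspring, size=active).sum()
        total += offspring
        active = offspring
    return total

sizes = np.array([cascade_size() for _ in range(20000)])
for s in (1, 10, 100, 1000):
    print(f"P(size >= {s:>4}) = {np.mean(sizes >= s):.4f}")   # heavy tail at criticality
```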

  3. Competition-induced criticality in a model of meme popularity.

    Science.gov (United States)

    Gleeson, James P; Ward, Jonathan A; O'Sullivan, Kevin P; Lee, William T

    2014-01-31

    Heavy-tailed distributions of meme popularity occur naturally in a model of meme diffusion on social networks. Competition between multiple memes for the limited resource of user attention is identified as the mechanism that poises the system at criticality. The popularity growth of each meme is described by a critical branching process, and asymptotic analysis predicts power-law distributions of popularity with very heavy tails (exponent α<2, unlike preferential-attachment models), similar to those seen in empirical data.

  4. Competition-induced criticality in a model of meme popularity

    CERN Document Server

    Gleeson, James P; O'Sullivan, Kevin P; Lee, William T

    2013-01-01

    Heavy-tailed distributions of meme popularity occur naturally in a model of meme diffusion on social networks. Competition between multiple memes for the limited resource of user attention is identified as the mechanism that poises the system at criticality. The popularity growth of each meme is described by a critical branching process, and asymptotic analysis predicts power-law distributions of popularity with very heavy tails (exponent $\\alpha<2$, unlike preferential-attachment models), similar to those seen in empirical data.

  5. Critical behavior of a dynamical percolation model

    Institute of Scientific and Technical Information of China (English)

    YU Mei-Ling; XU Ming-Mei; LIU Zheng-You; LIU Lian-Shou

    2009-01-01

    The critical behavior of the dynamical percolation model, which realizes the molecular-aggregation concept and describes the crossover between the hadronic phase and the partonic phase, is studied in detail. The critical percolation distance for this model is obtained by using the probability P∞ of the appearance of an infinite cluster. Utilizing the finite-size scaling method, the critical exponents γ/ν and τ are extracted from the distribution of the average cluster size and cluster number density. The influences of two model-related factors, i.e. the maximum bond number and the definition of the infinite cluster, on the critical behavior are found to be small.

  6. Developing neuronal networks: self-organized criticality predicts the future.

    Science.gov (United States)

    Pu, Jiangbo; Gong, Hui; Li, Xiangning; Luo, Qingming

    2013-01-01

    Self-organized criticality emerging in neural activity is one of the key concepts used to describe the formation and function of developing neuronal networks. The relationship between critical dynamics and neural development is both theoretically and experimentally appealing. However, whereas it is well known that cortical networks exhibit a rich repertoire of activity patterns at different stages during in vitro maturation, the dynamics of activity patterns throughout neural development still remain unclear. Here we show that a series of metastable network states emerged in the developing and "aging" process of hippocampal networks cultured from dissociated rat neurons. The unidirectional sequence of state transitions could only be observed in networks showing power-law scaling of distributed neuronal avalanches. Our data suggest that self-organized criticality may guide spontaneous activity into a sequential succession of homeostatically-regulated transient patterns during development, which may help to predict the tendency of neural development at early ages in the future.

  7. Predictive Models for Music

    OpenAIRE

    Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy

    2008-01-01

    Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...

  8. Safety-critical Java on a time-predictable processor

    DEFF Research Database (Denmark)

    Korsholm, Stephan E.; Schoeberl, Martin; Puffitsch, Wolfgang

    2015-01-01

    For real-time systems the whole execution stack needs to be time-predictable and analyzable for the worst-case execution time (WCET). This paper presents a time-predictable platform for safety-critical Java. The platform consists of (1) the Patmos processor, which is a time-predictable processor......; (2) a C compiler for Patmos with support for WCET analysis; (3) the HVM, which is a Java-to-C compiler; (4) the HVM-SCJ implementation which supports SCJ Level 0, 1, and 2 (for both single and multicore platforms); and (5) a WCET analysis tool. We show that real-time Java programs translated to C...... and compiled to a Patmos binary can be analyzed by the AbsInt aiT WCET analysis tool. To the best of our knowledge the presented system is the second WCET analyzable real-time Java system; and the first one on top of a RISC processor....

  9. Safety-Critical Java on a Time-predictable Processor

    DEFF Research Database (Denmark)

    Korsholm, Stephan Erbs; Schoeberl, Martin; Puffitsch, Wolfgang

    2015-01-01

    For real-time systems the whole execution stack needs to be time-predictable and analyzable for the worst-case execution time (WCET). This paper presents a time-predictable platform for safety-critical Java. The platform consists of (1) the Patmos processor, which is a time-predictable processor......; (2) a C compiler for Patmos with support for WCET analysis; (3) the HVM, which is a Java-to-C compiler; (4) the HVM-SCJ implementation which supports SCJ Level 0, 1, and 2 (for both single and multicore platforms); and (5) a WCET analysis tool. We show that real-time Java programs translated to C...... and compiled to a Patmos binary can be analyzed by the AbsInt aiT WCET analysis tool. To the best of our knowledge the presented system is the second WCET analyzable real-time Java system; and the first one on top of a RISC processor....

  10. Critical shoulder angle combined with age predict five shoulder pathologies: a retrospective analysis of 1000 cases.

    Science.gov (United States)

    Heuberer, Philipp R; Plachel, Fabian; Willinger, Lukas; Moroder, Philipp; Laky, Brenda; Pauzenberger, Leo; Lomoschitz, Fritz; Anderl, Werner

    2017-06-15

    Acromial morphology has previously been defined as a risk factor for some shoulder pathologies. Yet, study results are inconclusive and not all major shoulder diseases have been sufficiently investigated. Thus, the aim of the present study was to analyze the predictive value of three radiological parameters, including the critical shoulder angle, acromion index, and lateral acromion angle, in relation to symptomatic patients with cuff tear arthropathy, glenohumeral osteoarthritis, rotator cuff tear, impingement, or tendinitis calcarea. A total of 1000 patients' standardized true-anteroposterior radiographs were retrospectively assessed. Receiver-operating curve analyses and multinomial logistic regression were used to examine the association between shoulder pathologies and acromion morphology. The prediction model was derived from a development cohort and applied to a validation cohort. The prediction model's performance was statistically evaluated. The majority of radiological measurements were significantly different between shoulder pathologies, but the critical shoulder angle was an overall better parameter to predict and distinguish between the different pathologies than the acromion index or lateral acromion angle. Typical critical shoulder angle-age patterns for the different shoulder pathologies could be detected. Patients diagnosed with rotator cuff tears had the highest, whereas patients with osteoarthritis had the lowest critical shoulder angle. The youngest patients were in the tendinitis calcarea and the oldest in the cuff tear arthropathy group. The present study showed that critical shoulder angle and age, two easily assessable variables, adequately predict different shoulder pathologies in patients with shoulder complaints.
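
A schematic reconstruction of the kind of prediction model described above is sketched below: a multinomial logistic regression mapping critical shoulder angle (CSA) and age to a pathology class. The data are simulated to follow the qualitative pattern reported (high CSA for rotator cuff tears, low CSA for osteoarthritis, youngest patients in the tendinitis calcarea group); none of the numbers come from the study.

```python
# Schematic multinomial logistic regression on simulated CSA/age data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def simulate(n, csa_mean, age_mean, label):
    """Generate n synthetic patients around assumed CSA and age means."""
    csa = rng.normal(csa_mean, 2.0, n)
    age = rng.normal(age_mean, 8.0, n)
    return np.column_stack([csa, age]), [label] * n

X1, y1 = simulate(300, 38, 58, "rotator cuff tear")     # high CSA (assumed)
X2, y2 = simulate(300, 30, 65, "osteoarthritis")        # low CSA (assumed)
X3, y3 = simulate(300, 33, 45, "tendinitis calcarea")   # youngest group (assumed)
X = np.vstack([X1, X2, X3])
y = np.concatenate([y1, y2, y3])

model = LogisticRegression(max_iter=1000).fit(X, y)     # handles the 3 classes multinomially
patient = np.array([[37.5, 60.0]])                      # CSA in degrees, age in years
probs = dict(zip(model.classes_, model.predict_proba(patient)[0]))
print({k: round(v, 2) for k, v in probs.items()})
```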

  11. Critical Review of Membrane Bioreactor Models

    DEFF Research Database (Denmark)

    Naessens, W.; Maere, T.; Ratkovich, Nicolas Rios;

    2012-01-01

    modelling. In this paper, the vast literature on hydrodynamic and integrated modelling in MBR is critically reviewed. Hydrodynamic models are used at different scales and focus mainly on fouling and only little on system design/optimisation. Integrated models also focus on fouling although the ones...

  12. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani...

  13. Critical state model with anisotropic critical current density

    CERN Document Server

    Bhagwat, K V; Ravikumar, G

    2003-01-01

    Analytical solutions of Bean's critical state model with critical current density J_c being anisotropic are obtained for superconducting cylindrical samples of arbitrary cross section in a parallel geometry. We present a method for calculating the flux fronts and magnetization curves. Results are presented for cylinders with elliptical cross section with a specific form of the anisotropy. We find that over a certain range of the anisotropy parameter the flux fronts have shapes similar to those for an isotropic sample. However, in general, the presence of anisotropy significantly modifies the shape of the flux fronts. The field for full flux penetration also depends on the anisotropy parameter. The method is extended to the case of anisotropic J_c that also depends on the local field B, and magnetization hysteresis curves are presented for typical values of the anisotropy parameter for the case of |J_c| that decreases exponentially with |B|.

  14. Predictive equations for energy needs for the critically ill.

    Science.gov (United States)

    Walker, Renee N; Heuberger, Roschelle A

    2009-04-01

    Nutrition may affect clinical outcomes in critically ill patients, and providing either more or fewer calories than the patient needs can adversely affect outcomes. Calorie need fluctuates substantially over the course of critical illness, and nutrition delivery is often influenced by: the risk of refeeding syndrome; a hypocaloric feeding regimen; lack of feeding access; intolerance of feeding; and feeding-delay for procedures. Lean body mass is the strongest determinant of resting energy expenditure, but age, sex, medications, and metabolic stress also influence the calorie requirement. Indirect calorimetry is the accepted standard for determining calorie requirement, but is unavailable or unaffordable in many centers. Moreover, indirect calorimetry is not infallible and care must be taken when interpreting the results. In the absence of calorimetry, clinicians use equations and clinical judgment to estimate calorie need. We reviewed 7 equations (American College of Chest Physicians, Harris-Benedict, Ireton-Jones 1992 and 1997, Penn State 1998 and 2003, Swinamer 1990) and their prediction accuracy. Understanding an equation's reference population and using the equation with similar patients are essential for the equation to perform similarly. Prediction accuracy among equations is rarely within 10% of the measured energy expenditure; however, in the absence of indirect calorimetry, a prediction equation is the best alternative.
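
As a worked example of one of the reviewed equations, the sketch below evaluates the Harris-Benedict form with the coefficients commonly quoted for the original equations; these values, and any stress or activity factors applied in the ICU, should be verified against the primary sources before use.

```python
# Worked example of a resting-energy-expenditure prediction equation.
def harris_benedict(sex, weight_kg, height_cm, age_yr):
    """REE in kcal/day, original Harris-Benedict form (commonly quoted coefficients)."""
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

ree = harris_benedict("male", weight_kg=80, height_cm=178, age_yr=55)
print(f"predicted REE: {ree:.0f} kcal/day")   # before any stress-factor adjustment
```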

  15. The Interactive Effects of Drinking Motives, Age, and Self-Criticism in Predicting Hazardous Drinking.

    Science.gov (United States)

    Skinner, Kayla D; Veilleux, Jennifer C

    2016-08-23

    Individuals who disclose hazardous drinking often report strong motives to drink, which may occur to modulate views of the self. Investigating self-criticism tendencies in models of drinking motives may help explain who is more susceptible to drinking for internal or external reasons. As much of the research on drinking motives and alcohol use is conducted in young adult or college student samples, studying these relations in a wider age range is clearly needed. The current study examined the interactive relationship between drinking motives (internal: coping, enhancement; external: social, conformity), levels of self-criticism (internalized, comparative), and age to predict hazardous drinking. Participants (N = 427, mean age = 34.16 years, 54.8% female) who endorsed drinking within the last year completed an online study assessing these constructs. Results indicated internalized self-criticism and drinking to cope interacted to predict hazardous drinking for middle-aged adults. However, comparative self-criticism and conformity motives interacted to predict greater hazardous drinking for younger-aged adults. In addition, both social and conformity motives predicted less hazardous drinking for middle-aged adults high in comparative self-criticism. Interventions that target alcohol use could minimize coping motivations to drink while targeting comparative self-criticism in the context of social and conformity motives.

  16. Real time monitoring of reticle etch process tool to investigate and predict critical dimension performance

    Science.gov (United States)

    Deming, Rick; Yung, Karmen; Guglielmana, Mark; Bald, Dan; Baik, Kiho; Abboud, Frank

    2007-03-01

    As mask pattern feature sizes shrink the need for tighter control of factors affecting critical dimensions (CD) increases at all steps in the mask manufacturing process. To support this requirement Intel Mask Operation is expanding its process and equipment monitoring capability. We intend to better understand the factors affecting the process and enhance our ability to predict reticle health and critical dimension performance. This paper describes a methodology by which one can predict the contribution of the dry etch process equipment to overall CD performance. We describe the architecture used to collect critical process related information from various sources both internal and external to the process equipment and environment. In addition we discuss the method used to assess the significance of each parameter and to construct the statistical model used to generate the predictions. We further discuss the methodology used to turn this model into a functioning real time prediction of critical dimension performance. Further, these predictions will be used to modify the manufacturing decision support system to provide early detection for process excursion.

  17. Validity of Treadmill-Derived Critical Speed on Predicting 5000-Meter Track-Running Performance.

    Science.gov (United States)

    Nimmerichter, Alfred; Novak, Nina; Triska, Christoph; Prinz, Bernhard; Breese, Brynmor C

    2017-03-01

    Nimmerichter, A, Novak, N, Triska, C, Prinz, B, and Breese, BC. Validity of treadmill-derived critical speed on predicting 5,000-meter track-running performance. J Strength Cond Res 31(3): 706-714, 2017-To evaluate 3 models of critical speed (CS) for the prediction of 5,000-m running performance, 16 trained athletes completed an incremental test on a treadmill to determine maximal aerobic speed (MAS) and 3 randomly ordered runs to exhaustion at the Δ70% intensity, at 110% and 98% of MAS. Critical speed and the distance covered above CS (D') were calculated using the hyperbolic speed-time (HYP), the linear distance-time (LIN), and the linear speed inverse-time model (INV). Five thousand meter performance was determined on a 400-m running track. Individual predictions of 5,000-m running time (t = [5,000-D']/CS) and speed (s = D'/t + CS) were calculated across the 3 models in addition to multiple regression analyses. Prediction accuracy was assessed with the standard error of estimate (SEE) from linear regression analysis and the mean difference expressed in units of measurement and coefficient of variation (%). Five thousand meter running performance (speed: 4.29 ± 0.39 m·s-1; time: 1,176 ± 117 seconds) was significantly better than the predictions from all 3 models (p distances of 5,000 m.
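
The linear distance-time (LIN) formulation and the prediction formula quoted above, t = (5,000 - D')/CS, can be sketched as follows. The three exhaustion trials are made-up numbers used only to show the fitting and prediction steps, not the study's data.

```python
# Sketch of the distance-time critical-speed model and the 5,000-m prediction.
import numpy as np

# time to exhaustion (s) and distance covered (m) in three constant-speed runs
t = np.array([150.0, 300.0, 720.0])
d = np.array([810.0, 1430.0, 3090.0])

# linear distance-time model: d = D' + CS * t  ->  fit a straight line
CS, D_prime = np.polyfit(t, d, 1)                 # slope = CS, intercept = D'
print(f"CS = {CS:.2f} m/s, D' = {D_prime:.0f} m")

t_5000 = (5000.0 - D_prime) / CS                  # predicted 5,000-m time
print(f"predicted 5,000-m time: {t_5000/60:.1f} min")
```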

  18. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as the estimates of population average confidence scores. The latter can be used...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  19. Modelling, controlling, predicting blackouts

    CERN Document Server

    Wang, Chengwei; Baptista, Murilo S

    2016-01-01

    The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, playing havoc with human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another one for smart grids. The control strategie...

  20. Covariance matrices for use in criticality safety predictability studies

    Energy Technology Data Exchange (ETDEWEB)

    Derrien, H.; Larson, N.M.; Leal, L.C.

    1997-09-01

    Criticality predictability applications require as input the best available information on fissile and other nuclides. In recent years important work has been performed in the analysis of neutron transmission and cross-section data for fissile nuclei in the resonance region by using the computer code SAMMY. The code uses Bayes method (a form of generalized least squares) for sequential analyses of several sets of experimental data. Values for Reich-Moore resonance parameters, their covariances, and the derivatives with respect to the adjusted parameters (data sensitivities) are obtained. In general, the parameter file contains several thousand values and the dimension of the covariance matrices is correspondingly large. These matrices are not reported in the current evaluated data files due to their large dimensions and to the inadequacy of the file formats. The present work has two goals: the first is to calculate the covariances of group-averaged cross sections from the covariance files generated by SAMMY, because these can be more readily utilized in criticality predictability calculations. The second goal is to propose a more practical interface between SAMMY and the evaluated files. Examples are given for 235U in the popular 199- and 238-group structures, using the latest ORNL evaluation of the 235U resonance parameters.
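
The propagation step implied above (from SAMMY's parameter covariances and sensitivities to group-averaged cross-section covariances) follows the standard sandwich rule C_g = S C_p S^T. The sketch below shows it with arbitrary matrix sizes and toy values; the real matrices involve thousands of resonance parameters.

```python
# Toy demonstration of covariance propagation via the sandwich rule.
import numpy as np

rng = np.random.default_rng(4)
n_par, n_grp = 50, 8                         # resonance parameters, energy groups (toy sizes)

# toy parameter covariance (symmetric positive definite) and sensitivity matrix
A = rng.normal(size=(n_par, n_par))
C_p = A @ A.T / n_par                        # covariance of resonance parameters
S = rng.normal(scale=0.1, size=(n_grp, n_par))   # d(sigma_g)/d(parameter)

C_g = S @ C_p @ S.T                          # covariance of group cross sections
std = np.sqrt(np.diag(C_g))
corr = C_g / np.outer(std, std)              # group-to-group correlation matrix
print("std per group (toy units):", np.round(std, 3))
print("correlation (first 3 groups):\n", np.round(corr[:3, :3], 2))
```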

  1. Recognizing and predicting movement effects: identifying critical movement features.

    Science.gov (United States)

    Cañal-Bruland, Rouwen; Williams, A Mark

    2010-01-01

    It is not clear whether the critical features used to discriminate movements are identical to those involved in predicting the same movement's effects and consequently, whether the mechanisms underlying recognition and anticipation differ. We examined whether people rely on different kinematic information when required to recognize differences in the movement pattern in comparison to when they have to anticipate the outcome of these same movements. Naïve participants were presented with paired presentations of point-light animated tennis shots that ended at racket-ball contact. We instructed them either to judge whether the movements observed were the same or different or to predict shot direction (left vs. right). In addition, we locally manipulated the kinematics of point-light figures in an effort to identify the critical features used when making recognition and anticipation judgments. It appears that observers rely on different sources of information when required to recognize movement differences compared to when they need to anticipate the outcome of the same observed movements. Findings are discussed with reference to recent ideas focusing on the role of perceptual and motor resonance in perceptual judgments.

  2. Can Student Nurse Critical Thinking Be Predicted from Perceptions of Structural Empowerment within the Undergraduate, Pre-Licensure Learning Environment?

    Science.gov (United States)

    Caswell-Moore, Shelley P.

    2013-01-01

    The purpose of this study was to test a model using Rosabeth Kanter's theory (1977; 1993) of structural empowerment to determine if this model can predict student nurses' level of critical thinking. Major goals of nursing education are to cultivate graduates who can think critically with a keen sense of clinical judgment, and who can perform…

  3. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. A New Energy-Critical Plane Damage Parameter for Multiaxial Fatigue Life Prediction of Turbine Blades

    Directory of Open Access Journals (Sweden)

    Zheng-Yong Yu

    2017-05-01

    Full Text Available As one of the fracture-critical components of an aircraft engine, accurate life prediction of a turbine blade to disk attachment is significant for ensuring the engine structural integrity and reliability. Fatigue failure of a turbine blade is often caused under multiaxial cyclic loadings at high temperatures. In this paper, considering different failure types, a new energy-critical plane damage parameter is proposed for multiaxial fatigue life prediction, and no extra fitted material constants will be needed for practical applications. Moreover, three multiaxial models with maximum damage parameters on the critical plane are evaluated under tension-compression and tension-torsion loadings. Experimental data of GH4169 under proportional and non-proportional fatigue loadings and a case study of a turbine disk-blade contact system are introduced for model validation. Results show that model predictions by the Wang-Brown (WB) and Fatemi-Socie (FS) models with maximum damage parameters are conservative and acceptable. For the turbine disk-blade contact system, both the proposed damage parameters and the Smith-Watson-Topper (SWT) model show reasonably acceptable correlations with its field number of flight cycles. However, life estimations of the turbine blade reveal that the definition of the maximum damage parameter is not reasonable for the WB model but effective for both the FS and SWT models.
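
For context on the comparison models named above, the sketch below evaluates textbook-style forms of the Fatemi-Socie (FS) and Smith-Watson-Topper (SWT) critical-plane damage parameters. The input amplitudes and the normal-stress sensitivity constant are placeholders, and the life equation each parameter feeds is omitted; this is not the paper's new parameter.

```python
# Hedged sketch of two classical critical-plane damage parameters.
def fatemi_socie(d_gamma_max, sigma_n_max, sigma_yield, k=0.5):
    """FS parameter: (max shear strain amplitude) * (1 + k * sigma_n,max / sigma_y)."""
    return (d_gamma_max / 2) * (1 + k * sigma_n_max / sigma_yield)

def smith_watson_topper(d_eps_n_max, sigma_n_max):
    """SWT parameter: sigma_n,max * (max normal strain amplitude)."""
    return sigma_n_max * d_eps_n_max / 2

# placeholder amplitudes (strain ranges dimensionless, stresses in MPa)
print("FS  =", round(fatemi_socie(d_gamma_max=0.012, sigma_n_max=420.0, sigma_yield=1000.0), 5))
print("SWT =", round(smith_watson_topper(d_eps_n_max=0.008, sigma_n_max=650.0), 3))
```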

  5. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    are calculated using on-line measurements of power production as well as HIRLAM predictions as input thus taking advantage of the auto-correlation, which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production......The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM prediction to the wind farm. The features of the terrain, especially the topography, influence...... and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system e.g. caused by the annual variations...

  6. Critical gradient response of the Weiland model

    Science.gov (United States)

    Asp, E.; Weiland, J.; Garbet, X.; Parail, V.; Strand, P.; JET EFDA contributors, the

    2007-08-01

    The success the Weiland model has had in reproducing modulation experiments prompted this in-depth investigation into its behaviour as a critical gradient model (CGM). The critical gradient properties of the Weiland model are examined analytically and numerically and compared with the empirical CGM commonly used in experiment. A simplified Weiland CGM is derived in which the height-above-threshold dependence is not necessarily linear. Simultaneously, the validity of the empirical CGM is examined. It is shown that an effective threshold, which is higher than the instability threshold, can be obtained if pinches influence the diffusivity.
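
The empirical critical-gradient-model form referred to above can be sketched as a diffusivity that stays at a background level below a threshold gradient and grows with the height above threshold, with an adjustable (not necessarily linear) exponent. The functional form and the numbers below are generic illustrations, not the paper's fitted model.

```python
# Generic critical-gradient-model sketch: threshold plus height-above-threshold response.
import numpy as np

chi_0, chi_s, kappa_c = 0.5, 1.2, 4.0     # background chi, stiffness, threshold (assumed)

def chi_cgm(r_over_lt, exponent=1.0):
    """Diffusivity vs. normalized temperature gradient R/L_T (Heaviside-type onset)."""
    above = np.maximum(r_over_lt - kappa_c, 0.0)
    return chi_0 + chi_s * above ** exponent   # exponent = 1 is the linear empirical CGM

for g in (2.0, 4.0, 6.0, 8.0):
    print(f"R/L_T = {g:.1f}  ->  chi = {chi_cgm(g):.2f} (linear), "
          f"{chi_cgm(g, exponent=1.5):.2f} (superlinear)")
```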

  7. A critical review of clarifier modelling

    DEFF Research Database (Denmark)

    Plósz, Benedek; Nopens, Ingmar; Rieger, Leiv;

    This outline paper aims to provide a critical review of secondary settling tank (SST) modelling approaches used in current wastewater engineering and develop tools not yet applied in practice. We address the development of different tier models and experimental techniques in the field with a part...

  8. Can the theory of critical distances predict the failure of shape memory alloys?

    Science.gov (United States)

    Kasiri, Saeid; Kelly, Daniel J; Taylor, David

    2011-06-01

    Components made from shape memory alloys (SMAs) such as nitinol often fail from stress concentrations and defects such as notches and cracks. It is shown here for the first time that these failures can be predicted using the theory of critical distances (TCD), a method which has previously been used to study fracture and fatigue in other materials. The TCD uses the stress at a certain distance ahead of the notch to predict the failure of the material due to the stress concentration. The critical distance is believed to be a material property which is related to the microstructure of the material. The TCD is simply applied to a linear model of the material without the need to model the complication of its non-linear behaviour. The non-linear behaviour of the material at fracture is represented in the critical stress. The effect of notches and short cracks on the fracture of SMA NiTi was studied by analysing experimental data from the literature. Using a finite element model with elastic material behaviour, it is shown that the TCD can predict the effect of crack length and notch geometry on the critical stress and stress intensity for fracture, with prediction errors of less than 5%. The value of the critical distance obtained for this material was L = 90 μm; this may be related to its grain size. The effects of short cracks on stress intensity were studied. It was shown that the same value of the critical distance (L = 90 μm) could estimate the experimental data for both notches and short cracks.
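
The point-method variant of the TCD described above reduces to reading the linear-elastic stress a fixed distance ahead of the notch and comparing it with a critical stress. The sketch below does this with L = 90 μm taken from the abstract; the stress-distance curve and the critical stress are stand-ins for a finite-element result and are only illustrative.

```python
# Sketch of the TCD "point method" on a stand-in elastic stress profile.
import numpy as np

L = 90e-6                 # critical distance (m), value quoted in the abstract
sigma_0 = 1200e6          # assumed critical stress (Pa), illustrative only

# stand-in for an FE stress profile ahead of the notch root (distance r in m)
r = np.linspace(1e-6, 1e-3, 2000)
sigma = 400e6 * (1 + 2.5 / np.sqrt(r / 50e-6))   # made-up notch-root field

sigma_at_L2 = np.interp(L / 2, r, sigma)          # stress at r = L/2 (point method)
print(f"stress at L/2: {sigma_at_L2/1e6:.0f} MPa")
print("failure predicted" if sigma_at_L2 >= sigma_0 else "no failure predicted")
```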

  9. On the criticality of inferred models

    CERN Document Server

    Mastromatteo, Iacopo

    2011-01-01

    Advanced inference techniques allow one to reconstruct the pattern of interaction from high dimensional data sets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to a phase transition. On one side, we show that the reparameterization-invariant metric in the space of probability distributions of these models (the Fisher information) is directly related to the model's susceptibility. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. On the other, this region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time-scales naturally yield models which are close to criticality.

  10. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM prediction to the wind farm. The features of the terrain, especially the topography, influence...

  11. Off-Critical Logarithmic Minimal Models

    CERN Document Server

    Pearce, Paul A

    2012-01-01

    We consider the integrable minimal models ${\\cal M}(m,m';t)$, corresponding to the $\\varphi_{1,3}$ perturbation off-criticality, in the {\\it logarithmic limit\\,} $m, m'\\to\\infty$, $m/m'\\to p/p'$ where $p, p'$ are coprime and the limit is taken through coprime values of $m,m'$. We view these off-critical minimal models ${\\cal M}(m,m';t)$ as the continuum scaling limit of the Forrester-Baxter Restricted Solid-On-Solid (RSOS) models on the square lattice. Applying Corner Transfer Matrices to the Forrester-Baxter RSOS models in Regime III, we argue that taking first the thermodynamic limit and second the {\\it logarithmic limit\\,} yields off-critical logarithmic minimal models ${\\cal LM}(p,p';t)$ corresponding to the $\\varphi_{1,3}$ perturbation of the critical logarithmic minimal models ${\\cal LM}(p,p')$. Specifically, in accord with the Kyoto correspondence principle, we show that the logarithmic limit of the one-dimensional configurational sums yields finitized quasi-rational characters of the Kac representatio...

  12. Prediction of Critical Heat Flux for Saturated Flow Boiling Water in Vertical Narrow Rectangular Channels

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Gil Sik; Jeong, Yong Hun [KAIST, Daejeon (Korea, Republic of); Chang, Soon Heung [Handong Univ., Pohang (Korea, Republic of)

    2015-12-15

    There is an increasing need to understand the thermal-hydraulic phenomena, including the critical heat flux (CHF), in narrow rectangular channels and to consider these in system design. The CHF mechanism under a saturated flow boiling condition involves the depletion of the liquid film of an annular flow. To predict this type of CHF, the previous representative liquid film dryout models (LFD models) were studied and their shortcomings were reviewed, including the assumption that the void fraction or quality is constant at the boundary condition for the onset of annular flow (OAF). A new LFD model was proposed based on recent constitutive correlations for the droplet deposition rate and entrainment rate. In addition, this LFD model was applied to predict the CHF in uniformly heated vertical narrow rectangular channels. The predicted CHF showed good agreement with 284 experimental data points, with a mean absolute error of 18.1% and a root mean square error of 22.9%.
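
    The liquid-film dryout bookkeeping that LFD models of this kind share can be sketched in a few lines: march the annular-flow film flow rate along the heated length with deposition, entrainment and evaporation source terms, and declare CHF if the film dries out before the exit. The deposition and entrainment values below are constant placeholders standing in for the constitutive correlations used in the paper, and all numbers are illustrative.

    ```python
    import numpy as np

    def film_dryout(w_f0, q_w, h_fg, per_h, length, deposition, entrainment, n=2000):
        """March the annular-flow liquid film flow rate W_f (kg/s) from the onset of
        annular flow to the channel exit.  Dryout (CHF) is declared if W_f reaches
        zero before the exit.  deposition/entrainment are mass fluxes in kg/m^2/s."""
        dz = length / n
        w_f = w_f0
        for i in range(n):
            # deposition adds liquid to the film, entrainment and evaporation remove it
            dw = per_h * (deposition - entrainment) - q_w * per_h / h_fg
            w_f += dw * dz
            if w_f <= 0.0:
                return True, (i + 1) * dz      # dryout, axial location
        return False, length

    # Illustrative numbers only (placeholder correlations treated as constants).
    dry, z = film_dryout(w_f0=0.02, q_w=1.2e6, h_fg=1.5e6, per_h=0.03,
                         length=3.6, deposition=0.05, entrainment=0.02)
    print("dryout" if dry else "no dryout", f"at z = {z:.2f} m")
    ```

    In an actual LFD calculation the deposition and entrainment rates vary with local flow conditions and the march starts from the OAF boundary condition discussed in the abstract; the sketch only shows the mass-balance structure.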

  13. A Multiaxial Fatigue Life Prediction Model with Shear Form Based on the Critical Plane Approach

    Institute of Scientific and Technical Information of China (English)

    刘嘉; 李静; 张忠平

    2012-01-01

    Based on the critical plane approach, a new multiaxial fatigue damage parameter with shear form is proposed by means of the von Mises criterion. The proposed damage parameter is suitable for both proportional and non-proportional loading. It considers the maximum shear strain range and the normal strain range on the critical plane, and the effect of non-proportional cyclic hardening on fatigue life is taken into account by an introduced stress-correlated factor. Because the parameter contains no empirical constants, it is convenient for engineering application. The multiaxial fatigue lives of the considered materials (1045HR steel, S45C steel, Inconel 718 and 16MnR steel) predicted with the proposed model are in good agreement with the experimental results.
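
    The critical plane search that underlies damage parameters of this kind can be illustrated with a short sketch. It scans in-plane candidate planes for a thin-walled tube under combined axial-torsional straining and reports the plane with the largest engineering shear strain range together with the normal strain range on it. The paper's specific shear-form damage parameter and its stress-correlated factor are not reproduced, and the strain histories below are illustrative assumptions.

    ```python
    import numpy as np

    def critical_plane_search(eps_x, eps_y, gam_xy, n_planes=180):
        """Scan in-plane candidate planes (normal at angle theta to the x axis) and
        return the plane with the largest engineering shear strain range, together
        with the normal strain range on that plane.  Inputs are strain histories
        (1-D arrays over one load cycle)."""
        thetas = np.linspace(0.0, np.pi, n_planes, endpoint=False)
        best = None
        for th in thetas:
            c2, s2 = np.cos(2 * th), np.sin(2 * th)
            eps_n = 0.5 * (eps_x + eps_y) + 0.5 * (eps_x - eps_y) * c2 + 0.5 * gam_xy * s2
            gam = -(eps_x - eps_y) * s2 + gam_xy * c2      # engineering shear strain
            d_gam = gam.max() - gam.min()
            d_eps = eps_n.max() - eps_n.min()
            if best is None or d_gam > best[1]:
                best = (th, d_gam, d_eps)
        return best   # (theta, shear strain range, normal strain range)

    # 90-degrees out-of-phase axial/torsional cycle (non-proportional), illustrative only
    t = np.linspace(0, 2 * np.pi, 361)
    eps_x = 0.004 * np.sin(t)
    eps_y = -0.3 * eps_x                      # assumed transverse contraction
    gam_xy = 0.006 * np.sin(t - np.pi / 2)
    theta, d_gamma, d_eps_n = critical_plane_search(eps_x, eps_y, gam_xy)
    print(f"critical plane at {np.degrees(theta):.1f} deg, "
          f"shear strain range {d_gamma:.4f}, normal strain range {d_eps_n:.4f}")
    ```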

  14. An improved statistical analysis for predicting the critical temperature and critical density with Gibbs ensemble Monte Carlo simulation.

    Science.gov (United States)

    Messerly, Richard A; Rowley, Richard L; Knotts, Thomas A; Wilding, W Vincent

    2015-09-14

    A rigorous statistical analysis is presented for Gibbs ensemble Monte Carlo simulations. This analysis reduces the uncertainty in the critical point estimate when compared with traditional methods found in the literature. Two different improvements are recommended due to the following results. First, the traditional propagation of error approach for estimating the standard deviations used in regression improperly weighs the terms in the objective function due to the inherent interdependence of the vapor and liquid densities. For this reason, an error model is developed to predict the standard deviations. Second, and most importantly, a rigorous algorithm for nonlinear regression is compared to the traditional approach of linearizing the equations and propagating the error in the slope and the intercept. The traditional regression approach can yield nonphysical confidence intervals for the critical constants. By contrast, the rigorous algorithm restricts the confidence regions to values that are physically sensible. To demonstrate the effect of these conclusions, a case study is performed to enhance the reliability of molecular simulations to resolve the n-alkane family trend for the critical temperature and critical density.
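
    The nonlinear regression advocated in the abstract amounts to fitting the coexistence densities simultaneously with the density scaling law and the law of rectilinear diameters. The sketch below shows only that step with scipy, using made-up density data and a fixed Ising exponent; the paper's error model and its treatment of the confidence regions are not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    BETA = 0.326   # 3-D Ising critical exponent, treated as fixed here

    def coexistence_densities(params, T):
        """Liquid/vapour densities from the scaling law plus rectilinear diameters."""
        Tc, rho_c, A, B = params
        dT = np.clip(Tc - T, 0.0, None)
        diam = rho_c + A * dT              # law of rectilinear diameters
        half = 0.5 * B * dT**BETA          # order parameter (density difference)
        return diam + half, diam - half    # rho_liquid, rho_vapour

    def residuals(params, T, rho_l, rho_v):
        rl, rv = coexistence_densities(params, T)
        return np.concatenate([rl - rho_l, rv - rho_v])

    # Illustrative GEMC-like data (T in K, densities in kg/m^3); replace with simulation output.
    T     = np.array([300., 320., 340., 360., 380.])
    rho_l = np.array([545., 520., 492., 458., 412.])
    rho_v = np.array([ 12.,  20.,  32.,  52.,  88.])

    guess = [420., 230., 0.5, 60.]          # Tc, rho_c, A, B
    fit = least_squares(residuals, guess, args=(T, rho_l, rho_v))
    Tc, rho_c, A, B = fit.x
    print(f"Tc ~ {Tc:.1f} K, rho_c ~ {rho_c:.1f} kg/m^3")
    ```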

  15. Critical assessment of methods of protein structure prediction (CASP)-round IX

    KAUST Repository

    Moult, John

    2011-01-01

    This article is an introduction to the special issue of the journal PROTEINS, dedicated to the ninth Critical Assessment of Structure Prediction (CASP) experiment to assess the state of the art in protein structure modeling. The article describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. Methods for modeling protein structure continue to advance, although at a more modest pace than in the early CASP experiments. CASP developments of note are indications of improvement in model accuracy for some classes of target, an improved ability to choose the most accurate of a set of generated models, and evidence of improvement in accuracy for short "new fold" models. In addition, a new analysis of regions of models not derivable from the most obvious template structure has revealed better performance than expected.

  16. Multiaxial Fatigue Life Prediction Model Based on the Critical Plane Approach

    Institute of Scientific and Technical Information of China (English)

    周维; 刘义伦; 李松柏; 杨大炼; 陶洁

    2015-01-01

    The additional cyclic hardening caused by non-proportional loading reduces fatigue life under multiaxial cyclic stress. To address this problem, a new multiaxial fatigue damage parameter is proposed for thin-walled tubular specimens by introducing a new effective cycle variable, the equivalent stress on the critical plane, based on an analysis of how the shear strain and normal strain on the critical plane vary with the phase angle. The parameter contains no empirical material constants, which makes it convenient for engineering application. Comparison with multiaxial fatigue data for aluminium alloy 7075-T651 shows that the proposed life prediction model has better prediction accuracy and is suitable for both proportional and non-proportional loading conditions.

  17. Predictive models of forest dynamics.

    Science.gov (United States)

    Purves, Drew; Pacala, Stephen

    2008-06-13

    Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.

  18. Lung Injury Prediction Score Is Useful in Predicting Acute Respiratory Distress Syndrome and Mortality in Surgical Critical Care Patients

    Directory of Open Access Journals (Sweden)

    Zachary M. Bauman

    2015-01-01

    Full Text Available Background. The lung injury prediction score (LIPS) is valuable for early recognition of ventilated patients at high risk for developing acute respiratory distress syndrome (ARDS). This study analyzes the value of LIPS in predicting ARDS and mortality among ventilated surgical patients. Methods. IRB-approved, prospective observational study including all ventilated patients admitted to the surgical intensive care unit at a single tertiary center over 6 months. ARDS was defined using the Berlin criteria. LIPS were calculated for all patients and analyzed. Logistic regression models evaluated the ability of LIPS to predict development of ARDS and mortality. A receiver operating characteristic (ROC) curve demonstrated the optimal LIPS value to statistically predict development of ARDS. Results. 268 ventilated patients were observed; 141 developed ARDS and 127 did not. The average LIPS for patients who developed ARDS was 8.8±2.8 versus 5.4±2.8 for those who did not (p<0.001). An ROC area under the curve of 0.79 demonstrates that LIPS is statistically powerful for predicting ARDS development. Furthermore, for every 1-unit increase in LIPS, the odds of developing ARDS increase by 1.50 (p<0.001) and the odds of ICU mortality increase by 1.22 (p<0.001). Conclusion. LIPS is reliable for predicting development of ARDS and predicting mortality in critically ill surgical patients.
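
    The statistics reported here (odds ratio per LIPS unit, ROC area under the curve) come from a univariable logistic regression, which is easy to reproduce in outline. The snippet below uses synthetic data in place of the study cohort, so the numbers it prints are illustrative only and the assumed dose-response coefficient is not the study's.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the cohort: LIPS scores and ARDS outcomes.
    n = 268
    lips = np.clip(rng.normal(7, 3, n), 0, 20)
    p_ards = 1 / (1 + np.exp(-(0.4 * lips - 3.0)))      # assumed underlying relation
    ards = rng.binomial(1, p_ards)

    model = LogisticRegression().fit(lips.reshape(-1, 1), ards)
    odds_ratio = np.exp(model.coef_[0][0])               # odds increase per 1-unit LIPS
    auc = roc_auc_score(ards, model.predict_proba(lips.reshape(-1, 1))[:, 1])
    print(f"odds ratio per LIPS unit: {odds_ratio:.2f}, ROC AUC: {auc:.2f}")
    ```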

  19. A Novel Method for the Prediction of Critical Inclusion Size Leading to Fatigue Failure

    Science.gov (United States)

    Saberifar, S.; Mashreghi, A. R.

    2012-06-01

    The fatigue behavior of two commercial 30MnVS6 steels with similar microstructure and mechanical properties, containing inclusions of different sizes, was studied in the 10^7-cycle fatigue regime. Scanning electron microscopy (SEM) investigations of the fracture surfaces revealed that nonmetallic inclusions are the main sources of fatigue crack initiation. Stress intensity factors calculated according to Murakami's model were found to be suitable for the assessment of fatigue behavior. In this article, a new method is proposed for the prediction of the critical inclusion size using Murakami's model. According to this method, a critical stress intensity factor was determined for the estimation of the critical inclusion size causing fatigue failure.
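
    A hedged sketch of the inversion described in the abstract: with a Murakami-type expression for the defect stress intensity, ΔK = C·Δσ·√(π·√area), a critical ΔK can be solved for the largest tolerable inclusion size. The coefficient C (about 0.5 for interior and 0.65 for surface defects) and all numerical values below are common textbook assumptions, not the constants calibrated in the paper.

    ```python
    import numpy as np

    def delta_k(stress_range_mpa, sqrt_area_um, c=0.5):
        """Murakami-type stress intensity range (MPa*sqrt(m)) for a small defect
        characterised by the square root of its projected area (in micrometres).
        c ~ 0.5 for interior defects, ~ 0.65 for surface defects."""
        sqrt_area_m = sqrt_area_um * 1e-6
        return c * stress_range_mpa * np.sqrt(np.pi * sqrt_area_m)

    def critical_sqrt_area(stress_range_mpa, delta_k_crit, c=0.5):
        """Largest sqrt(area) (micrometres) an inclusion may have before the defect
        stress intensity exceeds the assumed critical value."""
        sqrt_area_m = (delta_k_crit / (c * stress_range_mpa))**2 / np.pi
        return sqrt_area_m * 1e6

    # Illustrative numbers only; the stress range and critical delta K are placeholders.
    d_sigma = 450.0                      # MPa
    print(f"dK for sqrt(area) = 100 um: {delta_k(d_sigma, 100.0):.2f} MPa*sqrt(m)")
    print(f"critical sqrt(area) at dK_crit = 5 MPa*sqrt(m): "
          f"{critical_sqrt_area(d_sigma, 5.0):.0f} um")
    ```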

  20. A Holographic Model For Quantum Critical Responses

    CERN Document Server

    Myers, Robert C; Witczak-Krempa, William

    2016-01-01

    We analyze the dynamical response functions of strongly interacting quantum critical states described by conformal field theories (CFTs). We construct a self-consistent holographic model that incorporates the relevant scalar operator driving the quantum critical phase transition. Focusing on the finite temperature dynamical conductivity $\sigma(\omega,T)$, we study its dependence on our model parameters, notably the scaling dimension of the relevant operator. It is found that the conductivity is well-approximated by a simple ansatz proposed by Katz et al [1] for a wide range of parameters. We further dissect the conductivity at large frequencies $\omega \gg T$ using the operator product expansion, and show how it reveals the spectrum of our model CFT. Our results provide a physically-constrained framework to study the analytic continuation of quantum Monte Carlo data, as we illustrate using the O(2) Wilson-Fisher CFT. Finally, we comment on the variation of the conductivity as we tune away from the quantum cri...

  1. Perceived criticism and marital adjustment predict depressive symptoms in a community sample.

    Science.gov (United States)

    Peterson-Post, Kristina M; Rhoades, Galena K; Stanley, Scott M; Markman, Howard J

    2014-07-01

    Depressive symptoms are related to a host of negative individual and family outcomes; therefore, it is important to establish risk factors for depressive symptoms to design prevention efforts. Following studies in the marital and psychiatric literatures regarding marital factors associated with depression, we tested two potential predictors of depressive symptoms: marital adjustment and perceived spousal criticism. We assessed 249 spouses from 132 married couples from the community during their first year of marriage and at three time points over the next 10 years. Initial marital adjustment significantly predicted depressive symptoms for husbands and wives at all follow-ups. Further, perceived criticism significantly predicted depressive symptoms at the 5- and 10-year follow-ups. However, at the 1-year follow-up, this association was significant for men but not for women. Finally, a model where the contributions of marital adjustment and perceived criticism were tested together suggested that both play independent roles in predicting future depressive symptoms. These findings highlight the potential importance of increasing marital adjustment and reducing perceived criticism at the outset of marriage as a way to reduce depressive symptoms during the course of marriage.

  2. Critical review of glass performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process.

  3. DKIST Polarization Modeling and Performance Predictions

    Science.gov (United States)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time dependent optical configurations and substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime sky based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS to within the few-percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6 month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration.

  4. The Grand Tack model: a critical review

    CERN Document Server

    Raymond, Sean N

    2014-01-01

    The `Grand Tack' model proposes that the inner Solar System was sculpted by the giant planets' orbital migration in the gaseous protoplanetary disk. Jupiter first migrated inward then Jupiter and Saturn migrated back outward together. If Jupiter's turnaround or "tack" point was at ~1.5 AU the inner disk of terrestrial building blocks would have been truncated at ~1 AU, naturally producing the terrestrial planets' masses and spacing. During the gas giants' migration the asteroid belt is severely depleted but repopulated by distinct planetesimal reservoirs that can be associated with the present-day S and C types. The giant planets' orbits are consistent with the later evolution of the outer Solar System. Here we confront common criticisms of the Grand Tack model. We show that some uncertainties remain regarding the Tack mechanism itself; the most critical unknown is the timing and rate of gas accretion onto Saturn and Jupiter. Current isotopic and compositional measurements of Solar System bodies -- including ...

  5. QCD Critical Point in a Quasiparticle Model

    CERN Document Server

    Srivastava, P K; Singh, C P

    2010-01-01

    Recent theoretical investigations have unveiled a rich structure in the quantum chromodynamics (QCD) phase diagram, which consists of quark gluon plasma (QGP) and hadronic phases but also supports the existence of a cross-over transition ending at a critical end point (CEP). We find too large a variation in the determination of the coordinates of the CEP in the temperature (T), baryon chemical potential ($\mu_{B}$) plane and, therefore, its identification in the current heavy-ion experiments becomes debatable. Here we use an equation of state (EOS) for a deconfined QGP using a thermodynamically consistent quasiparticle model involving quarks and gluons having thermal masses. We further use a thermodynamically consistent excluded volume model for the hadron gas (HG) which was recently proposed by us. Using these equations of state, a first order deconfining phase transition is constructed using Gibbs' criteria. This leads to an interesting finding that the phase transition line ends at a critical point (CEP) be...

  6. Prediction of aqueous solubility, vapor pressure and critical micelle concentration for aquatic partitioning of perfluorinated chemicals.

    Science.gov (United States)

    Bhhatarai, Barun; Gramatica, Paola

    2011-10-01

    The majority of perfluorinated chemicals (PFCs) pose an increasing risk to biota and the environment due to their physicochemical stability, wide transport in the environment and difficulty in biodegradation. It is necessary to identify and prioritize these harmful PFCs and to characterize the physicochemical properties that govern the solubility, distribution and fate of these chemicals in an aquatic ecosystem. Therefore, available experimental data (10-35 compounds) for three important properties: aqueous solubility (AqS), vapor pressure (VP) and critical micelle concentration (CMC) of per- and polyfluorinated compounds were collected for quantitative structure-property relationship (QSPR) modeling. Simple and robust models based on theoretical molecular descriptors were developed and externally validated for predictivity. Model predictions for selected PFCs were compared with available experimental data and other published in silico predictions. The structural applicability domains (AD) of the models were verified on a bigger data set of 221 compounds. The predicted properties of the chemicals that are within the AD are reliable, and they help to reduce the wide data gap that exists. Moreover, the predictions of AqS, VP, and CMC of the most common PFCs were evaluated to understand aquatic partitioning and to derive a relation with the available experimental data on bioconcentration factor (BCF).

  7. A Mechanistic Approach for the Prediction of Critical Power in BWR Fuel Bundles

    Science.gov (United States)

    Chandraker, Dinesh Kumar; Vijayan, Pallipattu Krishnan; Sinha, Ratan Kumar; Aritomi, Masanori

    The critical power corresponding to the Critical Heat Flux (CHF) or dryout condition is an important design parameter for the evaluation of safety margins in a nuclear fuel bundle. The empirical approaches for the prediction of CHF in a rod bundle are highly geometry-specific and proprietary in nature. Critical power experiments are very expensive and technically challenging owing to the stringent simulation requirements for rod bundle tests involving radial and axial power profiles. In view of this, the mechanistic approach has gained momentum in the thermal-hydraulic community. Liquid Film Dryout (LFD) in an annular flow is the mechanism of CHF under BWR conditions, and dryout modeling has been found to predict the CHF quite accurately for a tubular geometry. The successful extension of the mechanistic dryout model to rod bundles is vital for the evaluation of critical power in the rod bundle. The present work proposes a uniform film flow approach around the rod: the individual films of a subchannel bounded by rods with different heat fluxes, and hence different film flow rates around a rod, are analyzed, and the varying film flow rates of a rod are then redistributed to a single uniform film flow rate, since the liquid film has been found to have a strong tendency to be uniform around the rod. The FIDOM-Rod code, developed for dryout prediction in BWR assemblies, provides a detailed solution of the multiple liquid films in a subchannel. The uniform film flow rate approach simplifies the liquid film cross-flow modeling and was found to provide dryout predictions with good accuracy when compared with the experimental data of 16-, 19- and 37-rod bundles under BWR conditions. The critical power has been predicted for a newly designed 54-rod bundle of the Advanced Heavy Water Reactor (AHWR). The selected constitutive models for the droplet entrainment and deposition rates validated for the dryout in tube were

  8. Critical properties of random Potts models

    Science.gov (United States)

    Kinzel, Wolfgang; Domany, Eytan

    1981-04-01

    The critical properties of Potts models with random bonds are considered in two dimensions. A position-space renormalization-group procedure, based on the Migdal-Kadanoff method, is developed. While all previous position-space calculations satisfied the Harris criterion and the resulting scaling relation only approximately, we found conditions under which these relations are exactly satisfied, and constructed our renormalization-group procedure accordingly. Numerical results for phase diagrams and thermodynamic functions for various random-bond Potts models are presented. In addition, some exact results obtained using a duality transformation, as well as a heuristic derivation of scaling properties that correspond to the percolation problem, are given.

  9. Assessment of ASSERT-PV for prediction of critical heat flux in CANDU bundles

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Y.F., E-mail: raoy@aecl.ca; Cheng, Z., E-mail: chengz@aecl.ca; Waddington, G.M., E-mail: waddingg@aecl.ca

    2014-09-15

    Highlights: • Assessment of the new Canadian subchannel code ASSERT-PV 3.2 for CHF prediction. • CANDU 28-, 37- and 43-element bundle CHF experiments. • Prediction improvement of ASSERT-PV 3.2 over previous code versions. • Sensitivity study of the effect of CHF model options. - Abstract: Atomic Energy of Canada Limited (AECL) has developed the subchannel thermalhydraulics code ASSERT-PV for the Canadian nuclear industry. The recently released ASSERT-PV 3.2 provides enhanced models for improved predictions of flow distribution, critical heat flux (CHF), and post-dryout (PDO) heat transfer in horizontal CANDU fuel channels. This paper presents results of an assessment of the new code version against five full-scale CANDU bundle experiments conducted in the 1990s and in 2009 by Stern Laboratories (SL), using 28-, 37- and 43-element (CANFLEX) bundles. A total of 15 CHF test series with varying pressure-tube creep and/or bearing-pad height were analyzed. The SL experiments encompassed the bundle geometries and range of flow conditions for the intended ASSERT-PV applications for CANDU reactors. Code predictions of channel dryout power and axial and radial CHF locations were compared against measurements from the SL CHF tests to quantify the code prediction accuracy. The prediction statistics using the recommended model set of ASSERT-PV 3.2 were compared to those from previous code versions. Furthermore, the sensitivity studies evaluated the contribution of each CHF model change or enhancement to the improvement in CHF prediction. Overall, the assessment demonstrated significant improvement in prediction of channel dryout power and axial and radial CHF locations in horizontal fuel channels containing CANDU bundles.

  10. Modelling critical NDVI curves in perennial ryegrass

    DEFF Research Database (Denmark)

    Gislum, R; Boelt, B

    2010-01-01

    The use of optical sensors to measure canopy reflectance and calculate crop indices such as the normalized difference vegetation index (NDVI) is widespread in agricultural crops, but has so far not been implemented in herbage seed production. The purpose of the present study is to develop a critical NDVI curve, where the critical NDVI, defined as the minimum NDVI required to achieve a high seed yield, is modelled over the growing season. NDVI measurements were made at different growing degree days (GDD) in a three-year field experiment in which different N application rates were applied. There was a clear maximum in the correlation coefficient between seed yield and NDVI in the period from approximately 700 to 900 GDD. At this time there was an exponential relationship between NDVI and seed yield, with the highest seed yields at NDVI ~0.9. Theoretically, farmers should therefore aim for an NDVI of approximately 0.9 in this period.
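
    A minimal sketch of the kind of analysis described, under the assumption that plot-level reflectance and final seed yield are available: compute NDVI from red and near-infrared reflectance and locate the GDD window in which NDVI correlates most strongly with yield, which is where a critical NDVI threshold is most meaningful. The data generated below are synthetic placeholders for the field measurements.

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized difference vegetation index from near-infrared and red reflectance."""
        return (nir - red) / (nir + red)

    # Synthetic stand-in for the field experiment: 20 plots, reflectance measured at
    # several growing-degree-day (GDD) stages, plus the final seed yield per plot.
    rng = np.random.default_rng(1)
    gdd_stages = np.array([400, 600, 800, 1000, 1200])
    yield_t_ha = rng.uniform(1.0, 2.2, 20)

    for g in gdd_stages:
        # in this toy construction, canopy greenness is tied to yield most strongly
        # around 800 GDD, so the NDVI-yield correlation should peak there
        greenness = 0.3 + 0.3 * (yield_t_ha - 1.0) * np.exp(-((g - 800) / 300) ** 2)
        red = np.clip(0.08 - 0.05 * greenness + rng.normal(0, 0.005, 20), 0.01, None)
        nir = np.clip(0.30 + 0.40 * greenness + rng.normal(0, 0.010, 20), 0.05, None)
        r = np.corrcoef(ndvi(nir, red), yield_t_ha)[0, 1]
        print(f"GDD {g:4d}: r(NDVI, seed yield) = {r:.2f}")
    ```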

  11. A critical review of {sup 55}Fe dosimetric models

    Energy Technology Data Exchange (ETDEWEB)

    Thind, K.S. [Ontario Hydro Nuclear, Whitby, Ontario (Canada)

    1995-01-01

    The available literature on {sup 55}Fe dosimetry has been devoted to environmental exposures and medical iron kinetic studies. For occupational dosimetry, ICRP published a non-recycling dosimetric model for iron. These ICRP publications do not provide information on iron excretion. Johnson and Dunford published dose conversion factors and urinary excretion curves based on the ICRP and MIRD iron metabolic models. A critical review of these models was undertaken to select a model for occupational dose assignment. The review indicated that the information and recommendations in ICRP and Johnson and Dunford are dependent on unrealistic assumptions that do not agree with known iron metabolism. Therefore, an alternative model is proposed for dosimetric application. Calculations of dose conversion factors and urinary excretion curve for class W{sup 55}Fe inhalation exposure (1 {mu}m AMAD) using the proposed model are compared with predictions based on ICRP and Johnson and Dunford models. The difference in the practical outcome (i.e., dose assignment) is examined by applying the proposed and reviewed models to a realistic bioassay case. The Johnson and Dunford model yields a dose estimate which is roughly a factor of ten higher than values predicted by ICRP and the proposed model. Some of the disagreement is due to uncertainty in the fraction of radio-iron excretion via urine. Further research on this subject is recommended. In the interim, the proposed model is recommended for occupational dose assignment. 31 refs., 6 figs., 9 tabs.

  12. Conformal symmetry of the critical 3D Ising model inside a sphere

    CERN Document Server

    Cosme, Catarina; Penedones, Joao

    2015-01-01

    We perform Monte-Carlo simulations of the three-dimensional Ising model at the critical temperature and zero magnetic field. We simulate the system in a ball with free boundary conditions on the two dimensional spherical boundary. Our results for one and two point functions in this geometry are consistent with the predictions from the conjectured conformal symmetry of the critical Ising model.

  13. Differential scanning calorimetry predicts the critical quality attributes of amorphous glibenclamide.

    Science.gov (United States)

    Mah, Pei T; Laaksonen, Timo; Rades, Thomas; Peltonen, Leena; Strachan, Clare J

    2015-12-01

    Selection of a crystallinity detection tool that is able to predict the critical quality attributes of amorphous formulations is imperative for the development of process control strategies. The main aim of this study was to determine the crystallinity detection tool that best predicts the critical quality attributes (i.e. physical stability and dissolution behaviour) of amorphous material. Glibenclamide (model drug) was milled for various durations using a planetary mill and characterised using Raman spectroscopy and differential scanning calorimetry (DSC). Physical stability studies upon storage at 60°C/0% RH and dissolution studies (non-sink conditions) were performed on the milled glibenclamide samples. Different milling durations were needed to render glibenclamide fully amorphous according to Raman spectroscopy (60 min) and onset of crystallisation using DSC (150 min). This could be due to the superiority of DSC (onset of crystallisation) in detecting residual crystallinity in the samples milled for between 60 and 120 min, which was not detectable with Raman spectroscopy. The physical stability upon storage and the dissolution behaviour of the milled samples improved with increased milling duration, and plateaus were reached after milling for certain periods of time (physical stability - 150 min; dissolution - 120 min). The residual crystallinity which was detectable with DSC (onset of crystallisation), but not with Raman spectroscopy, adversely affected the critical quality attributes of milled glibenclamide samples. In addition, mathematical simulations were performed on the dissolution data to determine the solubility advantages of the milled glibenclamide samples and to describe the crystallisation process that occurred during dissolution in pH 7.4 phosphate buffer. In conclusion, the onset of crystallisation obtained from DSC measurements best predicts the critical quality attributes of milled glibenclamide samples and mathematical simulations based on the

  14. Prediction of critical illness in elderly outpatients using elder risk assessment: a population-based study

    Directory of Open Access Journals (Sweden)

    Biehl M

    2016-06-01

    The area under the receiver operating characteristic curve was 0.75, which indicated good discrimination. Conclusion: A simple model based on easily obtainable administrative data predicted critical illness in the next 2 years in elderly outpatients, with up to 14% of the highest-risk population suffering from critical illness. This model can facilitate efficient enrollment of patients into clinical programs such as care transition programs and studies aimed at the prevention of critical illness. It can also serve as a reminder to initiate advance care planning for high-risk elderly patients. External validation of this tool in different populations may enhance its generalizability. Keywords: aged, prognostication, critical care, mortality, elder risk assessment

  15. Critical assessment of methods of protein structure prediction (CASP) - round x

    KAUST Repository

    Moult, John

    2013-12-17

    This article is an introduction to the special issue of the journal PROTEINS, dedicated to the tenth Critical Assessment of Structure Prediction (CASP) experiment to assess the state of the art in protein structure modeling. The article describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. The 10 CASP experiments span almost 20 years of progress in the field of protein structure modeling, and there have been enormous advances in methods and model accuracy in that period. Notable in this round is the first sustained improvement of models with refinement methods, using molecular dynamics. For the first time, we tested the ability of modeling methods to make use of sparse experimental three-dimensional contact information, such as may be obtained from new experimental techniques, with encouraging results. On the other hand, new contact prediction methods, though holding considerable promise, have yet to make an impact in CASP testing. The nature of CASP targets has been changing in recent CASPs, reflecting shifts in experimental structural biology, with more irregular structures, more multi-domain and multi-subunit structures, and less standard versions of known folds. When allowance is made for these factors, we continue to see steady progress in the overall accuracy of models, particularly resulting from improvement of non-template regions.

  16. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the returned search results are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  17. Causal explanation, intentionality, and prediction: Evaluating the Criticism of "Deductivism"

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2001-01-01

    the question of whether the existence of free will excludes the possibility of prediction of behaviour by scientific or other methods. It is argued that, at least for an example, free will does not necessarily imply that the possibility of prediction of behaviour is ruled out. This section is, however, much...

  18. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Directory of Open Access Journals (Sweden)

    Sven Van Poucke

    Full Text Available With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  19. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  20. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euro per km per year [1]. Aiming to reduce such maintenance expenditure, this paper presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time…) … recovery of the track quality after the tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is used in the proposed maintenance model for a time period of two to four years. The total cost can be reduced by up to 50...

  1. Prediction Approach of Critical Node Based on Multiple Attribute Decision Making for Opportunistic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qifan Chen

    2016-01-01

    Full Text Available Predicting the critical nodes of an Opportunistic Sensor Network (OSN) can help us not only to improve network performance but also to decrease the cost of network maintenance. However, existing ways of predicting critical nodes in static networks are not suitable for OSN. In this paper, the concepts of critical nodes, region contribution (RC), and cut-vertex in a multiregion OSN are defined. We propose an approach to predict critical nodes for OSN which is based on multiple attribute decision making (MADM). It uses RC to represent the dependence of regions on Ferry nodes. The TOPSIS algorithm is employed to find the Ferry node with the maximum comprehensive contribution, which is a critical node. The experimental results show that, in different scenarios, this approach can predict the critical nodes of OSN better.
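
    The ranking step of such an approach uses standard TOPSIS, which is short enough to sketch: normalize the decision matrix, weight it, and score each Ferry node by its relative closeness to the ideal solution. The attributes and weights below are illustrative assumptions; only region contribution (RC) is named in the abstract.

    ```python
    import numpy as np

    def topsis(matrix, weights, benefit):
        """Rank alternatives (rows) over attributes (columns) by relative closeness
        to the ideal solution.  benefit[j] is True when larger values are better."""
        m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalisation
        v = m * weights
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        nadir = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_plus = np.linalg.norm(v - ideal, axis=1)
        d_minus = np.linalg.norm(v - nadir, axis=1)
        return d_minus / (d_plus + d_minus)              # closeness coefficient

    # Ferry nodes scored on illustrative attributes: region contribution, degree,
    # residual energy (all treated as benefit attributes here).
    scores = topsis(np.array([[0.8, 12, 0.6],
                              [0.5, 20, 0.9],
                              [0.9,  8, 0.4]]),
                    weights=np.array([0.5, 0.3, 0.2]),
                    benefit=np.array([True, True, True]))
    print("predicted critical node =", int(np.argmax(scores)), "closeness =", scores.round(3))
    ```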

  2. PREDICT : model for prediction of survival in localized prostate cancer

    NARCIS (Netherlands)

    Kerkmeijer, Linda G W; Monninkhof, Evelyn M.; van Oort, Inge M.; van der Poel, Henk G.; de Meerleer, Gert; van Vulpen, Marco

    2016-01-01

    Purpose: Current models for prediction of prostate cancer-specific survival do not incorporate all present-day interventions. In the present study, a pre-treatment prediction model for patients with localized prostate cancer was developed.Methods: From 1989 to 2008, 3383 patients were treated with I

  3. Prediction of the Critical Curvature for LX-17 with the Time of Arrival Data from DNS

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Jin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fried, Laurence E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moss, William C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-10

    We extract the detonation shock front velocity, curvature and acceleration from time-of-arrival data measured at grid points from direct numerical simulations of a 50 mm rate-stick lit by a disk source, using the ignition and growth reaction model and a JWL equation of state calibrated for LX-17. We compute the quasi-steady (D, κ) relation based on the extracted properties and predict the critical curvatures of LX-17. We also propose an explicit formula that contains the failure turning point, obtained from optimization for the (D, κ) relation of LX-17.
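
    The extraction of front speed and curvature from gridded arrival times can be sketched generically: for a time-of-arrival field t(x, y), the normal speed is D = 1/|∇t| and the curvature is the divergence of the unit normal ∇t/|∇t|. The snippet below checks this on a synthetic cylindrical front; it is a minimal illustration of that construction, not the post-processing used in the paper.

    ```python
    import numpy as np

    def front_speed_and_curvature(t, dx, dy):
        """Normal speed D and curvature kappa of the level sets of an arrival-time
        field t(x, y): D = 1/|grad t|, kappa = div(grad t / |grad t|)."""
        ty, tx = np.gradient(t, dy, dx)            # rows vary with y, columns with x
        mag = np.sqrt(tx**2 + ty**2)
        D = 1.0 / mag
        nx, ny = tx / mag, ty / mag                # unit normal (propagation direction)
        _, dnx_dx = np.gradient(nx, dy, dx)
        dny_dy, _ = np.gradient(ny, dy, dx)
        return D, dnx_dx + dny_dy

    # Synthetic check: a cylindrical front expanding at a constant 8 mm/us from the origin.
    dx = dy = 0.1                                   # mm
    y, x = np.mgrid[0:200, 0:200] * dx
    r = np.sqrt(x**2 + y**2) + 1e-12
    t = r / 8.0                                     # arrival time, us
    D, kappa = front_speed_and_curvature(t, dx, dy)
    i = j = 100                                     # a point well away from the origin
    print(f"D ~ {D[i, j]:.2f} mm/us (expect 8), kappa ~ {kappa[i, j]:.4f} 1/mm "
          f"(expect {1/r[i, j]:.4f})")
    ```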

  4. A state-of-the-art report on two-phase critical flow modelling

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jae Joon; Jang, Won Pyo; Kim, Dong Soo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1993-09-01

    This report reviews and analyses two-phase critical flow models. The purposes of the report are (1) to establish a knowledge base for the full understanding and best estimation of two-phase critical flow, and (2) to analyse the model development trend and to derive the direction of further studies. A wide range of critical flow models are reviewed. Each model, in general, predicts critical flow well only within specified conditions. The critical flow models of best-estimate codes are special process models included in the hydrodynamic model. The results of calculations depend on the nodalization, discharge coefficient, and other user's options. The following topics are recommended for continuing studies: improvement of the two-fluid model, development of a multidimensional model, data base setup and model error evaluation, and generalization of discharge coefficients. 24 figs., 5 tabs., 80 refs. (Author).

  5. QSPR Models for Octane Number Prediction

    Directory of Open Access Journals (Sweden)

    Jabir H. Al-Fahemi

    2014-01-01

    Full Text Available Quantitative structure-property relationship (QSPR) modeling is performed as a means to predict the octane number of hydrocarbons by correlating the property with parameters calculated from molecular structure; such parameters are the molecular mass M, hydration energy EH, boiling point BP, octanol/water distribution coefficient logP, molar refractivity MR, critical pressure CP, critical volume CV, and critical temperature CT. Principal component analysis (PCA) and the multiple linear regression technique (MLR) were performed to examine the relationship between multiple variables of the above parameters and the octane number of hydrocarbons. The results of PCA explain the interrelationships between octane number and the different variables. Correlation coefficients were calculated using MS Excel to examine the relationship between multiple variables of the above parameters and the octane number of hydrocarbons. The data set was split into a training set of 40 hydrocarbons and a validation set of 25 hydrocarbons. The linear relationship between the selected descriptors and the octane number has a coefficient of determination (R2 = 0.932), statistical significance (F = 53.21), and standard error (s = 7.7). The obtained QSPR model was applied to the validation set of octane numbers for hydrocarbons, giving R2CV = 0.942 and s = 6.328.
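
    The MLR step of such a QSPR workflow is easy to outline: fit octane number on the molecular descriptors over a training split and report R² on the held-out validation split (40/25, as in the abstract). The descriptor matrix below is a random placeholder, so only the workflow, not the statistics, mirrors the record.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)

    # Placeholder descriptor matrix: columns stand in for M, EH, BP, logP, MR, CP, CV, CT.
    X = rng.normal(size=(65, 8))
    octane = 60 + X @ np.array([3., -2., 5., -1., 2., 1., -0.5, 4.]) + rng.normal(0, 5, 65)

    X_tr, X_va, y_tr, y_va = train_test_split(X, octane, train_size=40, random_state=0)
    mlr = LinearRegression().fit(X_tr, y_tr)
    print(f"training R^2 = {r2_score(y_tr, mlr.predict(X_tr)):.3f}, "
          f"validation R^2 = {r2_score(y_va, mlr.predict(X_va)):.3f}")
    ```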

  6. Prediction of chronic critical illness in a general intensive care unit

    Directory of Open Access Journals (Sweden)

    Sérgio H. Loss

    2013-06-01

    Full Text Available OBJECTIVE: To assess the incidence, costs, and mortality associated with chronic critical illness (CCI), and to identify clinical predictors of CCI in a general intensive care unit. METHODS: This was a prospective observational cohort study. All patients receiving supportive treatment for over 20 days were considered chronically critically ill and eligible for the study. After applying the exclusion criteria, 453 patients were analyzed. RESULTS: There was an 11% incidence of CCI. Total length of hospital stay, costs, and mortality were significantly higher among patients with CCI. Mechanical ventilation, sepsis, Glasgow score < 15, inadequate calorie intake, and higher body mass index were independent predictors for CCI in the multivariate logistic regression model. CONCLUSIONS: CCI affects a distinctive population in intensive care units with higher mortality, costs, and prolonged hospitalization. Factors identifiable at the time of admission or during the first week in the intensive care unit can be used to predict CCI.

  7. Predictive Modeling of Cardiac Ischemia

    Science.gov (United States)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  8. Risk prediction of Critical Infrastructures against extreme natural hazards: local and regional scale analysis

    Science.gov (United States)

    Rosato, Vittorio; Hounjet, Micheline; Burzel, Andreas; Di Pietro, Antonio; Tofani, Alberto; Pollino, Maurizio; Giovinazzi, Sonia

    2016-04-01

    Natural hazard events can induce severe impacts on the built environment; they can hit wide and densely populated areas, where there is a large number of (inter)dependent technological systems whose damage could cause the failure or malfunctioning of further different services, spreading the impacts over wider geographical areas. The EU project CIPRNet (Critical Infrastructures Preparedness and Resilience Research Network) is realizing an unprecedented Decision Support System (DSS) which enables operational risk prediction for Critical Infrastructures (CI) by predicting the occurrence of natural events (from long-term weather to short-term nowcast predictions), correlating intrinsic vulnerabilities of CI elements with the manifestation strengths of the different events, and analysing the resulting Damage Scenario. The Damage Scenario is then transformed into an Impact Scenario, where punctual CI element damages are transformed into micro (local area) or meso (regional) scale service outages. At the smaller scale, the DSS simulates detailed city models (where CI dependencies are explicitly accounted for) that are an important input for crisis management organizations whereas, at the regional scale, by using an approximate System-of-Systems model describing systemic interactions, the focus is on raising awareness. The DSS has allowed the development of a novel simulation framework for predicting the shake maps originating from a given seismic event, considering the shock wave propagation in inhomogeneous media and the subsequently produced damage by estimating building vulnerabilities on the basis of a phenomenological model [1, 2]. Moreover, in the presence of areas containing river basins, when abundant precipitation is expected, the DSS solves the hydrodynamic 1D/2D models of the river basins to predict the flux runoff and the corresponding flood dynamics. This calculation allows the estimation of the Damage Scenario and triggers the evaluation of the Impact Scenario.

  9. Numerical weather prediction model tuning via ensemble prediction system

    Science.gov (United States)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tune predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterizations schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infra-structure and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding-back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameters values, and leads to an improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is published.
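
    A toy sketch of the feedback loop described, not the actual EPPES update rules from the cited manuscripts: each ensemble member receives its parameters from a proposal distribution, the resulting forecasts are scored against verifying observations, and the scores are fed back as weights that move the proposal. The two-parameter "model" below is a stand-in for an NWP forecast.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def toy_forecast(theta, x0):
        """Stand-in for a model forecast; a real setup would run the NWP model once
        per ensemble member with its own parameter draw."""
        return theta[0] * np.sin(x0) + theta[1] * np.cos(x0)

    true_theta = np.array([0.8, -0.2])
    mu, cov = np.zeros(2), np.eye(2)                 # proposal distribution for parameters

    for cycle in range(200):                         # forecast/verification cycles
        x0 = rng.uniform(-3, 3)
        obs = toy_forecast(true_theta, x0) + rng.normal(0, 0.05)
        thetas = rng.multivariate_normal(mu, cov, size=50)   # one draw per ensemble member
        err = np.array([toy_forecast(th, x0) - obs for th in thetas])
        w = np.exp(-0.5 * (err / 0.2) ** 2)          # likelihood-type score per member
        w /= w.sum()
        mu = w @ thetas                              # feed the scores back into the proposal
        d = thetas - mu
        cov = (w[:, None] * d).T @ d + 0.01 * np.eye(2)

    print("proposal mean after feedback:", mu.round(2), "  true parameters:", true_theta)
    ```

    In this toy setting the proposal mean drifts toward parameter values that produce good forecasts, which is the qualitative behaviour the abstract reports for the Lorenz-95 experiments.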

  10. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  11. A critical review of lexical analysis and Big Five model

    Directory of Open Access Journals (Sweden)

    María Cristina Richaud de Minzi

    2002-06-01

    Full Text Available In recent years the idea has resurfaced that traits can be measured in a reliable and valid way and that this can be useful in the prediction of human behavior. The five-factor model appears to represent a conceptual and empirical advance in the field of personality theory. The number of orthogonal factors needed (Goldberg, 1992, p. 26) to show the relationships between the descriptors of the traits in English is five, and their nature can be summarized through the broad concepts of Surgency, Agreeableness, Responsibility, Emotional Stability versus Neuroticism, and Openness to Experience (John, 1990, p. 96). Furthermore, despite the criticisms that have been levelled at the model, it represents a breakthrough in the field of personality assessment. This approach is a contribution to the study of personality, without being an integrative model of personality.

  12. Teaching for Art Criticism: Incorporating Feldman's Critical Analysis Learning Model in Students' Studio Practice

    Science.gov (United States)

    Subramaniam, Maithreyi; Hanafi, Jaffri; Putih, Abu Talib

    2016-01-01

    This study analysed the artwork of 30 first-year graphic design students, using critical analysis based on Feldman's model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean scores and frequencies to determine students' performance in their critical ability.

  13. Prediction of heat transfer of nanofluid on critical heat flux based on fractal geometry

    Institute of Scientific and Technical Information of China (English)

    Xiao Bo-Qi

    2013-01-01

    Analytical expressions for the nucleate pool boiling heat transfer of a nanofluid in the critical heat flux (CHF) region are derived, taking into account the effect of nanoparticles moving in the liquid, based on fractal geometry theory. The proposed fractal model for the CHF of a nanofluid is explicitly related to the average diameter of the nanoparticles, the volumetric nanoparticle concentration, the thermal conductivity of the nanoparticles, the fractal dimension of the nanoparticles, the fractal dimension of active cavities on the heated surfaces, the temperature, and the properties of the fluid. It is found that the CHF of the nanofluid decreases with increasing average nanoparticle diameter. Each parameter of the proposed formulas for the CHF has a clear physical meaning. The model predictions are compared with the existing experimental data, and good agreement between the model predictions and experimental data is found. The validity of the present model is thus verified. The proposed fractal model can reveal the mechanism of heat transfer in nanofluids.

  14. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...

  15. Using a Prediction Model to Manage Cyber Security Threats

    Directory of Open Access Journals (Sweden)

    Venkatesh Jaganathan

    2015-01-01

    Full Text Available Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  16. Using a Prediction Model to Manage Cyber Security Threats.

    Science.gov (United States)

    Jaganathan, Venkatesh; Cherurveettil, Priyesh; Muthu Sivashanmugam, Premapriya

    2015-01-01

    Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  17. Research on Drag Torque Prediction Model for the Wet Clutches

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Considering the surface tension effect and the centrifugal effect, a mathematical model based on the Reynolds equation for predicting the drag torque of disengaged wet clutches is presented. The model indicates that the equivalent radius is a function of clutch speed and flow rate. The drag torque reaches its peak at a critical speed; above this speed, drag torque drops due to the shrinking of the oil film. The model also shows the effects of viscosity and flow rate on drag torque. Experimental results indicate that the model is reasonable and performs well for predicting the drag torque peak.

  18. Predictive Model Assessment for Count Data

    Science.gov (United States)

    2007-09-05

    We critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. We consider a recent suggestion by Baker and ... [Snippet residue: Figure 5, boxplots of various scores for the patent-data count regressions; Table 1, four predictive models for larynx cancer counts in Germany, 1998–2002.]

  19. Critical thinking in clinical nurse education: application of Paul's model of critical thinking.

    Science.gov (United States)

    Andrea Sullivan, E

    2012-11-01

    Nurse educators recognize that many nursing students have difficulty in making decisions in clinical practice. The ability to make effective, informed decisions in clinical practice requires that nursing students know and apply the processes of critical thinking. Critical thinking is a skill that develops over time and requires the conscious application of this process. There are a number of models in the nursing literature to assist students in the critical thinking process; however, these models tend to focus solely on decision making in hospital settings and are often complex to actualize. In this paper, Paul's Model of Critical Thinking is examined for its application to nursing education. I will demonstrate how the model can be used by clinical nurse educators to assist students to develop critical thinking skills in all health care settings in a way that makes critical thinking skills accessible to students.

  20. Modeling the prediction of business intelligence system effectiveness.

    Science.gov (United States)

    Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I

    2016-01-01

    Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises running in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, effectively managing the critical attributes that determine BISE, and developing prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions, is necessary to improve BI implementation and ensure its success. The main study findings identified the critical prediction indicators of BISE that are important to forecasting BI performance. The study highlighted five classification and prediction rules of BISE derived from decision tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis; these enable enterprises to improve BISE while effectively managing BI solution implementation, and cater to academics to whom theory is important.

  1. Critical Behavior of the Widom-Rowlinson Lattice Model

    CERN Document Server

    Dickman, R; Dickman, Ronald; Stell, George

    1995-01-01

    We report extensive Monte Carlo simulations of the Widom-Rowlinson lattice model in two and three dimensions. Our results yield precise values for the critical activities and densities, and clearly place the critical behavior in the Ising universality class.

  2. Models, measurement, and strategies in developing critical-thinking skills.

    Science.gov (United States)

    Brunt, Barbara A

    2005-01-01

    Health care professionals must use critical-thinking skills to solve increasingly complex problems. Educators need to help nurses develop their critical-thinking skills to maintain and enhance their competence. This article reviews various models of critical thinking, as well as methods used to evaluate critical thinking. Specific educational strategies to develop nurses' critical-thinking skills are discussed. Additional research studies are needed to determine how the process of nursing practice can nurture and develop critical-thinking skills, and which strategies are most effective in developing and evaluating critical thinking.

  3. Extended Aging Theories for Predictions of Safe Operational Life of Critical Airborne Structural Components

    Science.gov (United States)

    Ko, William L.; Chen, Tony

    2006-01-01

    The previously developed Ko closed-form aging theory has been reformulated into a more compact mathematical form for easier application. A new equivalent loading theory and empirical loading theories have also been developed and incorporated into the revised Ko aging theory for the prediction of a safe operational life of airborne failure-critical structural components. The new set of aging and loading theories were applied to predict the safe number of flights for the B-52B aircraft to carry a launch vehicle, the structural life of critical components consumed by load excursion to proof load value, and the ground-sitting life of B-52B pylon failure-critical structural components. A special life prediction method was developed for the preflight predictions of operational life of failure-critical structural components of the B-52H pylon system, for which no flight data are available.

  4. A Model for Teaching Critical Thinking

    Science.gov (United States)

    Emerson, Marnice K.

    2013-01-01

    In an age in which information is available almost instantly and in quantities unimagined just a few decades ago, most educators would agree that teaching adult learners to think critically about what they are reading, seeing, and hearing has never been more important. But just what is critical thinking? Do adult learners agree with educators that…

  5. Predictive maintenance of critical equipment in industrial processes

    Science.gov (United States)

    Hashemian, Hashem M.

    This dissertation is an account of present and past research and development (R&D) efforts conducted by the author to develop and implement new technology for predictive maintenance and equipment condition monitoring in industrial processes. In particular, this dissertation presents the design of an integrated condition-monitoring system that incorporates the results of three current R&D projects with a combined funding of $2.8 million awarded to the author by the U.S. Department of Energy (DOE). This system will improve the state of the art in equipment condition monitoring and has applications in numerous industries including chemical and petrochemical plants, aviation and aerospace, electric power production and distribution, and a variety of manufacturing processes. The work that is presented in this dissertation is unique in that it introduces a new class of condition-monitoring methods that depend predominantly on the normal output of existing process sensors. It also describes current R&D efforts to develop data acquisition systems and data analysis algorithms and software packages that use the output of these sensors to determine the condition and health of industrial processes and their equipment. For example, the output of a pressure sensor in an operating plant can be used not only to indicate the pressure, but also to verify the calibration and response time of the sensor itself and identify anomalies in the process such as blockages, voids, and leaks that can interfere with accurate measurement of process parameters or disturb the plant's operation, safety, or reliability. Today, process data are typically collected at a rate of one sample per second (1 Hz) or slower. If this sampling rate is increased to 100 samples per second or higher, much more information can be extracted from the normal output of a process sensor and then used for condition monitoring, equipment performance measurements, and predictive maintenance. A fast analog-to-digital (A

  6. Analysis and prediction of the critical regions of antimicrobial peptides based on conditional random fields.

    Directory of Open Access Journals (Sweden)

    Kuan Y Chang

    Full Text Available Antimicrobial peptides (AMPs) are potent drug candidates against microbes such as bacteria, fungi, parasites, and viruses. The size of AMPs ranges from less than ten to hundreds of amino acids. Often only a few amino acids, or the critical regions of antimicrobial proteins, determine the functionality. Accurately predicting the AMP critical regions could benefit experimental designs. However, no extensive analyses have been done specifically on the AMP critical regions, and computational modeling of them is either non-existent or directed at other problems. With a focus on the AMP critical regions, we thus develop a computational model, AMPcore, by introducing a state-of-the-art machine learning method, conditional random fields. We generate a comprehensive dataset of 798 AMP cores and a low-similarity dataset of 510 representative AMP cores. AMPcore reaches a maximal accuracy of 90% and a Matthews correlation coefficient (MCC) of 0.79 on the comprehensive dataset, and a maximal accuracy of 83% and an MCC of 0.66 on the low-similarity dataset. Our analyses of AMP cores follow what we know about AMPs: high in glycine and lysine, but low in aspartic acid, glutamic acid, and methionine; the abundance of α-helical structures; the dominance of positive net charges; the peculiarity of amphipathicity. Two amphipathic sequence motifs within the AMP cores, an amphipathic α-helix and an amphipathic π-helix, are revealed. In addition, a short sequence motif at the N-terminal boundary of AMP cores is reported for the first time: arginine at the P(-1) position coupling with glycine at the P(1) position of AMP cores occurs the most, which might be linked to microbial cell adhesion.

  7. Prediction of Catastrophes: an experimental model

    CERN Document Server

    Peters, Randall D; Pomeau, Yves

    2012-01-01

    Catastrophes of all kinds can be roughly defined as short duration-large amplitude events following and followed by long periods of "ripening". Major earthquakes surely belong to the class of 'catastrophic' events. Because of the space-time scales involved, an experimental approach is often difficult, not to say impossible, however desirable it could be. Described in this article is a "laboratory" setup that yields data of a type that is amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in both of two dynamical models. The first is an "abstract" model in which a time dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for creep). Hope thus exists that similar changes in the response to ...

  8. Predicting intermittent running performance: critical velocity versus endurance index.

    Science.gov (United States)

    Buchheit, M; Laursen, P B; Millet, G P; Pactat, F; Ahmaidi, S

    2008-04-01

    The aim of the present study was to examine the ability of the critical velocity (CV) and the endurance index (EI) to assess endurance performance during intermittent exercise. Thirteen subjects performed two intermittent runs: 15-s runs interspersed with 15 s of passive recovery (15/15) and 30-s runs with 30-s rest (30/30). Runs were performed until exhaustion at three intensities (100, 95 and 90% of the speed reached at the end of the 30-15 intermittent fitness test, V(IFT)) to calculate i) CV from the slope of the linear relationship between the total covered distance and exhaustion time (ET) (iCV); ii) anaerobic distance capacity from the Y-intercept of the distance/duration relationship (iADC); and iii) EI from the relationship between the fraction of V(IFT) at which the runs were performed and the log-transformed ET (iEI). Anaerobic capacity was indirectly assessed by the final velocity achieved during the Maximal Anaerobic Running Test (VMART). ET was longer for 15/15 than for 30/30 runs at similar intensities. iCV(15/15) and iCV(30/30) were not influenced by changes in ET and were highly dependent on V(IFT). Neither iADC(15/15) nor iADC(30/30) was related to VMART. In contrast, iEI(15/15) was higher than iEI(30/30), and corresponded with the longer ET. In conclusion, only iEI estimated endurance capacity during repeated intermittent running.
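    Both indices described above reduce to simple linear fits, as the following minimal Python sketch illustrates: iCV and iADC come from a distance-versus-time regression, and iEI from a regression of the fraction of V(IFT) on the log-transformed exhaustion time. The exhaustion times, distances and fractions used here are invented for illustration, not the study's measurements.

```python
import numpy as np

# Hypothetical exhaustion times (s) and total distances (m) for runs at
# 100, 95 and 90% of V(IFT); illustrative values only.
exhaustion_time = np.array([420.0, 780.0, 1440.0])
total_distance = np.array([2150.0, 3830.0, 6780.0])
fraction_v_ift = np.array([1.00, 0.95, 0.90])

# i) critical velocity = slope, ii) anaerobic distance capacity = intercept
# of the distance-versus-time linear relationship.
cv, adc = np.polyfit(exhaustion_time, total_distance, 1)

# iii) endurance index = slope of fraction of V(IFT) vs log-transformed ET.
ei, _ = np.polyfit(np.log(exhaustion_time), fraction_v_ift, 1)

print(f"iCV  = {cv:.2f} m/s")
print(f"iADC = {adc:.1f} m")
print(f"iEI  = {ei:.3f}")
```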

  9. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-steps prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
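    As a rough illustration of the approach described (delay-coordinate reconstruction of the phase space followed by prediction from dynamical neighbours), the sketch below embeds a scalar series and predicts the next value by averaging the successors of the nearest neighbours of the current state. The embedding dimension, delay, neighbour count and synthetic signal are arbitrary choices, not the settings used for the North Sea surge data.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct phase-space vectors from a scalar time series."""
    n = len(x) - (dim - 1) * tau
    return np.array([x[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

def local_predict(x, dim=3, tau=1, k=5):
    """Predict the next value from the k nearest dynamical neighbours."""
    vectors = delay_embed(x, dim, tau)
    query = vectors[-1]                      # current reconstructed state
    history = vectors[:-1]
    # one-step successors of each historical state
    successors = x[(dim - 1) * tau + 1:(dim - 1) * tau + 1 + len(history)]
    dists = np.linalg.norm(history - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return successors[nearest].mean()        # zeroth-order local model

# Synthetic surge-like signal, for demonstration only.
t = np.arange(2000)
series = np.sin(0.07 * t) + 0.3 * np.sin(0.23 * t) + 0.05 * np.random.randn(len(t))
print("one-step prediction:", local_predict(series))
```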

  10. Criticality in the scale invariant standard model (squared

    Directory of Open Access Journals (Sweden)

    Robert Foot

    2015-07-01

    Full Text Available We consider first the standard model Lagrangian with the μ_h² Higgs potential term set to zero. We point out that this classically scale-invariant theory potentially exhibits radiative electroweak/scale symmetry breaking with a very high vacuum expectation value (VEV) for the Higgs field, 〈ϕ〉 ≈ 10^17–10^18 GeV. Furthermore, if such a vacuum were realized then cancellation of vacuum energy automatically implies that this nontrivial vacuum is degenerate with the trivial unbroken vacuum. Such a theory would therefore be critical, with the Higgs self-coupling and its beta function nearly vanishing at the symmetry-breaking minimum, λ(μ=〈ϕ〉) ≈ β_λ(μ=〈ϕ〉) ≈ 0. A phenomenologically viable model that predicts this criticality property arises if we consider two copies of the standard model Lagrangian, with an exact Z2 symmetry swapping each ordinary particle with a partner. The spontaneously broken vacuum can then arise where one sector gains the high-scale VEV, while the other gains the electroweak-scale VEV. The low-scale VEV is perturbed away from zero due to a Higgs portal coupling, or via the usual small Higgs mass terms μ_h², which softly break the scale invariance. In either case, the cancellation of vacuum energy requires M_t = (171.53 ± 0.42) GeV, which is close to its measured value of (173.34 ± 0.76) GeV.

  11. Simple Model for Identifying Critical Regions in Atrial Fibrillation

    Science.gov (United States)

    Peters, Nicholas S.

    2015-01-01

    Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases, as occurs with age, beyond a threshold value. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and express the threshold value of transversal cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain microreentrant circuits. These biologically testable predictions might inform ablation therapies and arrhythmic risk assessment. PMID:25635565

  12. Nonlinear chaotic model for predicting storm surges

    NARCIS (Netherlands)

    Siek, M.; Solomatine, D.P.

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables.

  13. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain,...

  14. How to Establish Clinical Prediction Models

    Directory of Open Access Journals (Sweden)

    Yong-ho Lee

    2016-03-01

    Full Text Available A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models with comparable examples from real practice. After model development and vigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for the use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice.
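    The five steps summarized above can be mirrored in a toy workflow. The sketch below, using synthetic data and invented predictor names, fits a logistic-regression prediction model and evaluates its discrimination on a held-out set; it illustrates the general process, not the authors' specific procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Steps 1-3: prepare a (synthetic) dataset of candidate predictors and outcome.
n = 1000
X = rng.normal(size=(n, 4))                     # e.g. age, BMI, biomarker, score
logit = 0.8 * X[:, 0] - 0.5 * X[:, 2] - 0.2
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # binary outcome (event / no event)

# Step 4: split the data and generate the model.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Step 5: evaluate discrimination on data not used for fitting.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out c-statistic (AUC): {auc:.2f}")
```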

  15. Self-Organized Criticality in a Random Network Model

    OpenAIRE

    Nirei, Makoto

    1998-01-01

    A new model of self-organized criticality is defined by incorporating a random network model in order to explain endogenous complex fluctuations of economic aggregates. The model can feature many globally interactive systems such as economies or societies.

  16. Inversed estimation of critical factors for controlling over-prediction of summertime tropospheric O3 over East Asia based of the combination of DDM sensitivity analysis and modeled Green's function method

    Science.gov (United States)

    Itahashi, S.; Yumimoto, K.; Uno, I.; Kim, S.

    2012-12-01

    Air quality studies based on chemical transport models have provided many important results for promoting our knowledge of air pollution phenomena; however, discrepancies between modeling results and observation data remain an important issue to overcome. One such issue is the over-prediction of summertime tropospheric ozone in remote areas of Japan. This problem has been pointed out in model comparison studies at both the regional scale (e.g., MICS-Asia) and the global scale (e.g., TH-FTAP). Several reasons for this issue can be listed: (i) the modeled reproducibility of the penetration of clean oceanic air masses, (ii) correct estimation of the anthropogenic NOx/VOC emissions over East Asia, and (iii) the chemical reaction scheme used in the model simulation. In this study, we attempt an inverse estimation of some important chemical reactions based on a system combining DDM (decoupled direct method) sensitivity analysis and a modeled Green's function approach. The decoupled direct method is an efficient and accurate way of performing sensitivity analysis of model inputs; it calculates sensitivity coefficients representing the responsiveness of atmospheric chemical concentrations to perturbations in a model input or parameter. The inverse solutions with the Green's functions are given by a linear, least-squares method but are still robust against nonlinearities. To construct the response matrix (i.e., the Green's functions), we can directly use the results of the DDM sensitivity analysis. The chemical reaction constants which have relatively large uncertainties are determined with constraints from observed ozone concentration data over remote areas of Japan. Our inverse estimation demonstrated an underestimation of the reaction constant producing HNO3 (NO2 + OH + M → HNO3 + M) in the SAPRC99 chemical scheme, and indicated a +29.0% increment to this reaction. This estimation has good agreement when compared
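    The inversion step sketched in this abstract, using DDM sensitivities as a response matrix (Green's functions) and solving for reaction-constant adjustments by linear least squares, can be illustrated as follows. The response matrix, residuals and reaction labels are placeholders, not values from the study.

```python
import numpy as np

# Response matrix G: sensitivity of modelled O3 at each monitoring site (rows)
# to a fractional perturbation of each uncertain reaction constant (columns),
# as would be produced by a DDM sensitivity run. Values are placeholders.
G = np.array([[-4.2, 1.1],
              [-3.8, 0.9],
              [-5.0, 1.4]])                 # ppb per 100% change in constant

# Mismatch between observed and modelled ozone at the remote sites (ppb).
residual = np.array([-1.3, -1.0, -1.5])

# Least-squares estimate of the fractional adjustments to the constants.
adjustment, *_ = np.linalg.lstsq(G, residual, rcond=None)
for name, a in zip(["reaction 1 (hypothetical)", "reaction 2 (hypothetical)"], adjustment):
    print(f"{name}: {100 * a:+.1f} % change")
```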

  17. Prediction of Critical Currents for a Diluted Square Lattice Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Sajjad Ali Haider

    2017-03-01

    Full Text Available Studying critical currents, critical temperatures, and critical fields carries substantial importance in the field of superconductivity. In this work, we study critical currents in the current–voltage characteristics of a diluted-square lattice on an Nb film. Our measurements are based on a commercially available Physical Properties Measurement System, which may prove time consuming and costly for repeated measurements for a wide range of parameters. We therefore propose a technique based on artificial neural networks to facilitate extrapolation of these curves for unforeseen values of temperature and magnetic fields. We demonstrate that our proposed algorithm predicts the curves with an immaculate precision and minimal overhead, which may as well be adopted for prediction in other types of regular and diluted lattices. In addition, we present a detailed comparison between three artificial neural networks architectures with respect to their prediction efficiency, computation time, and number of iterations to converge to an optimal solution.
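    The abstract does not specify the network architectures compared, so the sketch below is only a generic illustration of the idea: a small multilayer-perceptron regressor is trained to map (temperature, field) to critical current and then queried at an unseen condition. The synthetic data, scaling and architecture are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic training data: critical current Ic (mA) at temperatures T (K)
# and applied fields B (mT); purely illustrative.
T = rng.uniform(2.0, 8.0, 200)
B = rng.uniform(0.0, 50.0, 200)
Ic = 12.0 * (1 - T / 9.0) * np.exp(-B / 30.0) + 0.1 * rng.normal(size=200)

X = np.column_stack([T, B])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
model.fit(X, Ic)

# Extrapolate the critical current to a (T, B) pair not in the training set.
print("predicted Ic at T=5.5 K, B=20 mT:", model.predict([[5.5, 20.0]])[0])
```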

  18. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest ... computational resources. The identification method is suitable for predictive control....

  19. Teaching For Art Criticism: Incorporating Feldman’s Critical Analysis Learning Model In Students’ Studio Practice

    Directory of Open Access Journals (Sweden)

    Maithreyi Subramaniam

    2016-01-01

    Full Text Available This study adopted 30 first year graphic design students’ artwork, with critical analysis using Feldman’s model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean score and frequencies to determine students’ performances in their critical ability. Pearson Correlation Coefficient was used to find out the correlation between students’ studio practice and art critical ability scores. The findings showed most students performed slightly better than average in the critical analyses and performed best in selecting analysis among the four dimensions assessed. In the context of the students’ studio practice and critical ability, findings showed there are some connections between the students’ art critical ability and studio practice.

  20. Prediction of the critical heat flux for saturated upward flow boiling water in vertical narrow rectangular channels

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Gil Sik, E-mail: choigs@kaist.ac.kr; Chang, Soon Heung; Jeong, Yong Hoon

    2016-07-15

    A study on a theoretical method to predict the critical heat flux (CHF) of saturated upward flow boiling water in vertical narrow rectangular channels has been conducted. For the assessment of this CHF prediction method, 608 experimental data were selected from previous research, in which the heated sections were uniformly heated from both wide surfaces under high-pressure conditions above 41 bar. For this purpose, representative previous liquid film dryout (LFD) models for circular channels were reviewed by using 6058 points from the KAIST CHF data bank. This shows that it is reasonable to define the initial conditions of quality and entrainment fraction at onset of annular flow (OAF) as the transition to the annular flow regime and the equilibrium value, respectively, and that the prediction error of the LFD model depends on the accuracy of the constitutive equations for droplet deposition and entrainment. In the modified Levy model, the CHF data are predicted with a standard deviation (SD) of 14.0% and a root mean square error (RMSE) of 14.1%. Meanwhile, in the present LFD model, which is based on the constitutive equations developed by Okawa et al., the entire data are calculated with an SD of 17.1% and an RMSE of 17.3%. Because of its qualitative prediction trend and universal calculation convergence, the present model was finally selected as the best LFD model to predict the CHF for narrow rectangular channels. For the assessment of the present LFD model for narrow rectangular channels, 284 effective data were selected. By using the present LFD model, these data are predicted with an RMSE of 22.9% with the dryout criterion of zero liquid-film flow, but an RMSE of 18.7% with the rivulet formation model. This shows that the prediction error of the present LFD model for narrow rectangular channels is similar to that for circular channels.

  1. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing p

  2. Childhood asthma prediction models: a systematic review.

    Science.gov (United States)

    Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup

    2015-12-01

    Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.

  3. Critical Exponents in Percolation Model of Track Region

    Directory of Open Access Journals (Sweden)

    A.B. Demchyshyn

    2012-03-01

    Full Text Available Differences between the critical exponents of this model and the continuous percolation model indicate that the dependence of the modified structure area on the dose and the angle is related to the correlation between individual tracks. This results in the following effect: the angular dependence of the surface area of the branched structure has a maximum value at a certain «critical» angle of ion incidence.

  4. A Critical Information Literacy Model: Library Leadership within the Curriculum

    Science.gov (United States)

    Swanson, Troy

    2011-01-01

    It is a time for a new model for teaching students to find, evaluate, and use information by drawing on critical pedagogy theory in the education literature. This critical information literacy model views the information world as a dynamic place where authors create knowledge for many reasons; it seeks to understand students as information users,…

  5. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered, and the state of the art in computationally tractable methods based on uncertainty tubes is presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  6. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions and thus, they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  7. Role of Personality Traits, Learning Styles and Metacognition in Predicting Critical Thinking of Undergraduate Students

    Directory of Open Access Journals (Sweden)

    Soliemanifar O

    2015-04-01

    The aim of this study was to investigate the role of personality traits, learning styles and metacognition in predicting critical thinking. Instrument & Methods: In this descriptive correlational study, 240 students (130 girls and 110 boys) of Ahvaz Shahid Chamran University were selected by a multi-stage random sampling method. The instruments for collecting data were the NEO Five-Factor Inventory, Kolb's Learning Style Inventory (LSI), the metacognitive assessment inventory (MAI) of Schraw & Dennison (1994) and the California Critical Thinking Skills Test (CCTST). The data were analyzed using Pearson correlation coefficients, stepwise regression analysis and canonical correlation analysis. Findings: Openness to experience (b=0.41), conscientiousness (b=0.28), abstract conceptualization (b=0.39), active experimentation (b=0.22), reflective observation (b=0.12), knowledge of cognition (b=0.47) and regulation of cognition (b=0.29) were effective in predicting critical thinking. Openness to experience and conscientiousness (r²=0.25), the active experimentation, abstract conceptualization and reflective observation learning styles (r²=0.21), and the knowledge and regulation of cognition metacognitions (r²=0.3) had an important role in explaining critical thinking. The linear combination of critical thinking skills (evaluation, analysis, inference) was predictable by a linear combination of dispositional-cognitive factors (openness, conscientiousness, abstract conceptualization, active experimentation, knowledge of cognition and regulation of cognition). Conclusion: Personality traits, learning styles and metacognition, as dispositional-cognitive factors, play a significant role in students' critical thinking.

  8. Massive Predictive Modeling using Oracle R Enterprise

    CERN Document Server

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  9. Consideration notes on the critical rainfall threshold to predict the triggering of pyroclastic flows

    Science.gov (United States)

    Scotto di Santolo, A.

    2009-04-01

    This paper reports the results of a theoretical analysis designed to evaluate meteoric events that can be defined as critical, i.e. capable of triggering landslides in partially saturated pyroclastic soils. The study refers to analyses of the pyroclastic covers of the Campania area, Italy, which is often affected by complex phenomena that begin as rotational or translational slides or falls and evolve into rapid landslides such as earth flows (debris or mud flows, depending on grain size distribution). The prediction of triggering factors is of extreme importance for the implementation of civil protection schemes, given the dynamic features that characterize these phenomena during their evolution. The study highlights the fact that it is impossible to define the criticality of a meteoric event by means of empirical laws that correlate the mean intensity of rainfall and the "mean" duration of the event. However, it is possible to identify the criticality of a meteoric event in partially saturated soils by means of a more complex, physically based approach. The rainfall is critical if it is capable of causing rainwater to filter into "weak" layers of the subsoil where there is an increase in the specific volume with a significant reduction of the suction and of the shear resistance of the soil (Fredlund et al., 78). This study focuses exclusively on seepage, regardless of the resistance of the soil, by analyzing, among various aspects, the phenomenon using a simplified subsoil model. For this study, it is assumed that the rainfall is critical when it is capable of saturating the soil cover down to a predefined depth Zc. For the purposes of this study, Zc has been assigned an arbitrary value of 1 m, considering that experimental evidence has shown that rapid flows, at least when triggered, prove to be superficial. The other hypotheses are: • 1D infiltration; • rigid solid skeleton;

  10. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  11. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  13. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  14. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  15. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  16. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Nonconvex model predictive control for commercial refrigeration

    Science.gov (United States)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system that consists of several cooling units sharing a common compressor and is used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in fewer than 5 or so iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, we see that the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
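    As a much-simplified companion to the method described, the sketch below poses one price-aware MPC problem as a convex program with cvxpy: linear zone-temperature dynamics, cooling-power bounds, temperature limits and a time-varying electricity price. The nonconvex efficiency term and the sequential convex optimisation loop of the paper are omitted, and all parameters are invented.

```python
import numpy as np
import cvxpy as cp

# Horizon and (invented) parameters for a single refrigerated zone.
N = 24                        # number of 15-minute periods in the horizon
a, b = 0.96, 0.03             # discrete-time dynamics: T_next = a*T + d - b*u
d = 0.25                      # ambient heat load per period (deg C)
price = 0.2 + 0.1 * np.sin(np.linspace(0, 2 * np.pi, N))   # EUR per kWh
T0, T_min, T_max, u_max = 3.0, 1.0, 5.0, 8.0

u = cp.Variable(N)            # cooling power applied in each period (kW)
T = cp.Variable(N + 1)        # zone temperature trajectory (deg C)

constraints = [T[0] == T0, u >= 0, u <= u_max]
for k in range(N):
    constraints += [T[k + 1] == a * T[k] + d - b * u[k],
                    T[k + 1] >= T_min, T[k + 1] <= T_max]

# Minimise electricity cost over the horizon (a convex surrogate of the
# paper's nonconvex objective).
cost = cp.sum(cp.multiply(price, u)) * 0.25    # kW over 15 min -> kWh
problem = cp.Problem(cp.Minimize(cost), constraints)
problem.solve()
print("optimal cost:", problem.value)
print("cooling profile:", np.round(u.value, 2))
```

    In the full method, this convex subproblem would be re-solved repeatedly with a convexified efficiency model until the iterates converge, and only the first control move would be applied at each sampling instant.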

  3. Model for Predicting End User Web Page Response Time

    CERN Document Server

    Nagarajan, Sathya Narayanan

    2012-01-01

    Perceived responsiveness of a web page is one of the most important and least understood metrics of web page design, and is critical for attracting and maintaining a large audience. Web pages can be designed to meet performance SLAs early in the product lifecycle if there is a way to predict the apparent responsiveness of a particular page layout. Response time of a web page is largely influenced by page layout and various network characteristics. Since the network characteristics vary widely from country to country, accurately modeling and predicting the perceived responsiveness of a web page from the end user's perspective has traditionally proven very difficult. We propose a model for predicting end user web page response time based on web page, network, browser download and browser rendering characteristics. We start by understanding the key parameters that affect perceived response time. We then model each of these parameters individually using experimental tests and statistical techniques. Finally, we d...

  4. A Model for Critical Games Literacy

    Science.gov (United States)

    Apperley, Tom; Beavis, Catherine

    2013-01-01

    This article outlines a model for teaching both computer games and videogames in the classroom for teachers. The model illustrates the connections between in-game actions and youth gaming culture. The article explains how the out-of-school knowledge building, creation and collaboration that occurs in gaming and gaming culture has an impact on…

  5. Is SAPS 3 better than APACHE II at predicting mortality in critically ill transplant patients?

    Directory of Open Access Journals (Sweden)

    Vanessa M. de Oliveira

    2013-01-01

    Full Text Available OBJECTIVES: This study compared the accuracy of the Simplified Acute Physiology Score 3 with that of Acute Physiology and Chronic Health Evaluation II at predicting hospital mortality in patients from a transplant intensive care unit. METHOD: A total of 501 patients were enrolled in the study (152 liver transplants, 271 kidney transplants, 54 lung transplants, 24 kidney-pancreas transplants) between May 2006 and January 2007. The Simplified Acute Physiology Score 3 was calculated using the global equation (customized for South America) and the Acute Physiology and Chronic Health Evaluation II score; the scores were calculated within 24 hours of admission. A receiver-operating characteristic curve was generated, and the area under the receiver-operating characteristic curve was calculated to identify the patients at the greatest risk of death according to Simplified Acute Physiology Score 3 and Acute Physiology and Chronic Health Evaluation II scores. The Hosmer-Lemeshow goodness-of-fit test was used for statistically significant results and indicated a difference in performance over deciles. The standardized mortality ratio was used to estimate the overall model performance. RESULTS: The ability of both scores to predict hospital mortality was poor in the liver and renal transplant groups and average in the lung transplant group (area under the receiver-operating characteristic curve = 0.696 for Simplified Acute Physiology Score 3 and 0.670 for Acute Physiology and Chronic Health Evaluation II). The calibration of both scores was poor, even after customizing the Simplified Acute Physiology Score 3 score for South America. CONCLUSIONS: The low predictive accuracy of the Simplified Acute Physiology Score 3 and Acute Physiology and Chronic Health Evaluation II scores does not warrant the use of these scores in critically ill transplant patients.
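    The discrimination and calibration measures used in this study can be computed on any scored cohort; the sketch below does so for simulated predicted mortality probabilities and outcomes (not the study's data), reporting the area under the ROC curve, the standardized mortality ratio and an observed-versus-expected table over risk deciles.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Simulated cohort: predicted hospital-mortality probability from a severity
# score (e.g. a SAPS 3-style equation) and the observed outcome. Illustrative only.
n = 500
p_pred = rng.beta(2, 8, n)                            # predicted death probability
died = rng.binomial(1, np.clip(p_pred * 1.1, 0, 1))   # observed outcome

# Discrimination: area under the receiver-operating characteristic curve.
print("AUC:", round(roc_auc_score(died, p_pred), 3))

# Overall performance: standardized mortality ratio (observed / expected deaths).
print("SMR:", round(died.sum() / p_pred.sum(), 2))

# Calibration: observed vs expected deaths per risk decile (Hosmer-Lemeshow style).
deciles = np.quantile(p_pred, np.linspace(0, 1, 11))
for lo, hi in zip(deciles[:-1], deciles[1:]):
    idx = (p_pred >= lo) & (p_pred < hi)
    if idx.any():
        print(f"decile [{lo:.2f}, {hi:.2f}): observed={died[idx].sum():3d} "
              f"expected={p_pred[idx].sum():6.1f}")
```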

  6. Critical behaviors near the (tri-)critical end point of QCD within the NJL model

    CERN Document Server

    Lu, Ya; Cui, Zhu-Fang; Zong, Hong-Shi

    2015-01-01

    We investigate the dynamical chiral symmetry breaking and its restoration at finite density and temperature within the two-flavor Nambu-Jona-Lasinio model, and mainly focus on the critical behaviors near the critical end point (CEP) and tricritical point (TCP) of QCD. The co-existence region of the Wigner and Nambu phase is determined in the phase diagram for the massive and massless current quark, respectively. We use the various susceptibilities to locate the CEP/TCP and then extract the critical exponents near them. Our calculations reveal that the various susceptibilities share the same critical behaviors for the physical current quark mass, while they show different features in the chiral limit. Furthermore, the critical exponent of the order parameter at the TCP, β = 1/4, differs from that on the O(4) line, β = 1/2, which indicates a change in the universality class.

  7. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  8. A Course in... Model Predictive Control.

    Science.gov (United States)

    Arkun, Yaman; And Others

    1988-01-01

    Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)

  9. An analytic model with critical behavior in black hole formation

    CERN Document Server

    Koike, T; Koike, Tatsuhiko; Mishima, Takashi

    1995-01-01

    A simple analytic model is presented which exhibits a critical behavior in black hole formation, namely, collapse of a thin shell coupled with outgoing null fluid. It is seen that the critical behavior is caused by the gravitational nonlinearity near the event horizon. We calculate the value of the critical exponent analytically and find that it is very dependent on the coupling constants of the system.

  10. Predicting soil acidification trends at Plynlimon using the SAFE model

    Directory of Open Access Journals (Sweden)

    B. Reynolds

    1997-01-01

    Full Text Available The SAFE model has been applied to an acid grassland site, located on base-poor stagnopodzol soils derived from Lower Palaeozoic greywackes. The model predicts that acidification of the soil has occurred in response to increased acid deposition following the industrial revolution. Limited recovery is predicted following the decline in sulphur deposition during the mid to late 1970s. Reducing excess sulphur and NOx deposition in 1998 to 40% and 70% of 1980 levels results in further recovery, but soil chemical conditions (base saturation, soil water pH and ANC) do not return to values predicted in pre-industrial times. The SAFE model predicts that critical loads (expressed in terms of the (Ca+Mg+K:Al)crit ratio) for six vegetation species found in acid grassland communities are not exceeded despite the increase in deposited acidity following the industrial revolution. The relative growth response of selected vegetation species characteristic of acid grassland swards has been predicted using a damage function linking growth to the soil solution base cation to aluminium ratio. The results show that very small growth reductions can be expected for 'acid tolerant' plants growing in acid upland soils. For more sensitive species such as Holcus lanatus, SAFE predicts that growth would have been reduced by about 20% between 1951 and 1983, when acid inputs were greatest. Recovery to c. 90% of normal growth (under laboratory conditions) is predicted as acidic inputs decline.

  11. Equivalency and unbiasedness of grey prediction models

    Institute of Scientific and Technical Information of China (English)

    Bo Zeng; Chuan Li; Guo Chen; Xianjun Long

    2015-01-01

    In order to deeply research the structure discrepancy and modeling mechanism among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x(0)(k) + az(1)(k) = b have the identical model structure and simulation precision. Moreover, unbiased simulation of a homogeneous exponential sequence can be accomplished. However, the models derived from dx(1)/dt + ax(1) = b are only close to those derived from x(0)(k) + az(1)(k) = b provided that |a| satisfies |a| < 0.1; nor can unbiased simulation of a homogeneous exponential sequence be achieved with them. The above conclusions are proved and verified through some theorems and examples.
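
As a concrete illustration of the difference-equation form x(0)(k) + az(1)(k) = b discussed above, the sketch below fits a plain GM(1,1) grey model in Python and forecasts a few steps ahead. The data series, function name and horizon are illustrative only and are not taken from the paper.

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=2):
    """Fit a GM(1,1) grey model x0(k) + a*z1(k) = b and forecast n_ahead extra steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background values z1(k)
    # Least-squares estimate of (a, b) from x0(k) = -a*z1(k) + b, k = 2..n
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Time response of the whitening equation dx1/dt + a*x1 = b
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    # Inverse AGO recovers predictions for the original series
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return a, b, x0_hat

a, b, pred = gm11_fit_predict([2.87, 3.28, 3.34, 3.56, 3.72])
print(f"a = {a:.4f}, b = {b:.4f}")
print("fitted values and forecast:", np.round(pred, 3))
```

The paper's point that the difference-equation and whitening-equation derivations agree only for small |a| can be probed with such a sketch by fitting both forms to the same homogeneous exponential sequence and comparing the simulated values.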

  12. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are more or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.
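
The comparison described here can be mimicked in a few lines: estimate a finite-time Lyapunov exponent (FTLE) from the divergence of two nearby trajectories over the prediction lead time, and compare FTLEs of initial conditions that do and do not lead to an extreme value of some observable. The sketch below uses the Lorenz-63 system and the observable x² + y² + z² purely as familiar stand-ins; the paper's own models and observables differ.

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def ftle(s, lead_time, dt=0.01, eps=1e-7):
    """Finite-time Lyapunov exponent from the growth of a tiny random perturbation."""
    a, b = s.copy(), s + eps * np.random.randn(3)
    d0 = np.linalg.norm(b - a)
    for _ in range(int(lead_time / dt)):
        a, b = rk4_step(lorenz63, a, dt), rk4_step(lorenz63, b, dt)
    return np.log(np.linalg.norm(b - a) / d0) / lead_time

np.random.seed(0)
dt, lead = 0.01, 1.0
state = np.array([1.0, 1.0, 1.0])
for _ in range(2000):                       # discard a transient to land on the attractor
    state = rk4_step(lorenz63, state, dt)

obs_vals, ftles = [], []
for _ in range(400):
    for _ in range(50):                     # decorrelate successive initial conditions
        state = rk4_step(lorenz63, state, dt)
    future = state.copy()
    for _ in range(int(lead / dt)):         # observable value reached after the lead time
        future = rk4_step(lorenz63, future, dt)
    obs_vals.append(np.sum(future ** 2))
    ftles.append(ftle(state, lead))

obs_vals, ftles = np.array(obs_vals), np.array(ftles)
extreme = obs_vals > np.quantile(obs_vals, 0.95)
print("mean FTLE before extremes    :", ftles[extreme].mean())
print("mean FTLE before non-extremes:", ftles[~extreme].mean())
```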

  13. Towards predictive food process models: A protocol for parameter estimation.

    Science.gov (United States)

    Vilas, Carlos; Arias-Méndez, Ana; Garcia, Miriam R; Alonso, Antonio A; Balsa-Canto, E

    2016-05-31

    Mathematical models, in particular physics-based models, are essential tools for food product and process design, optimization and control. The success of mathematical models relies on their predictive capabilities. However, describing physical, chemical and biological changes in food processing requires the values of some, typically unknown, parameters. Therefore, parameter estimation from experimental data is critical to achieving the desired model predictive properties. This work takes a new look at the parameter estimation (or identification) problem in food process modeling. First, we examine common pitfalls such as lack of identifiability and multimodality. Second, we present the theoretical background of a parameter identification protocol intended to deal with those challenges. Finally, we illustrate the performance of the proposed protocol with an example related to the thermal processing of packaged foods.
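
The two pitfalls named above are commonly handled with multistart local optimisation (against multimodality) and an inspection of the curvature of the objective around the optimum (against poor identifiability). The sketch below illustrates both ideas on a toy first-order thermal-degradation model; the model form, data and parameter names are invented for illustration and do not reproduce the paper's protocol.

```python
import numpy as np
from scipy.optimize import least_squares

def model(t, params):
    # Toy model: first-order decay of a quality attribute, C(t) = C0 * exp(-k * t)
    c0, k = params
    return c0 * np.exp(-k * t)

t_data = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
c_data = np.array([1.00, 0.74, 0.52, 0.40, 0.29, 0.21])    # synthetic, noisy observations

def residuals(params):
    return model(t_data, params) - c_data

# Multistart: run a bounded local least-squares fit from many random initial guesses
rng = np.random.default_rng(1)
best = None
for _ in range(50):
    x0 = rng.uniform([0.1, 0.01], [2.0, 1.0])
    fit = least_squares(residuals, x0, bounds=([0.0, 0.0], [5.0, 5.0]))
    if best is None or fit.cost < best.cost:
        best = fit

c0_hat, k_hat = best.x
print(f"C0 = {c0_hat:.3f}, k = {k_hat:.3f}, SSE = {2 * best.cost:.4f}")

# Crude identifiability check: eigenvalues of J^T J (an approximation to the
# Fisher information); near-zero values flag poorly identified parameter directions.
J = best.jac
print("eigenvalues of J^T J:", np.round(np.linalg.eigvalsh(J.T @ J), 4))
```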

  14. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second place. 407 refs., 4 figs., 2 tabs.

  15. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second place. 407 refs., 4 figs., 2 tabs.

  16. Critical exponents of O(N) models in fractional dimensions

    CERN Document Server

    Codello, A; D'Odorico, G

    2014-01-01

    We compute critical exponents of O(N) models in fractal dimensions between two and four, and for continuous values of the number of field components N, in this way completing the RG classification of universality classes for these models. In d=2 the N-dependence of the correlation length critical exponent gives us the last piece of information needed to establish a RG derivation of the Mermin-Wagner theorem. We also report critical exponents for multi-critical universality classes in the cases N>1 and N=0. Finally, in the large-N limit our critical exponents correctly approach those of the spherical model, allowing us to set N~100 as a threshold for the quantitative validity of leading-order large-N estimates.

  17. Hybrid modeling and prediction of dynamical systems

    Science.gov (United States)

    Lloyd, Alun L.; Flores, Kevin B.

    2017-01-01

    Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642

  18. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.

  19. Property predictions using microstructural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.G. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]. E-mail: wangk2@rpi.edu; Guo, Z. [Sente Software Ltd., Surrey Technology Centre, 40 Occam Road, Guildford GU2 7YG (United Kingdom); Sha, W. [Metals Research Group, School of Civil Engineering, Architecture and Planning, The Queen' s University of Belfast, Belfast BT7 1NN (United Kingdom); Glicksman, M.E. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States); Rajan, K. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)

    2005-07-15

    Precipitation hardening in an Fe-12Ni-6Mn maraging steel during overaging is quantified. First, applying our recent kinetic model of coarsening [Phys. Rev. E, 69 (2004) 061507], and incorporating the Ashby-Orowan relationship, we link quantifiable aspects of the microstructures of these steels to their mechanical properties, including especially the hardness. Specifically, hardness measurements allow calculation of the precipitate size as a function of time and temperature through the Ashby-Orowan relationship. Second, calculated precipitate sizes and thermodynamic data determined with Thermo-Calc© are used with our recent kinetic coarsening model to extract diffusion coefficients during overaging from hardness measurements. Finally, employing more accurate diffusion parameters, we determined the hardness of these alloys independently from theory, and found agreement with experimental hardness data. Diffusion coefficients determined during overaging of these steels are notably higher than those found during the aging - an observation suggesting that precipitate growth during aging and precipitate coarsening during overaging are not controlled by the same diffusion mechanism.

  20. Critical appraisal of clinical prediction rules that aim to optimize treatment selection for musculoskeletal conditions

    NARCIS (Netherlands)

    T.R. Stanton (Tasha); M.J. Hancock (Mark J.); C. Maher (Chris); B.W. Koes (Bart)

    2010-01-01

    Background. Clinical prediction rules (CPRs) for treatment selection in musculoskeletal conditions have become increasingly popular. Purpose. The purposes of this review are: (1) to critically appraise studies evaluating CPRs and (2) to consider the clinical utility and stage of development of these CPRs.

  1. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant, and traditional statistical prediction methods suffer from low precision and poor interpretability, so they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that can generate large cargo volumes and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating various factors that can affect regional logistics requirements, the study establishes a logistics requirements potential model based on spatial economic principles, and expands logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  2. ACCEPT: Introduction of the Adverse Condition and Critical Event Prediction Toolbox

    Science.gov (United States)

    Martin, Rodney A.; Santanu, Das; Janakiraman, Vijay Manikandan; Hosein, Stefan

    2015-01-01

    The prediction of anomalies or adverse events is a challenging task, and there are a variety of methods which can be used to address the problem. In this paper, we introduce a generic framework developed in MATLAB® called ACCEPT (Adverse Condition and Critical Event Prediction Toolbox). ACCEPT is an architectural framework designed to compare and contrast the performance of a variety of machine learning and early warning algorithms, and tests the capability of these algorithms to robustly predict the onset of adverse events in any time-series data generating systems or processes.

  3. Critical adsorbing properties in slits predicted by traditional polymer adsorption theories on Ising lattice

    Institute of Scientific and Technical Information of China (English)

    LIU Meitang; MU Bozhong

    2005-01-01

    The critical adsorbing properties in slits and three-dimensional (3D) phase transitions can be predicted by either Freed theory or Flory-Huggins theory. The mean-field approximation in Flory-Huggins theory may cause apparent systematic errors, from which one can observe two-dimensional (2D) phase transitions although they are not real. Monte Carlo simulation has demonstrated that Freed theory is more suitable than Flory-Huggins theory for predicting the adsorbing properties of fluids in slits. Freed theory further predicts that multilevel adsorption occurs in slits and that the spreading pressure curves exhibit binodal points.

  4. CRITICAL ANALYSIS OF EVALUATION MODEL LOMCE

    Directory of Open Access Journals (Sweden)

    José Luis Bernal Agudo

    2015-06-01

    Full Text Available The evaluation model proposed by the LOMCE is rooted in neoliberal beliefs and reflects a specific way of understanding the world. What matters is not the process but the results, with evaluation placed at the centre of the teaching-learning process. The planning is poor, since the theory that justifies the model is not developed into coherent proposals; there is an excessive concern for excellence, and diversity is left out. A comprehensive way of understanding education should be recovered.

  5. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population is described using a population (mixed effects) setup that captures the between-subject variation. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and hence is assumed deterministic, as if the future could be predicted perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in the input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs).
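
To make the ODE-versus-SDE contrast concrete, the sketch below simulates a one-compartment elimination model first as a deterministic ODE and then as an SDE with multiplicative system noise, integrated with a plain Euler-Maruyama scheme. The parameter values are assumed for illustration and are not the authors' PK/PD setup.

```python
import numpy as np

# One-compartment elimination: dx = -k*x dt (ODE)  vs  dx = -k*x dt + sigma*x dW (SDE)
k, sigma = 0.3, 0.15
x0, dt, T = 10.0, 0.01, 10.0
n = int(T / dt)
t = np.linspace(0.0, T, n + 1)

# Deterministic ODE solution: a single trajectory that "predicts the future perfectly"
x_ode = x0 * np.exp(-k * t)

# Euler-Maruyama paths of the SDE: system noise makes every realisation different
rng = np.random.default_rng(42)
n_paths = 200
x = np.full((n_paths, n + 1), x0)
for i in range(n):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    x[:, i + 1] = x[:, i] - k * x[:, i] * dt + sigma * x[:, i] * dW

print("ODE prediction at T :", round(x_ode[-1], 3))
print("SDE mean at T       :", round(x[:, -1].mean(), 3))
print("SDE 90% band at T   :", np.round(np.percentile(x[:, -1], [5, 95]), 3))
```

The spread of the SDE paths is exactly the extra uncertainty that the deterministic ODE setup cannot represent, which is the motivation given in the abstract for moving to SDE-based prediction.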

  6. Precision Plate Plan View Pattern Predictive Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yang; YANG Quan; HE An-rui; WANG Xiao-chen; ZHANG Yun

    2011-01-01

    According to the rolling features of a plate mill, a 3D elastic-plastic FEM (finite element model) based on the full restart method of ANSYS/LS-DYNA was established to study the inhomogeneous plastic deformation of multipass plate rolling. By analyzing the simulation results, the difference between the head and tail end predictive models was found and the models were modified. According to the numerical simulation results of 120 different kinds of conditions, a precision plate plan view pattern predictive model was established. Based on these models, the sizing MAS (Mizushima automatic plan view pattern control system) method was designed and used on a 2 800 mm plate mill. Comparing the rolled plates with and without the PVPP (plan view pattern predictive) model, the reduced width deviation indicates that the plate plan view pattern predictive model is precise.

  7. Critical exponents of a three dimensional O(4) spin model

    CERN Document Server

    Kanaya, K; Kaya, S

    1995-01-01

    By Monte Carlo simulation we study the critical exponents governing the transition of the three-dimensional classical O(4) Heisenberg model, which is considered to be in the same universality class as the finite-temperature QCD with massless two flavors. We use the single cluster algorithm and the histogram reweighting technique to obtain observables at the critical temperature. After estimating an accurate value of the inverse critical temperature \\Kc=0.9360(1) we make non-perturbative estimates for various critical exponents by finite-size scaling analysis. They are in excellent agreement with those obtained with the 4-\\epsilon expansion method with errors reduced to about halves of them.

  8. Wess-Zumino-Witten model off criticality

    CERN Document Server

    Cabra, D C

    1994-01-01

    We study the renormalization group flow properties of the Wess-Zumino-Witten model in the region of couplings between $g^2=0$ and $g^2=4\\pi/k$, by evaluating the two-loop Zamolodchikov's $c$-function. We also discuss the region of negative couplings.

  9. NBC Hazard Prediction Model Capability Analysis

    Science.gov (United States)

    1999-09-01

    Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers... TO SUBSTANTIAL DIFFERENCES IN PREDICTIONS. HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model. SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method: an arbitrary time-dependent concentration field is represented as a superposition of Gaussian puffs.
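
For reference, the Gaussian puff method mentioned above can be stated in a few lines: the concentration field is a superposition of Gaussian puffs, each centred on a release parcel advected by the wind. The sketch below evaluates a single puff with crude linear growth of the dispersion parameters; it is a textbook simplification with assumed values, not SCIPUFF's actual second-order closure scheme.

```python
import numpy as np

def gaussian_puff(x, y, z, t, Q=1.0, u=5.0, growth=(0.5, 0.5, 0.2), H=10.0):
    """Concentration (kg/m^3) of one instantaneous release of Q kg carried by wind speed u."""
    sx, sy, sz = (g * u * t for g in growth)      # puff spreads (m), crude linear growth
    xc = u * t                                    # puff centre travels downwind
    norm = Q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)
    return norm * np.exp(-0.5 * (((x - xc) / sx) ** 2 + (y / sy) ** 2 + ((z - H) / sz) ** 2))

# Centreline concentration at z = 2 m, 60 s after a release from 10 m height
for xs in (100.0, 300.0, 500.0):
    print(f"x = {xs:5.0f} m : C = {gaussian_puff(xs, 0.0, 2.0, 60.0):.3e} kg/m^3")
```

A full puff model sums such terms over many puffs, with growth laws tied to atmospheric stability, which is where SCIPUFF's closure scheme comes in.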

  10. Critical state soil constitutive model for methane hydrate soil

    National Research Council Canada - National Science Library

    S. Uchida; K. Soga; K. Yamamoto

    2012-01-01

      This paper presents a new constitutive model that simulates the mechanical behavior of methane hydrate-bearing soil based on the concept of critical state soil mechanics, referred to as the Methane...

  11. A critical appraisal of Markov state models

    Science.gov (United States)

    Schütte, Ch.; Sarich, M.

    2015-09-01

    Markov State Modelling as a concept for a coarse grained description of the essential kinetics of a molecular system in equilibrium has gained a lot of attention recently. The last 10 years have seen an ever increasing publication activity on how to construct Markov State Models (MSMs) for very different molecular systems ranging from peptides to proteins, from RNA to DNA, and via molecular sensors to molecular aggregation. Simultaneously the accompanying theory behind MSM building and approximation quality has been developed well beyond the concepts and ideas used in practical applications. This article reviews the main theoretical results, provides links to crucial new developments, outlines the full power of MSM building today, and discusses the essential limitations still to overcome.

  12. Fractal dimension of critical clusters in the Φ⁴₄ model

    Science.gov (United States)

    Jansen, K.; Lang, C. B.

    1991-06-01

    We study the d=4 O(4)-symmetric nonlinear sigma model at the pseudocritical points for 8^4-28^4 lattices. The Fortuin-Kasteleyn-Coniglio-Klein clusters are shown to have fractal dimension d_f ≈ 3, in accordance with the conjectured scaling relation involving the odd critical exponent δ. For the one-cluster algorithm introduced recently by Wolff, the dynamical critical exponent z comes out to be compatible with zero in this model.

  13. Predicting the unpredictable: Critical analysis and practical implications of predictive anticipatory activity

    Directory of Open Access Journals (Sweden)

    Julia Mossbridge

    2014-03-01

    Full Text Available A recent meta-analysis of experiments from seven independent laboratories (n=26) published since 1978 indicates that the human body can apparently detect randomly delivered stimuli occurring 1-10 seconds in the future (Mossbridge, Tressoldi, & Utts, 2012). The key observation in these studies is that human physiology appears to be able to distinguish between unpredictable dichotomous future stimuli, such as emotional vs. neutral images or sound vs. silence. This phenomenon has been called presentiment (as in feeling the future). In this paper we call it predictive anticipatory activity or PAA. The phenomenon is predictive because it can distinguish between upcoming stimuli; it is anticipatory because the physiological changes occur before a future event; and it is an activity because it involves changes in the cardiopulmonary, skin, and/or nervous systems. PAA is an unconscious phenomenon that seems to be a time-reversed reflection of the usual physiological response to a stimulus. It appears to resemble precognition (consciously knowing something is going to happen before it does), but PAA specifically refers to unconscious physiological reactions as opposed to conscious premonitions. Though it is possible that PAA underlies the conscious experience of precognition, experiments testing this idea have not produced clear results. The first part of this paper reviews the evidence for PAA and examines the two most difficult challenges for obtaining valid evidence for it: expectation bias and multiple analyses. The second part speculates on possible mechanisms and the theoretical implications of PAA for understanding physiology and consciousness. The third part examines potential practical applications.

  14. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

    The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in

  15. Modelling Chemical Reasoning to Predict Reactions

    CERN Document Server

    Segler, Marwin H S

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. We show that our data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-) discovering novel transformations (even including transition-metal catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph, and because each single reaction prediction is typically ac...

  16. Anxiety Interacts With Expressed Emotion Criticism in the Prediction of Psychotic Symptom Exacerbation

    Science.gov (United States)

    Docherty, Nancy M.; St-Hilaire, Annie; Aakre, Jennifer M.; Seghers, James P.; McCleery, Amanda; Divilbiss, Marielle

    2011-01-01

    Psychotic symptoms are exacerbated by social stressors in schizophrenia and schizoaffective disorder patients as a group. More specifically, critical attitudes toward patients on the part of family members and others have been associated with a higher risk of relapse in the patients. Some patients appear to be especially vulnerable in this regard. One variable that could affect the degree of sensitivity to a social stressor such as criticism is the individual’s level of anxiety. The present longitudinal study assessed 27 relatively stable outpatients with schizophrenia or schizoaffective disorder and the single “most influential other” (MIO) person for each patient. As hypothesized, (1) patients with high critical MIOs showed increases in psychotic symptoms over time, compared with patients with low critical MIOs; (2) patients high in anxiety at the baseline assessment showed increases in psychotic symptoms at follow-up, compared with patients low in anxiety, and (3) patients with high levels of anxiety at baseline and high critical MIOs showed the greatest exacerbation of psychotic symptoms over time. Objectively measured levels of criticism were more predictive than patient-rated levels of criticism. PMID:19892819

  17. Critical heat flux prediction for water boiling in vertical tubes of a steam generator

    Energy Technology Data Exchange (ETDEWEB)

    Payan-Rodriguez, L.A.; Gallegos-Munoz [Department of Mechanical Engineering, University of Guanajuato, Av. Tampico No. 912 Salamanca (Mexico); Porras-Loaiza, G.L. [Institute for Electrical Researches, Av. Reforma No. 113, Temixco (Mexico); Picon-Nunez [Institute for Scientific Research, University of Guanajuato, Lascurain de Retana No. 5, Guanajuato (Mexico)

    2005-02-01

    This paper presents a methodology for the prediction of the critical heat flux (CHF) for the boiling of water in vertical tubes operating under typical conditions found in steam generators. In the furnace, the water flows through long vertical tubes under an axially non-uniform heat flux and with relatively low mass fluxes. As a consequence, recent theories and correlations, which have been developed for conditions typically found in nuclear reactors, cannot be directly applied to the prediction of the CHF in furnace tubes. In this context, mechanistic theories of CHF prediction have proved their usefulness, since they avoid the use of correlations and experimental constants. Hence, in order to address the CHF problem in steam generators, the sublayer dryout theory, initially formulated for CHF in uniformly heated vertical tubes, is extended by combining it with the shape factor method (F-factor) to account for the effects of the axially non-uniform heat flux distribution. The critical wall temperature (CWT) of the tubes is calculated from CHF data. The reliability of the modified theory for CHF prediction is tested by comparing CWT results against measured data from a steam generator of a power plant. Good consistency and approximation are found between predicted and measured data. (authors)

  18. Reduced functional measure of cardiovascular reserve predicts admission to critical care unit following kidney transplantation.

    Directory of Open Access Journals (Sweden)

    Stephen M S Ting

    Full Text Available BACKGROUND: There is currently no effective preoperative assessment for patients undergoing kidney transplantation that is able to identify those at high perioperative risk requiring admission to a critical care unit (CCU). We sought to determine if functional measures of cardiovascular reserve, in particular the anaerobic threshold (VO₂AT), could identify these patients. METHODS: Adult patients were assessed within 4 weeks prior to kidney transplantation in a University hospital with a 37-bed CCU, between April 2010 and June 2012. Cardiopulmonary exercise testing (CPET), echocardiography and arterial applanation tonometry were performed. RESULTS: There were 70 participants (age 41.7±14.5 years, 60% male, 91.4% living donor kidney recipients, 23.4% desensitized). 14 patients (20%) required escalation of care from the ward to the CCU following transplantation. Reduced anaerobic threshold (VO₂AT) was the most significant predictor, both independently (OR = 0.43; 95% CI 0.27-0.68; p<0.001) and in the multivariate logistic regression analysis (adjusted OR = 0.26; 95% CI 0.12-0.59; p = 0.001). The area under the receiver-operating-characteristic curve was 0.93, based on a risk prediction model that incorporated VO₂AT, body mass index and desensitization status. Neither echocardiographic parameters nor measures of aortic compliance were significantly associated with CCU admission. CONCLUSIONS: To our knowledge, this is the first prospective observational study to demonstrate the usefulness of CPET as a preoperative risk stratification tool for patients undergoing kidney transplantation. The study suggests that VO₂AT has the potential to predict perioperative morbidity in kidney transplant recipients.

  19. Methods for the prediction of fatigue delamination growth in composites and adhesive bonds: A critical review

    NARCIS (Netherlands)

    Pascoe, J.A.; Alderliesten, R.C.; Benedictus, R.

    2013-01-01

    An overview is given of the development of methods for the prediction of fatigue driven delamination growth over the past 40 years. Four categories of methods are identified: stress/strain-based models, fracture mechanics based models, cohesive-zone models, and models using the extended finite element method.

  20. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  1. Nonextensive critical effects in the Nambu--Jona-Lasinio model

    CERN Document Server

    Rozynek, Jacek

    2009-01-01

    The critical phenomena in strongly interacting matter are generally investigated using mean-field models and are characterized by well defined critical exponents. However, such models provide only average properties of the corresponding order parameters and neglect altogether their possible fluctuations. Possible long-range effects are also neglected in the mean-field approach. Here we investigate the critical behavior in the nonextensive version of the Nambu-Jona-Lasinio model (NJL). It allows one to account for such effects in a phenomenological way by means of a single parameter $q$, the nonextensivity parameter. In particular, we show how the nonextensive statistics influence the region of the critical temperature and chemical potential in the NJL mean-field approach.

  2. Genetic models of homosexuality: generating testable predictions

    OpenAIRE

    Gavrilets, Sergey; Rice, William R.

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality inclu...

  3. Wind farm production prediction - The Zephyr model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)

    2002-06-01

    This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next generation prediction system called Zephyr. The Zephyr system is a merging between two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)

  4. Using extra systoles to predict fluid responsiveness in cardiothoracic critical care patients.

    Science.gov (United States)

    Vistisen, Simon Tilma

    2017-08-01

    Fluid responsiveness prediction is an unsettled matter for most critical care patients, and new methods relying only on continuous basic monitoring are desired. It was hypothesized that the post-ectopic beat, which is associated with increased preload, could be analyzed in relation to the preceding sinus beats and that the change in cardiac performance (e.g. systolic blood pressure) at the post-ectopic beat could predict fluid responsiveness. Cardiothoracic critical care patients scheduled for a 500 ml volume expansion were observed. In the 30 min prior to volume expansion, the ECG was analyzed for the occurrence of extra systoles preceded by at least 10 sinus beats. Classification variables were defined as the change in a variable (e.g. systolic blood pressure or pre-ejection period) from the median of the 10 preceding sinus beats to the extra systolic post-ectopic beat. A stroke volume increase >15% following volume expansion defined fluid responsiveness. Thirty patients were included. The change in systolic blood pressure predicted fluid responsiveness in 24 patients correctly with 83% specificity and 75% sensitivity (optimal threshold: 5% systolic blood pressure increase), receiver operating characteristic (ROC) area: 0.81 (CI [0.64;0.98]). The change in pre-ejection period predicted fluid responsiveness in 22 patients correctly with 67% specificity and 83% sensitivity (optimal threshold: 19 ms pre-ejection period decrease), ROC area: 0.81 (CI [0.66;0.96]). Pulse pressure variation had a ROC area of 0.57 (CI [0.39;0.75]). Based on standard critical care monitoring, analysis of the extra systolic post-ectopic beat predicts fluid responsiveness in cardiothoracic critical care patients with good accuracy.
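
The decision rule described above is simple enough to state in code: reference each variable to the median of the 10 sinus beats preceding the extra systole, compute the relative change at the post-ectopic beat, and classify the patient as a fluid responder if systolic blood pressure rises by more than the reported 5% threshold. The sketch below assumes beat-by-beat values have already been extracted and the ectopic beat annotated; the data and names are illustrative, not from the study.

```python
import numpy as np

SBP_RISE_THRESHOLD = 0.05   # 5% rise at the post-ectopic beat, the optimal threshold above

def post_ectopic_change(beat_values, ectopic_index):
    """Relative change of a beat-by-beat variable at the post-ectopic beat,
    referenced to the median of the 10 sinus beats preceding the extra systole."""
    baseline = np.median(beat_values[ectopic_index - 10:ectopic_index])
    post_ectopic = beat_values[ectopic_index + 1]
    return (post_ectopic - baseline) / baseline

def predict_fluid_responsive(sbp_beats, ectopic_index):
    change = post_ectopic_change(np.asarray(sbp_beats, dtype=float), ectopic_index)
    return change > SBP_RISE_THRESHOLD

# Ten sinus beats, one extra systole (index 10), then an augmented post-ectopic beat
sbp = [118, 120, 119, 121, 120, 118, 119, 120, 121, 120, 95, 128]
print("predicted fluid responsive:", predict_fluid_responsive(sbp, ectopic_index=10))
```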

  5. Critical manifold of the Potts model: exact results and homogeneity approximation.

    Science.gov (United States)

    Wu, F Y; Guo, Wenan

    2012-08-01

    The q-state Potts model has stood at the frontier of research in statistical mechanics for many years. In the absence of a closed-form solution, much of the past effort has focused on locating its critical manifold, the trajectory in the parameter (q, e^J) space, where J is the reduced interaction, along which the free energy is singular. However, except in isolated cases, antiferromagnetic (AF) models with J < 0 have been largely neglected. Here we consider the Potts model with AF interactions, focusing on obtaining its critical manifold in exact and/or closed-form expressions. We first reexamine the known critical frontiers in light of AF interactions. For the square lattice we confirm the Potts self-dual point to be the sole critical frontier for J > 0. We also locate its critical frontier for J < 0. More generally we consider the centered-triangle (CT) and Union-Jack (UJ) lattices consisting of mixed J and K interactions, and deduce critical manifolds under homogeneity hypotheses. For K = 0 the CT lattice is the diced lattice, and we determine its critical manifold for all J and find q(c) = 3.32472. For K = 0 the UJ lattice is the square lattice and from this we deduce both the J > 0 and J < 0 critical manifolds and q(c) = 3. Our theoretical predictions are compared with recent numerical results.

  6. Predicting human walking gaits with a simple planar model.

    Science.gov (United States)

    Martin, Anne E; Schmiedeler, James P

    2014-04-11

    Models of human walking with moderate complexity have the potential to accurately capture both joint kinematics and whole body energetics, thereby offering more simultaneous information than very simple models and less computational cost than very complex models. This work examines four- and six-link planar biped models with knees and rigid circular feet. The two differ in that the six-link model includes ankle joints. Stable periodic walking gaits are generated for both models using a hybrid zero dynamics-based control approach. To establish a baseline of how well the models can approximate normal human walking, gaits were optimized to match experimental human walking data, ranging in speed from very slow to very fast. The six-link model well matched the experimental step length, speed, and mean absolute power, while the four-link model did not, indicating that ankle work is a critical element in human walking models of this type. Beyond simply matching human data, the six-link model can be used in an optimization framework to predict normal human walking using a torque-squared objective function. The model well predicted experimental step length, joint motions, and mean absolute power over the full range of speeds.

  7. Predictive model for segmented poly(urea)

    Directory of Open Access Journals (Sweden)

    Frankl P.

    2012-08-01

    Full Text Available Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact, and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is by mechanical activation of the glass transition. In order to enable design of protective structures using this material, a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM), a mean field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data, and the equation of state and constitutive model predict the response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. Mechanical response in tensile tests has also been predicted and validated.

  8. Soft-Cliff Retreat, Self-Organized Critical Phenomena in the Limit of Predictability?

    Science.gov (United States)

    Paredes, Carlos; Godoy, Clara; Castedo, Ricardo

    2015-03-01

    Coastal erosion along the world's coastlines is a natural process that occurs through the action of marine and subaerial physico-chemical phenomena, waves, tides, and currents. The development of cliff-erosion predictive models is limited by the complex interactions between environmental processes and material properties over a wide range of temporal and spatial scales. As a result of this erosive action, gravity-driven mass movements occur and the coastline moves inland. Like other natural and synthetically modelled earth phenomena characterized as self-organized critical (SOC), the recession of a cliff shows seemingly random, sporadic behavior, with a wide range of yearly recession rates probabilistically distributed by a power law. Usually, SOC systems are defined by a number of scaling features in the size distribution of their parameters and in their spatial and/or temporal patterns. In particular, previous studies of parameters derived from slope-movement catalogues have detected certain SOC features in this phenomenon, features which cliff recession also shares. Due to the complexity of the phenomenon, and as for other natural processes, there is no definitive model of coastal cliff recession. In this work, various analysis techniques have been applied to identify SOC features in the distribution and pattern of a particular case: the Holderness shoreline. This coast is an excellent case study for examining coastal processes and the structures associated with them. It is one of the world's fastest-eroding coastlines (2 m/yr on average, with a maximum observed rate of 22 m/yr). The coast is mainly composed of cliffs ranging from 2 m up to 35 m in height and made up of glacial tills. It is this soft boulder clay that is being rapidly eroded and where coastline recession measurements have been recorded by the Cliff Erosion Monitoring Program (East Riding of Yorkshire Council, UK). The original database has been filtered by grouping contiguous

  9. Charge transport model to predict intrinsic reliability for dielectric materials

    Energy Technology Data Exchange (ETDEWEB)

    Ogden, Sean P. [Howard P. Isermann Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); GLOBALFOUNDRIES, 400 Stonebreak Rd. Ext., Malta, New York 12020 (United States); Borja, Juan; Plawsky, Joel L., E-mail: plawsky@rpi.edu; Gill, William N. [Howard P. Isermann Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Lu, T.-M. [Department of Physics, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Yeap, Kong Boon [GLOBALFOUNDRIES, 400 Stonebreak Rd. Ext., Malta, New York 12020 (United States)

    2015-09-28

    Several lifetime models, mostly empirical in nature, are used to predict reliability for low-k dielectrics used in integrated circuits. There is a dispute over which model provides the most accurate prediction for device lifetime at operating conditions. As a result, there is a need to transition from the use of these largely empirical models to one built entirely on theory. Therefore, a charge transport model was developed to predict the device lifetime of low-k interconnect systems. The model is based on electron transport and donor-type defect formation. Breakdown occurs when a critical defect concentration accumulates, resulting in electron tunneling and the emptying of positively charged traps. The enhanced local electric field lowers the barrier for electron injection into the dielectric, causing a positive feedforward failure. The charge transport model is able to replicate experimental I-V and I-t curves, capturing the current decay at early stress times and the rapid current increase at failure. The model is based on field-driven and current-driven failure mechanisms and uses a minimal number of parameters. All the parameters have some theoretical basis or have been measured experimentally and are not directly used to fit the slope of the time-to-failure versus applied field curve. Despite this simplicity, the model is able to accurately predict device lifetime for three different sources of experimental data. The simulation's predictions at low fields and very long lifetimes show that the use of a single empirical model can lead to inaccuracies in device reliability.

  10. A Simple Predictive Method of Critical Flicker Detection for Human Healthy Precaution

    Directory of Open Access Journals (Sweden)

    Goh Zai Peng

    2015-01-01

    Full Text Available Interharmonics and flickers are interrelated. According to the International Electrotechnical Commission (IEC) flicker standard, the critical flicker frequency for the human eye is located at 8.8 Hz. Eye strain, headaches, and in the worst case seizures may result from the critical flicker. Therefore, this paper addresses a worthwhile research gap: the investigation of the interrelationship between the amplitudes of the interharmonics and the critical flicker in a 50 Hz power system. The significant finding of this paper is that the amplitudes of two particular interharmonics are able to detect the critical flicker. In this paper, the aforementioned amplitudes are detected by an adaptive linear neuron (ADALINE). The critical flicker is then detected by substituting the aforesaid amplitudes into the formulas derived in this paper. Simulation and experimental work were conducted, and the accuracy of the proposed ADALINE-based algorithm is similar to that of a typical Fluke power analyzer. In a nutshell, this simple predictive method for critical flicker detection has strong potential to be applied in crowded places (such as offices, shopping complexes, and stadiums) for human health precaution purposes due to its simplicity.
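
ADALINE tracking of selected harmonic and interharmonic amplitudes can be sketched as a least-mean-squares (LMS) filter whose inputs are sine/cosine references at the frequencies of interest; the amplitude of each component is the norm of its weight pair. The frequencies, step size and test signal below are assumed for illustration, and the paper's flicker-detection formulas are not reproduced.

```python
import numpy as np

fs = 3200.0                        # sampling rate (Hz), assumed
f_track = [50.0, 41.2, 58.8]       # fundamental plus two interharmonics of interest (assumed)
mu = 0.02                          # LMS step size

t = np.arange(0, 1.0, 1.0 / fs)
# Synthetic 50 Hz voltage with a small pair of interharmonics (the kind that causes flicker)
signal = (np.sin(2 * np.pi * 50.0 * t)
          + 0.03 * np.sin(2 * np.pi * 41.2 * t + 0.4)
          + 0.03 * np.sin(2 * np.pi * 58.8 * t + 1.1))

w = np.zeros(2 * len(f_track))     # one (sin, cos) weight pair per tracked frequency
for n, v in enumerate(signal):
    x = np.concatenate([[np.sin(2 * np.pi * f * t[n]), np.cos(2 * np.pi * f * t[n])]
                        for f in f_track])
    e = v - w @ x                  # instantaneous estimation error
    w += 2 * mu * e * x            # ADALINE / LMS weight update

for i, f in enumerate(f_track):
    amp = np.hypot(w[2 * i], w[2 * i + 1])
    print(f"{f:5.1f} Hz component amplitude ~ {amp:.3f} pu")
```

In the scheme described above, the two interharmonic amplitudes recovered this way would then be substituted into the paper's formulas to decide whether the resulting flicker sits near the critical 8.8 Hz frequency.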

  11. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In recent decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used refer to the log-returns of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence shows that, in general, the competing models have great homogeneity in making predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
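
As a concrete member of the family compared here, a GARCH(1,1) model can be fitted by Gaussian maximum likelihood in a few lines and used for a one-step-ahead variance forecast. The returns below are simulated placeholders for the Bovespa and Dow Jones series, and the Model Confidence Set procedure itself is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_sigma2(params, r):
    """Conditional variance recursion: sigma2_t = w + a*r_{t-1}^2 + b*sigma2_{t-1}."""
    w, a, b = params
    sigma2 = np.empty(len(r))
    sigma2[0] = r.var()
    for t in range(1, len(r)):
        sigma2[t] = w + a * r[t - 1] ** 2 + b * sigma2[t - 1]
    return sigma2

def neg_loglik(params, r):
    sigma2 = garch11_sigma2(params, r)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

# Simulated daily log-returns (in %) standing in for a real index series
rng = np.random.default_rng(3)
n, w0, a0, b0 = 2000, 0.05, 0.08, 0.90
r = np.empty(n)
s2 = w0 / (1 - a0 - b0)
for t in range(n):
    r[t] = rng.normal(0.0, np.sqrt(s2))
    s2 = w0 + a0 * r[t] ** 2 + b0 * s2

res = minimize(neg_loglik, x0=np.array([0.1, 0.1, 0.8]), args=(r,),
               bounds=[(1e-6, None), (0.0, 0.999), (0.0, 0.999)], method="L-BFGS-B")
w, a, b = res.x
sigma2 = garch11_sigma2(res.x, r)
forecast_var = w + a * r[-1] ** 2 + b * sigma2[-1]     # one-step-ahead variance forecast
print(f"omega = {w:.4f}, alpha = {a:.4f}, beta = {b:.4f}")
print(f"one-step-ahead volatility forecast = {np.sqrt(forecast_var):.3f} %")
```

A comparison in the spirit of the paper would repeat this fit for each candidate specification and error distribution, collect out-of-sample loss series, and feed those into the Model Confidence Set procedure.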

  12. Predictive QSAR modeling of phosphodiesterase 4 inhibitors.

    Science.gov (United States)

    Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr

    2012-02-01

    A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving a Q² = 0.66-0.78 for regression models and total accuracies Ac=0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimation of IC₅₀ of new drug-like candidates at early stages of drug development. Copyright © 2011 Elsevier Inc. All rights reserved.
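
The leave-one-out Q² quoted above can be reproduced in spirit with a random forest on any descriptor matrix. The sketch below uses random numbers purely as placeholders for real molecular descriptors and pIC50 values, so the resulting Q² is meaningless except as a demonstration of the calculation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))                                    # placeholder descriptors
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=60)     # placeholder pIC50 values

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

# Q^2 = 1 - PRESS / TSS, the cross-validated analogue of R^2 used in QSAR work
q2 = 1.0 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"leave-one-out Q^2 = {q2:.3f}")
```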

  13. Neural Network Based Model for Predicting Housing Market Performance

    Institute of Scientific and Technical Information of China (English)

    Ahmed Khalafallah

    2008-01-01

    The United States real estate market is currently facing its worst hit in two decades due to the slowdown of housing sales. The most affected by this decline are real estate investors and home developers who are currently struggling to break even financially on their investments. For these investors, it is of utmost importance to evaluate the current status of the market and predict its performance over the short term in order to make appropriate financial decisions. This paper presents the development of artificial neural network based models to support real estate investors and home developers in this critical task. The paper describes the decision variables, design methodology, and the implementation of these models. The models utilize historical market performance data sets to train the artificial neural networks in order to predict unforeseen future performances. An application example is analyzed to demonstrate the model capabilities in analyzing and predicting the market performance. The model testing and validation showed that the error in prediction is in the range between -2% and +2%.

  14. Modified Critical State Two-Surface Plasticity Model for Sands

    DEFF Research Database (Denmark)

    Sørensen, Kris Wessel; Nielsen, Søren Kjær; Shajarati, Amir

    This article describes the outline of a numerical integration scheme for a critical state two-surface plasticity model for sands. The model is slightly modified by LeBlanc (2008) compared to the original formulation presented by Manzari and Dafalias (1997) and has the ability to correctly model...... calculations can be performed with the Forward Euler integration scheme. Furthermore, the model is formulated for a single point....

  15. Predicting maximal aerobic capacity (VO2max) from the critical velocity test in female collegiate rowers.

    Science.gov (United States)

    Kendall, Kristina L; Fukuda, David H; Smith, Abbie E; Cramer, Joel T; Stout, Jeffrey R

    2012-03-01

    The objective of this study was to examine the relationship between the critical velocity (CV) test and maximal oxygen consumption (VO2max) and develop a regression equation to predict VO2max based on the CV test in female collegiate rowers. Thirty-five female (mean ± SD; age, 19.38 ± 1.3 years; height, 170.27 ± 6.07 cm; body mass, 69.58 ± 0.31 kg) collegiate rowers performed 2 incremental VO2max tests to volitional exhaustion on a Concept II Model D rowing ergometer to determine VO2max. After a 72-hour rest period, each rower completed 4 time trials at varying distances for the determination of CV and anaerobic rowing capacity (ARC). A positive correlation was observed between CV and absolute VO2max (r = 0.775, p < 0.001) and ARC and absolute VO2max (r = 0.414, p = 0.040). Based on the significant correlation analysis, a linear regression equation was developed to predict the absolute VO2max from CV and ARC (absolute VO2max = 1.579[CV] + 0.008[ARC] - 3.838; standard error of the estimate [SEE] = 0.192 L·min(-1)). Cross validation analyses were performed using an independent sample of 10 rowers. There was no significant difference between the mean predicted VO2max (3.02 L·min(-1)) and the observed VO2max (3.10 L·min(-1)). The constant error, SEE and validity coefficient (r) were 0.076 L·min(-1), 0.144 L·min(-1), and 0.72, respectively. The total error value was 0.155 L·min(-1). The positive relationship between CV, ARC, and VO2max suggests that the CV test may be a practical alternative to measuring the maximal oxygen uptake in the absence of a metabolic cart. Additional studies are needed to validate the regression equation using a larger sample size and different populations (junior- and senior-level female rowers) and to determine the accuracy of the equation in tracking changes after a training intervention.
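
The calculation behind the CV test reduces to a linear regression of distance on time across the time trials (slope = CV, intercept = ARC), after which the equation quoted above can be applied. The sketch below uses made-up trial data; only the regression coefficients 1.579, 0.008 and -3.838 come from the abstract.

```python
import numpy as np

# Illustrative distance-time results from four rowing time trials (not from the study)
distances_m = np.array([400.0, 800.0, 1200.0, 1600.0])
times_s = np.array([85.0, 180.0, 278.0, 380.0])

# Critical velocity model: distance = ARC + CV * time
cv, arc = np.polyfit(times_s, distances_m, 1)    # slope in m/s, intercept in m

# Prediction equation reported above for female collegiate rowers (absolute VO2max, L/min)
vo2max = 1.579 * cv + 0.008 * arc - 3.838

print(f"CV  = {cv:.2f} m/s")
print(f"ARC = {arc:.1f} m")
print(f"predicted absolute VO2max = {vo2max:.2f} L/min")
```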

  16. Urinary L-FABP predicts poor outcomes in critically ill patients with early acute kidney injury.

    Science.gov (United States)

    Parr, Sharidan K; Clark, Amanda J; Bian, Aihua; Shintani, Ayumi K; Wickersham, Nancy E; Ware, Lorraine B; Ikizler, T Alp; Siew, Edward D

    2015-03-01

    Biomarker studies for early detection of acute kidney injury (AKI) have been limited by nonselective testing and uncertainties in using small changes in serum creatinine as a reference standard. Here we examine the ability of urine L-type fatty acid-binding protein (L-FABP), neutrophil gelatinase-associated lipocalin (NGAL), interleukin-18 (IL-18), and kidney injury molecule-1 (KIM-1) to predict injury progression, dialysis, or death within 7 days in critically ill adults with early AKI. Of 152 patients with known baseline creatinine examined, 36 experienced the composite outcome. Urine L-FABP demonstrated an area under the receiver-operating characteristic curve (AUC-ROC) of 0.79 (95% confidence interval 0.70-0.86), which improved to 0.82 (95% confidence interval 0.75-0.90) when added to the clinical model (AUC-ROC of 0.74). Urine NGAL, IL-18, and KIM-1 had AUC-ROCs of 0.65, 0.64, and 0.62, respectively, but did not significantly improve discrimination of the clinical model. The category-free net reclassification index improved with urine L-FABP (total net reclassification index for nonevents 31.0%) and urine NGAL (total net reclassification index for events 33.3%). However, only urine L-FABP significantly improved the integrated discrimination index. Thus, modest early changes in serum creatinine can help target biomarker measurement for determining prognosis with urine L-FABP, providing independent and additive prognostic information when combined with clinical predictors.

  17. Critical quasiparticles in single-impurity and lattice Kondo models

    Science.gov (United States)

    Vojta, M.; Bulla, R.; Wölfle, P.

    2015-07-01

    Quantum criticality in systems of local moments interacting with itinerant electrons has become an important and diverse field of research. Here we review recent results which concern (a) quantum phase transitions in single-impurity Kondo and Anderson models and (b) quantum phase transitions in heavy-fermion lattice models which involve critical quasiparticles. For (a) the focus will be on impurity models with a pseudogapped host density of states and their applications, e.g., in graphene and other Dirac materials, while (b) is devoted to strong-coupling behavior near antiferromagnetic quantum phase transitions, with potential applications in a variety of heavy-fermion metals.

  18. Critical behavior in the cubic dimer model at nonzero monomer density

    Science.gov (United States)

    Sreejith, G. J.; Powell, Stephen

    2014-01-01

    We study critical behavior in the classical cubic dimer model (CDM) in the presence of a finite density of monomers. With attractive interactions between parallel dimers, the monomer-free CDM exhibits an unconventional transition from a Coulomb phase to a dimer crystal. Monomers act as charges (or monopoles) in the Coulomb phase and, at nonzero density, lead to a standard Landau-type transition. We use large-scale Monte Carlo simulations to study the system in the neighborhood of the critical point, and find results in agreement with detailed predictions of scaling theory. Going beyond previous studies of the transition in the absence of monomers, we explicitly confirm the distinction between conventional and unconventional criticality, and quantitatively demonstrate the crossover between the two. Our results also provide additional evidence for the theoretical claim that the transition in the CDM belongs in the same universality class as the deconfined quantum critical point in the SU (2) JQ model.

  19. Critical Behaviour of the Gaussian Model on Sierpinski Carpets

    Institute of Scientific and Technical Information of China (English)

    林振权; 孔祥木

    2001-01-01

    The Gaussian model on Sierpinski carpets with two types of nearest-neighbour interactions K and Kw and two corresponding types of Gaussian distribution constants b and bw is constructed by generalizing that on a translationally invariant square lattice. The critical behaviours are studied by the renormalization-group approach and the spin rescaling method. They are found to be quite different from those on a translationally invariant square lattice. There are two critical points at (K* = b, K*w = 0) and (K* = 0, K*w = bw), and the correlation length critical exponents are calculated.

  20. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-02-01

    Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) together with micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study was that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristic curves in examining the robustness of the predictive power of these factors.
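
    A minimal sketch of a credit-scoring logistic regression in the spirit of the models described above. The variable names (tcri, asset_growth, stock_index, gdp_growth) and the synthetic sample are assumptions for illustration, not the study's data.

```python
# Hedged sketch: logistic-regression credit scoring with micro and macro inputs.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 10349                                    # sample size quoted in the abstract
df = pd.DataFrame({
    "tcri": rng.integers(1, 10, n),          # credit risk index (hypothetical scale)
    "asset_growth": rng.normal(0.05, 0.2, n),
    "stock_index": rng.normal(0.0, 1.0, n),  # macro variable (standardised)
    "gdp_growth": rng.normal(0.03, 0.02, n),
})
# Synthetic default labels that depend on both micro and macro variables.
logit = -3 + 0.3 * df["tcri"] - 2 * df["asset_growth"] - 0.5 * df["gdp_growth"]
df["default"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X, y = df.drop(columns="default"), df["default"]
model = LogisticRegression(max_iter=1000).fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
print(dict(zip(X.columns, model.coef_[0].round(3))))
```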

  1. Calibrated predictions for multivariate competing risks models.

    Science.gov (United States)

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.

  2. Critical fluctuations for quantum mean-field models

    Energy Technology Data Exchange (ETDEWEB)

    Fannes, M.; Kossakowski, A.; Verbeure, A. (Univ. Louvain (Belgium))

    1991-11-01

    A Ginzburg-Landau-type approximation is proposed for the local Gibbs states for quantum mean-field models that leads to the exact thermodynamics. Using this approach, the spin fluctuations are computed for some spin-1/2 models. At the critical temperature, the distribution function showing abnormal fluctuations is found explicitly.

  3. Modelling language evolution: Examples and predictions.

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  4. Modelling language evolution: Examples and predictions

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  5. Prediction of Local Quality of Protein Structure Models Considering Spatial Neighbors in Graphical Models

    Science.gov (United States)

    Shin, Woong-Hee; Kang, Xuejiao; Zhang, Jian; Kihara, Daisuke

    2017-01-01

    Protein tertiary structure prediction methods have matured in recent years. However, some proteins defy accurate prediction due to factors such as inadequate template structures. While existing model quality assessment methods predict global model quality relatively well, there is substantial room for improvement in local quality assessment, i.e. assessment of the error at each residue position in a model. Local quality is very important information for practical applications of structure models, such as interpreting or designing site-directed mutagenesis of proteins. We have developed a novel local quality assessment method for protein tertiary structure models. The method, named Graph-based Model Quality assessment method (GMQ), explicitly considers the predicted quality of spatially neighboring residues using a graph representation of a query protein structure model. GMQ uses a conditional random field as the core of its algorithm, and performs a binary prediction of the quality of each residue in a model, indicating whether a residue position is likely to be within an error cutoff or not. The accuracy of GMQ was improved by considering larger graphs to include quality information from more surrounding residues. Moreover, we found that using different edge weights in graphs reflecting different secondary structures further improves the accuracy. GMQ showed competitive performance on a benchmark for quality assessment of structure models from the Critical Assessment of Techniques for Protein Structure Prediction (CASP). PMID:28074879

  6. Predicting postoperative acute respiratory failure in critical care using nursing notes and physiological signals.

    Science.gov (United States)

    Huddar, Vijay; Rajan, Vaibhav; Bhattacharya, Sakyajit; Roy, Shourya

    2014-01-01

    Postoperative Acute Respiratory Failure (ARF) is a serious complication in critical care affecting patient morbidity and mortality. In this paper we investigate a novel approach to predicting ARF in critically ill patients. We study the use of two disparate sources of information – semi-structured text contained in nursing notes and investigative reports that are regularly recorded and the respiration rate, a physiological signal that is continuously monitored during a patient's ICU stay. Unlike previous works that retrospectively analyze complications, we exclude discharge summaries from our analysis envisaging a real time system that predicts ARF during the ICU stay. Our experiments, on more than 800 patient records from the MIMIC II database, demonstrate that text sources within the ICU contain strong signals for distinguishing between patients who are at risk for ARF from those who are not at risk. These results suggest that large scale systems using both structured and unstructured data recorded in critical care can be effectively used to predict complications, which in turn can lead to preemptive care with potentially improved outcomes, mortality rates and decreased length of stay and cost.

  7. Introducing a critical dialogical model for vocational teacher education

    Directory of Open Access Journals (Sweden)

    Daniel Alvunger

    2016-02-01

    The purpose of this article is to conceptualise and present what is referred to as a critical dialogical model for vocational teacher education that takes into account the interaction between theory/research and practice/experiential knowledge. The theoretical framework for the model is based on critical hermeneutics and the methodology of dialogue seminars, with the aim of promoting the development of a 'critical self' among the vocational teacher students. The model enacts an interface between theory and practice in which a number of processes are identified: a reflective-analogical process, a critical-analytical process and an interactive critical self-building process. In order to include a theoretical argument concerning the issue of content, the concept of 'learning capital' and its four sub-categories, namely curricular capital, instructional capital, moral capital and venture capital, is used. We point at content-related aspects of student learning and how a critical self has the potential to promote various kinds of 'capital' and capacity building that may be of importance in the future work-life of the vocational teacher student.

  8. Global Solar Dynamo Models: Simulations and Predictions

    Indian Academy of Sciences (India)

    Mausumi Dikpati; Peter A. Gilman

    2008-03-01

    Flux-transport type solar dynamos have achieved considerable success in correctly simulating many solar cycle features, and are now being used for prediction of solar cycle timing and amplitude. We first define flux-transport dynamos and demonstrate how they work. The essential added ingredient in this class of models is meridional circulation, which governs the dynamo period and also plays a crucial role in determining the Sun’s memory about its past magnetic fields. We show that flux-transport dynamo models can explain many key features of solar cycles. Then we show that a predictive tool can be built from this class of dynamo that can be used to predict mean solar cycle features by assimilating magnetic field data from previous cycles.

  9. Model Predictive Control of Sewer Networks

    Science.gov (United States)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.

    2017-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world’s population and the change in climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  10. Non-linear critical taper model and determination of accretionary wedge strength

    Science.gov (United States)

    Yang, Che-Ming; Dong, Jia-Jyun; Hsieh, Yuan-Lung; Liu, Hsueh-Hua; Liu, Cheng-Lung

    2016-12-01

    The critical taper model has been widely used to evaluate the strength contrast between the wedge and the basal detachment of fold-and-thrust belts and accretionary wedges. However, determination of the strength parameters using the traditional critical taper model, which adopts the Mohr-Coulomb failure criterion, is difficult, if not impossible. In this study, we propose a modified critical taper model that incorporates the non-linear Hoek-Brown failure criterion. The parameters in the proposed critical Hoek-Brown wedge (CHBW) model can be directly evaluated via field investigations and laboratory tests. Meanwhile, the wedge strength is a function of the wedge thickness, which originates from the stress non-linearity. The fold-and-thrust belt in western central Taiwan was used as an example to validate the proposed model. The determined wedge strength was 0.86 using a representative wedge thickness of 5.3 km; this was close to the value of 0.6 inferred from the critical taper. Interestingly, a concave topographic relief is predicted as a result of the wedge-thickness dependency of the wedge strength, even if the wedge is composed of homogeneous materials and the strength of the detachment is uniform. This study demonstrates that the influence of wedge strength on the critical taper angle can be quantified by the spatial distribution of strength variables and by consideration of the wedge-thickness dependency of wedge strength.

  11. Modelling Chemical Reasoning to Predict Reactions

    OpenAIRE

    Segler, Marwin H. S.; Waller, Mark P.

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outpe...

  12. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert; Knox, James

    2016-01-01

    Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  13. Raman Model Predicting Hardness of Covalent Crystals

    OpenAIRE

    Zhou, Xiang-Feng; Qian, Quang-Rui; Sun, Jian; Tian, Yongjun; Wang, Hui-Tian

    2009-01-01

    Based on the fact that both hardness and vibrational Raman spectrum depend on the intrinsic property of chemical bonds, we propose a new theoretical model for predicting hardness of a covalent crystal. The quantitative relationship between hardness and vibrational Raman frequencies deduced from the typical zincblende covalent crystals is validated to be also applicable for the complex multicomponent crystals. This model enables us to nondestructively and indirectly characterize the hardness o...

  14. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given during the 30th meeting of the Fusarium Working Group. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts

  15. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  16. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given during the 30th meeting of the Fusarium Working Group. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts produ

  17. Prediction modelling for population conviction data

    NARCIS (Netherlands)

    Tollenaar, N.

    2017-01-01

    In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk taxation scale based on these data is discussed. When false positives are weighted as severely as false negatives, 70% can be classified correctly.

  18. A Predictive Model for MSSW Student Success

    Science.gov (United States)

    Napier, Angela Michele

    2011-01-01

    This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…

  19. Predictability of extreme values in geophysical models

    NARCIS (Netherlands)

    Sterk, A.E.; Holland, M.P.; Rabassa, P.; Broer, H.W.; Vitolo, R.

    2012-01-01

    Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical model

  20. A revised prediction model for natural conception

    NARCIS (Netherlands)

    Bensdorp, A.J.; Steeg, J.W. van der; Steures, P.; Habbema, J.D.; Hompes, P.G.; Bossuyt, P.M.; Veen, F. van der; Mol, B.W.; Eijkemans, M.J.; Kremer, J.A.M.; et al.,

    2017-01-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis

  1. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
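
    As a hedged illustration of the price-coordination idea described above, the toy sketch below applies dual decomposition to two subsystems coupled by a single shared resource constraint u1 + u2 <= c. The quadratic local costs, the capacity c, and the step-size schedule are illustrative assumptions, not the chapter's example: a central coordinator updates a price (dual variable) by a subgradient step, and each subsystem solves only its own small problem.

```python
# Dual decomposition with a subgradient price update (toy example).
import numpy as np

c = 1.0                          # shared capacity of the coupling constraint
targets = np.array([0.8, 0.6])   # each subsystem would like u_i = target_i

def local_solve(target, price):
    # minimize (u - target)^2 + price * u  over u >= 0  (closed-form solution)
    return max(target - price / 2.0, 0.0)

price = 0.0
for k in range(200):
    u = np.array([local_solve(t, price) for t in targets])
    # Subgradient of the dual function w.r.t. the price is (u1 + u2 - c).
    price = max(price + (1.0 / (k + 1)) * (u.sum() - c), 0.0)

print("final inputs:", u, "sum:", u.sum(), "price:", price)
```

    With these numbers the constraint is active, and the price converges to the value at which the subsystems' combined demand exactly meets the capacity.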

  2. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given during the 30th meeting of the Fusarium Working Group. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts produ

  3. Leptogenesis in minimal predictive seesaw models

    CERN Document Server

    Björkeroth, Fredrik; Varzielas, Ivo de Medeiros; King, Stephen F

    2015-01-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\

  4. Formability prediction for AHSS materials using damage models

    Science.gov (United States)

    Amaral, R.; Santos, Abel D.; José, César de Sá; Miranda, Sara

    2017-05-01

    Advanced high strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, the use of these materials, characterized by a high strength-to-weight ratio, stiffness and high work hardening at early stages of plastic deformation, has imposed many challenges on the sheet metal industry, mainly their low formability and different behaviour compared to traditional steels, which may represent a challenging task, both to obtain a successful component and also when using numerical simulation to predict material behaviour and its fracture limits. Although numerical prediction of critical strains in sheet metal forming processes is still very often based on the classic forming limit diagrams, alternative approaches can use damage models, which are based on stress states to predict failure during the forming process and can be classified as empirical, physics-based and phenomenological models. In the present paper a comparative analysis of different ductile damage models is carried out, in order to numerically evaluate two isotropic coupled damage models, proposed by Johnson-Cook and Gurson-Tvergaard-Needleman (GTN), corresponding to the first two groups of the previous classification. Finite element analysis is used with these damage mechanics approaches and the obtained results are compared with experimental Nakajima tests, making it possible to evaluate and validate their ability to predict damage and formability limits.
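
    As background for the second of the two models compared above, the Gurson-Tvergaard-Needleman approach replaces the von Mises yield condition with a porosity-dependent yield surface. A commonly quoted form of the GTN yield function, in standard notation (the calibration actually used in the paper is not reproduced here), is

    \[
    \Phi \;=\; \left(\frac{\sigma_{\mathrm{eq}}}{\sigma_{y}}\right)^{2}
    \;+\; 2\,q_{1}\,f^{*}\cosh\!\left(\frac{3\,q_{2}\,\sigma_{m}}{2\,\sigma_{y}}\right)
    \;-\; \left(1 + q_{3}\,{f^{*}}^{2}\right) \;=\; 0,
    \]

    where \(\sigma_{\mathrm{eq}}\) is the von Mises equivalent stress, \(\sigma_{m}\) the mean (hydrostatic) stress, \(\sigma_{y}\) the matrix yield stress, \(f^{*}\) the effective void volume fraction, and \(q_{1}, q_{2}, q_{3}\) calibration parameters; as damage (porosity) grows, the admissible stress states shrink.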

  5. Spatial modeling of coupled hydrologic-biogeochemical processes for the Southern Sierra Critical Zone Observatory

    Science.gov (United States)

    Tague, C.

    2007-12-01

    One of the primary roles of modeling in critical zone research studies is to provide a framework for integrating field measurements and theory and for generalizing results across space and time. In the Southern Sierra Critical Zone Observatory (SCZO), significant spatial heterogeneity associated with mountainous terrain, combined with high inter-annual and seasonal variation in climate, necessitates the use of spatial-temporal models for generating landscape-scale understanding and predictions. Science questions related to coupled hydrologic and biogeochemical fluxes within the critical zone require a framework that can account for multiple and interacting processes. One of the core tools for the SCZO will be RHESSys (Regional Hydro-Ecologic Simulation System), an existing GIS-based model of hydrology and biogeochemical cycling. For the SCZO, we use RHESSys as an open-source, object-oriented model that can be extended to incorporate findings from field-based monitoring and analysis. We use the model as a framework for data assimilation, spatial-temporal interpolation, prediction, and scenario and hypothesis generation. Here we demonstrate the use of RHESSys as a hypothesis generation tool. We show how initial RHESSys predictions can be used to estimate when and where connectivity within the critical zone will lead to significant spatial or temporal gradients in vegetation carbon and moisture fluxes. We use the model to explore the potential implications of heterogeneity in critical zone controls on hydrologic processes at two scales: micro and macro. At the micro scale, we examine the role of preferential flowpaths. At the macro scale we consider the importance of upland-riparian zone connectivity. We show how the model can be used to design efficient field experiments by providing, a priori, quantitative estimates of uncertainty and by highlighting when and where measurements might most effectively reduce that uncertainty.

  6. Predictive Control, Competitive Model Business Planning, and Innovation ERP

    DEFF Research Database (Denmark)

    Nourani, Cyrus F.; Lauth, Codrina

    2015-01-01

    New optimality principles are put forth based on competitive model business planning. A Generalized MinMax local-optimum dynamic programming algorithm is presented and applied to business model computing, where predictive techniques can determine local optima. Based on a systems model, an enterprise is not viewed as the sum of its component elements but as the product of their interactions. The paper starts by introducing a systems approach to business modeling. A competitive business modeling technique, based on the author's planning techniques, is applied. Systemic decisions are based on common organizational goals, and as such business planning and resource assignments should strive to satisfy higher organizational goals. It is critical to understand how different decisions affect and influence one another. Here, a business planning example is presented where systems thinking technique, using Causal...

  7. Specialized Language Models using Dialogue Predictions

    CERN Document Server

    Popovici, C; Popovici, Cosmin; Baggia, Paolo

    1996-01-01

    This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods to increase performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll...

  8. Predictive models of procedural human supervisory control behavior

    Science.gov (United States)

    Boussemart, Yves

    Human supervisory control systems are characterized by the computer-mediated nature of the interactions between one or more operators and a given task. Nuclear power plants, air traffic management and unmanned vehicle operations are examples of such systems. In this context, the role of the operators is typically highly proceduralized due to the time- and mission-critical nature of the tasks. Therefore, the ability to continuously monitor operator behavior so as to detect and predict anomalous situations is a critical safeguard for proper system operation. In particular, such models can help support the decision-making process of a supervisor of a team of operators by providing alerts when likely anomalous behaviors are detected. By exploiting the operator behavioral patterns, which are typically reinforced through standard operating procedures, this thesis proposes a methodology that uses statistical learning techniques in order to detect and predict anomalous operator conditions. More specifically, the proposed methodology relies on hidden Markov models (HMMs) and hidden semi-Markov models (HSMMs) to generate predictive models of unmanned vehicle systems operators. Through the exploration of the resulting HMMs in two distinct single-operator scenarios, the methodology presented in this thesis is validated and shown to provide models capable of reliably predicting operator behavior. In addition, the use of HSMMs on the same data scenarios provides the temporal component of the predictions missing from the HMMs. The final step of this work is to examine how the proposed methodology scales to more complex scenarios involving teams of operators. Adopting a holistic team modeling approach, both HMMs and HSMMs are learned based on two team-based data sets. The results show that the HSMMs can provide valuable timing information in the single operator case, whereas HMMs tend to be more robust to increased team complexity. In addition, this thesis discusses the
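
    A hedged sketch of the kind of discrete hidden Markov model this approach builds on: once an HMM has been trained on normal operator behavior, the forward algorithm scores how likely a new sequence of operator actions is, so an unusually low likelihood can flag anomalous behavior. The two hidden states, three action symbols, and all probabilities below are invented for illustration, not the thesis's trained models.

```python
# Scoring action sequences with the forward algorithm of a small discrete HMM.
import numpy as np

pi = np.array([0.7, 0.3])                  # initial state distribution
A = np.array([[0.9, 0.1],                  # state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.6, 0.3, 0.1],             # P(observed action | hidden state)
              [0.1, 0.3, 0.6]])

def log_likelihood(obs):
    """Scaled forward algorithm returning log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        logp += np.log(alpha.sum())
        alpha /= alpha.sum()
    return logp

normal = [0, 0, 1, 0, 0, 1, 0]
odd = [2, 0, 2, 0, 2, 0, 2]
print(log_likelihood(normal), log_likelihood(odd))  # the odd sequence scores much lower
```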

  9. Critical slowing down exponents in structural glasses: Random orthogonal and related models

    Science.gov (United States)

    Caltagirone, F.; Ferrari, U.; Leuzzi, L.; Parisi, G.; Rizzo, T.

    2012-08-01

    An important prediction of mode-coupling theory is the relationship between the power-law decay exponents in the β regime and the consequent definition of the so-called exponent parameter λ. In the context of a certain class of mean-field glass models with quenched disorder, the physical meaning of λ has recently been understood, yielding a method to compute it exactly in a static framework. In this paper we exploit this new technique to compute the critical slowing down exponents for such models including, as special cases, the Sherrington-Kirkpatrick model, the p-spin model, and the random orthogonal model.

  10. Procalcitonin Clearance for Early Prediction of Survival in Critically Ill Patients with Severe Sepsis

    Directory of Open Access Journals (Sweden)

    Mohd Basri Mat Nor

    2014-01-01

    Introduction. Serum procalcitonin (PCT) diagnoses sepsis in critically ill patients; however, its ability to predict survival is not well established. We evaluated the prognostic value of dynamic changes of PCT in sepsis patients. Methods. A prospective observational study was conducted in an adult ICU. Patients with systemic inflammatory response syndrome (SIRS) were recruited. Daily PCT was measured for 3 days. 48 h PCT clearance (PCTc-48) was defined as the percentage of baseline PCT minus 48 h PCT over baseline PCT. Results. 95 SIRS patients were enrolled (67 sepsis and 28 noninfectious SIRS). 40% of patients in the sepsis group died in hospital. Day 1 PCT was associated with the diagnosis of sepsis (AUC 0.65; 95% CI, 0.55 to 0.76) but was not predictive of mortality. In sepsis patients, PCTc-48 was associated with prediction of survival (AUC 0.69; 95% CI, 0.53 to 0.84). Patients with PCTc-48 > 30% were independently associated with survival (HR 2.90; 95% CI, 1.22 to 6.90). Conclusions. PCTc-48 is associated with prediction of survival in critically ill patients with sepsis. This could assist clinicians in risk stratification; however, the small sample size and single-centre design may limit the generalisability of the finding. This would benefit from replication in a future multicentre study.
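
    Written out, the clearance defined above is

    \[
    \mathrm{PCTc\text{-}48} \;=\; \frac{\mathrm{PCT}_{\mathrm{baseline}} - \mathrm{PCT}_{48\,\mathrm{h}}}{\mathrm{PCT}_{\mathrm{baseline}}} \times 100\%,
    \]

    so, as a purely illustrative example, a patient whose PCT falls from 10 ng/mL at baseline to 6 ng/mL at 48 h has a clearance of 40%, above the 30% threshold that the study associates with survival.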

  11. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decayed Missing Filled Tooth (DMFT) increment = difference between Decayed Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] provided for examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p = 0.000). Cariogram is the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases – the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p = 0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.

  12. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011. We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4, spatial (26, ecological niche (28, diagnostic or clinical (6, spread or response (9, and reviews (3. The model parameters (e.g., etiology, climatic, spatial, cultural and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological were recorded and reviewed. A component of this review is the identification of verification and validation (V&V methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  13. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...

  14. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    Science.gov (United States)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-05-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  15. Nonequilibrium critical dynamics of two dimensional interacting monomer-dimer model: non-Ising criticality

    Science.gov (United States)

    Nam, Keekwon; Kim, Bongsoo; Jong Lee, Sung

    2014-08-01

    We investigate the nonequilibrium relaxation dynamics of an interacting monomer-dimer model with nearest neighbor repulsion on a square lattice, which possesses two symmetric absorbing states. The model is known to exhibit two nearby continuous transitions: the Z2 symmetry-breaking order-disorder transition and the absorbing transition with directed percolation criticality. We performed a more detailed analysis of our extensive simulations on bigger lattice systems which reaffirms that the symmetry-breaking transition exhibits a non-Ising critical behavior with β ≃ 0.149(2) and η ≃ 0.30(1) that are distinct from those values of a pure two dimensional Ising model. Finite size scaling of dimer density near the symmetry breaking transition gives logarithmic scaling (α = 0.0) which is consistent with the hyperscaling relation but the corresponding exponent of νB ≃ 1.37(2) exhibits a conspicuous deviation from the pure Ising value of 1. The value of dynamic critical exponent z, however, is found to be close to that of the kinetic Ising model as 1/z ≃ 0.466(5) from the relaxation of staggered magnetization (and also similar but slightly smaller values from coarsening).

  16. Meteorological Drought Prediction Using a Multi-Model Ensemble Approach

    Science.gov (United States)

    Chen, L.; Mo, K. C.; Zhang, Q.; Huang, J.

    2013-12-01

    In the United States, drought is among the costliest natural hazards, with an annual average of 6 billion dollars in damage. Drought prediction from monthly to seasonal time scales is of critical importance to disaster mitigation, agricultural planning, and multi-purpose reservoir management. Starting in December 2012, the NOAA Climate Prediction Center (CPC) has been providing operational Standardized Precipitation Index (SPI) Outlooks using the National Multi-Model Ensemble (NMME) forecasts, to support CPC's monthly drought outlooks and briefing activities. The current NMME system consists of six model forecasts from U.S. and Canadian modeling centers, including the CFSv2, CM2.1, GEOS-5, CCSM3.0, CanCM3, and CanCM4 models. In this study, we conduct an assessment of meteorological drought predictability using the retrospective NMME forecasts for the period from 1982 to 2010. Before predicting SPI, monthly-mean precipitation (P) forecasts from each model were bias corrected and spatially downscaled (BCSD) to regional grids of 0.5-degree resolution over the contiguous United States based on the probability distribution functions derived from the hindcasts. The corrected P forecasts were then appended to the CPC Unified Precipitation Analysis to form a P time series for computing 3-month and 6-month SPIs. The ensemble SPI forecasts are the equally weighted mean of the six model forecasts. Two performance measures, the anomaly correlation and root-mean-square error against the observations, are used to evaluate forecast skill. For P forecasts, errors vary among models and skill generally is low after the second month. All model P forecasts have higher skill in winter and lower skill in summer. In wintertime, BCSD improves both P and SPI forecast skill. Most improvements are over the western mountainous regions and along the Great Lakes. Overall, SPI predictive skill is regionally and seasonally dependent. The six-month SPI forecasts are skillful out to four months. For
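
    As a hedged illustration of how an SPI series is obtained from a precipitation series (the accumulation, distribution fit, and normal-quantile transform mentioned above), the sketch below computes a 3-month SPI on synthetic data. A real implementation fits the gamma distribution separately for each calendar month and uses the observed plus bias-corrected forecast record; the pooled fit and synthetic series here are simplifications for illustration.

```python
# Minimal SPI-3 computation: accumulate, fit a gamma distribution, transform
# the CDF values to standard-normal quantiles.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
precip = rng.gamma(shape=2.0, scale=40.0, size=360)        # 30 years of monthly P (mm)

window = 3
accum = np.convolve(precip, np.ones(window), mode="valid")  # 3-month totals

# Fit a two-parameter gamma distribution to the accumulated totals (pooled here).
shape, loc, scale = stats.gamma.fit(accum, floc=0.0)
cdf = stats.gamma.cdf(accum, shape, loc=loc, scale=scale)

spi3 = stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))         # SPI = standard-normal quantile
print("mean, std of SPI-3:", spi3.mean().round(2), spi3.std().round(2))
```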

  17. Critical-like behavior in a lattice gas model

    CERN Document Server

    Wieloch, A; Lukasik, J; Pawlowski, P; Pietrzak, T; Trautmann, W

    2010-01-01

    ALADIN multifragmentation data show features characteristic of a critical behavior, which are very well reproduced by a bond percolation model. This suggests, in the context of the lattice gas model, that fragments are formed at nearly normal nuclear densities and temperatures corresponding to the Kertesz line. Calculations performed with a lattice gas model have shown that similarly good reproduction of the data can also be achieved at lower densities, particularly in the liquid-gas coexistence region.

  18. `Dhara': An Open Framework for Critical Zone Modeling

    Science.gov (United States)

    Le, P. V.; Kumar, P.

    2016-12-01

    Processes in the Critical Zone, which sustain terrestrial life, are tightly coupled across hydrological, physical, biological, chemical, pedological, geomorphological and ecological domains over both short and long timescales. Observations and quantification of the Earth's surface across these domains using emerging high resolution measurement technologies such as light detection and ranging (lidar) and hyperspectral remote sensing are enabling us to characterize fine scale landscape attributes over large spatial areas. This presents a unique opportunity to develop novel approaches to model the Critical Zone that can capture fine scale intricate dependencies across the different processes in 3D. The development of interdisciplinary tools that transcend individual disciplines and capture new levels of complexity and emergent properties is at the core of Critical Zone science. Here we introduce an open framework for high-performance computing model (`Dhara') for modeling complex processes in the Critical Zone. The framework is designed to be modular in structure with the aim to create uniform and efficient tools to facilitate and leverage process modeling. It also provides flexibility to maintain, collaborate, and co-develop additional components by the scientific community. We show the essential framework that simulates ecohydrologic dynamics, and surface - sub-surface coupling in 3D using hybrid parallel CPU-GPU. We demonstrate that the open framework in Dhara is feasible for detailed, multi-processes, and large-scale modeling of the Critical Zone, which opens up exciting possibilities. We will also present outcomes from a Modeling Summer Institute led by Intensively Managed Critical Zone Observatory (IMLCZO) with representation from several CZOs and international representatives.

  19. OBESITY AND CRITICAL ILLNESS: INSIGHTS FROM ANIMAL MODELS.

    Science.gov (United States)

    Mittwede, Peter N; Clemmer, John S; Bergin, Patrick F; Xiang, Lusha

    2016-04-01

    Critical illness is a major cause of morbidity and mortality around the world. While obesity is often detrimental in the context of trauma, it is paradoxically associated with improved outcomes in some septic patients. The reasons for these disparate outcomes are not well understood. A number of animal models have been used to study the obese response to various forms of critical illness. Just as there have been many animal models that have attempted to mimic clinical conditions, there are many clinical scenarios that can occur in the highly heterogeneous critically ill patient population that occupies hospitals and intensive care units. This poses a formidable challenge for clinicians and researchers attempting to understand the mechanisms of disease and develop appropriate therapies and treatment algorithms for specific subsets of patients, including the obese. The development of new, and the modification of existing animal models, is important in order to bring effective treatments to a wide range of patients. Not only do experimental variables need to be matched as closely as possible to clinical scenarios, but animal models with pre-existing comorbid conditions need to be studied. This review briefly summarizes animal models of hemorrhage, blunt trauma, traumatic brain injury, and sepsis. It also discusses what has been learned through the use of obese models to study the pathophysiology of critical illness in light of what has been demonstrated in the clinical literature.

  20. Health care policy development: a critical analysis model.

    Science.gov (United States)

    Logan, Jean E; Pauling, Carolyn D; Franzen, Debra B

    2011-01-01

    This article describes a phased approach for teaching baccalaureate nursing students critical analysis of health care policy, including refinement of existing policy or the foundation to create new policy. Central to this approach is the application of an innovative framework, the Grand View Critical Analysis Model, which was designed to provide a conceptual base for the authentic learning experience. Students come to know the interconnectedness and the importance of the model, which includes issue selection and four phases: policy focus, colleagueship analysis, evidence-based practice analysis, and policy analysis and development.

  1. ENSO Prediction using Vector Autoregressive Models

    Science.gov (United States)

    Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.

    2013-12-01

    A recent comparison (Barnston et al, 2012 BAMS) shows the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalizes Linear Inverse Models (LIM), which are a VAR(1) method, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows the model advancing one month at a time to perform at least as well for a 6 month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993 J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150 year cross validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
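
    A hedged sketch of fitting a VAR(L) model by ordinary least squares and stepping it forward one month at a time, in the spirit described above. The synthetic two-variable series and L = 3 are illustrative assumptions; the study uses many SST predictors and 12-14 levels.

```python
# Fit x_t ~ sum_{l=1..L} A_l x_{t-l} by least squares, then forecast recursively.
import numpy as np

rng = np.random.default_rng(3)
T, k, L = 600, 2, 3
x = np.zeros((T, k))
for t in range(1, T):                        # synthetic coupled AR process
    x[t] = 0.8 * x[t - 1] + 0.1 * x[t - 1][::-1] + rng.normal(scale=0.3, size=k)

# Stack lagged states into a design matrix (columns ordered lag 1, 2, ..., L).
Y = x[L:]
X = np.hstack([x[L - l: T - l] for l in range(1, L + 1)])    # shape (T-L, k*L)
A, *_ = np.linalg.lstsq(X, Y, rcond=None)                    # shape (k*L, k)

def forecast(history, steps):
    """Advance the fitted VAR(L) one month at a time for `steps` months."""
    h = list(history[-L:])
    for _ in range(steps):
        z = np.hstack([h[-l] for l in range(1, L + 1)])      # same lag ordering as X
        h.append(z @ A)
    return np.array(h[L:])

print(forecast(x, steps=6))   # a 6-month forecast from the end of the record
```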

  2. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  3. Gas explosion prediction using CFD models

    Energy Technology Data Exchange (ETDEWEB)

    Niemann-Delius, C.; Okafor, E. [RWTH Aachen Univ. (Germany); Buhrow, C. [TU Bergakademie Freiberg Univ. (Germany)

    2006-07-15

    A number of CFD models are currently available to model gaseous explosions in complex geometries. Some of these tools allow the representation of complex environments within hydrocarbon production plants. In certain explosion scenarios, a correction is usually made for the presence of buildings and other complexities by using crude approximations to obtain realistic estimates of explosion behaviour as can be found when predicting the strength of blast waves resulting from initial explosions. With the advance of computational technology, and greater availability of computing power, computational fluid dynamics (CFD) tools are becoming increasingly available for solving such a wide range of explosion problems. A CFD-based explosion code - FLACS can, for instance, be confidently used to understand the impact of blast overpressures in a plant environment consisting of obstacles such as buildings, structures, and pipes. With its porosity concept representing geometry details smaller than the grid, FLACS can represent geometry well, even when using coarse grid resolutions. The performance of FLACS has been evaluated using a wide range of field data. In the present paper, the concept of computational fluid dynamics (CFD) and its application to gas explosion prediction is presented. Furthermore, the predictive capabilities of CFD-based gaseous explosion simulators are demonstrated using FLACS. Details about the FLACS-code, some extensions made to FLACS, model validation exercises, application, and some results from blast load prediction within an industrial facility are presented. (orig.)

  4. Genetic models of homosexuality: generating testable predictions.

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-12-22

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.

  5. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. A Study On Distributed Model Predictive Consensus

    CERN Document Server

    Keviczky, Tamas

    2008-01-01

    We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.

  7. Hypomagnesemia in critically ill cancer patients: a prospective study of predictive factors

    Directory of Open Access Journals (Sweden)

    Deheinzelin D.

    2000-01-01

    Hypomagnesemia is the most common electrolyte disturbance seen upon admission to the intensive care unit (ICU). Reliable predictors of its occurrence are not described. The objective of this prospective study was to determine factors predictive of hypomagnesemia upon admission to the ICU. In a single tertiary cancer center, 226 patients with different diagnoses upon entering were studied. Hypomagnesemia was defined by serum levels <1.5 mg/dl. Demographic data, type of cancer, cause of admission, previous history of arrhythmia, cardiovascular disease, renal failure, drug administration (particularly diuretics, antiarrhythmics, chemotherapy and platinum compounds), previous nutrition intake and presence of hypovolemia were recorded for each patient. Blood was collected for determination of serum magnesium, potassium, sodium, calcium, phosphorus, blood urea nitrogen and creatinine levels. Upon admission, 103 (45.6%) patients had hypomagnesemia and 123 (54.4%) had normomagnesemia. A normal dietary habit prior to ICU admission was associated with normal Mg levels (P = 0.007) and higher average levels of serum Mg (P = 0.002). Postoperative patients (N = 182) had lower levels of serum Mg (0.60 ± 0.14 mmol/l compared with 0.66 ± 0.17 mmol/l, P = 0.006). A stepwise multiple linear regression disclosed that only normal dietary habits (OR = 0.45; CI = 0.26-0.79) and the fact of being a postoperative patient (OR = 2.42; CI = 1.17-4.98) were significantly correlated with serum Mg levels (overall model probability = 0.001). These findings should be used to identify patients at risk for such disturbance, even in other critically ill populations.

  8. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    R. G. SILVA

    1999-03-01

    Full Text Available A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation, and along with the algebraic model equations are included as constraints in a nonlinear programming (NLP problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.

  9. Dynamical critical behavior of the Ziff-Gulari-Barshad model with quenched impurities

    Science.gov (United States)

    de Andrade, M. F.; Figueiredo, W.

    2016-08-01

    The simplest model to explain the CO oxidation in some catalytic processes is the Ziff-Gulari-Barshad (ZGB) model. It predicts a continuous phase transition between an active phase and an absorbing phase composed of O atoms. By employing Monte Carlo simulations we investigate the dynamical critical behavior of the model as a function of the concentration of fixed impurities over the catalytic surface. By means of an epidemic analysis we calculate the critical exponents related to the survival probability P_s(t), the number of empty sites n_v(t), and the mean square displacement R^2(t). We show that the critical exponents depend on the concentration of impurities over the lattice, even for small values of this quantity. We also show that the exponents do not belong to the Directed Percolation universality class and are in agreement with the Harris criterion since the quenched impurities behave as a weak disorder in the system.

  10. A Monte-Carlo study for the critical exponents of the three-dimensional O(6) model

    Science.gov (United States)

    Loison, D.

    1999-09-01

    Using Wolff's single-cluster Monte-Carlo update algorithm, the three-dimensional O(6)-Heisenberg model on a simple cubic lattice is simulated. With the help of finite size scaling we compute the critical exponents ν, β, γ and η. Our results agree with the field-theory predictions but not so well with the prediction of the series expansions.

  11. Modeling and Prediction of Hot Deformation Flow Curves

    Science.gov (United States)

    Mirzadeh, Hamed; Cabrera, Jose Maria; Najafizadeh, Abbas

    2012-01-01

    The modeling of hot flow stress and prediction of flow curves for unseen deformation conditions are important in metal-forming processes because any feasible mathematical simulation needs an accurate flow description. In the current work, in an attempt to summarize, generalize, and introduce efficient methods, the dynamic recrystallization (DRX) flow curves of a 17-4 PH martensitic precipitation hardening stainless steel, a medium carbon microalloyed steel, and a 304 H austenitic stainless steel were modeled and predicted using (1) a hyperbolic sine equation with strain-dependent constants, (2) a developed constitutive equation in a simple normalized stress-normalized strain form and its modified version, and (3) a feed-forward artificial neural network (ANN). These methods were critically discussed, and the ANN technique was found to be the best for modeling the available flow curves; however, the developed constitutive equation showed slightly better performance than the ANN, and significantly better predicted values than the hyperbolic sine equation, in the prediction of flow curves for unseen deformation conditions.

  12. Performance model to predict overall defect density

    Directory of Open Access Journals (Sweden)

    J Venkatesh

    2012-08-01

    Full Text Available Management by metrics is expected of IT service providers as a differentiator. Given a project, its associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases, the actions taken are reactive and come too late in the life cycle. Root cause analysis and corrective actions can then be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt after implementation. How do we proactively predict defect metrics and put a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.

  13. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers of the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  14. Pressure prediction model for compression garment design.

    Science.gov (United States)

    Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q

    2010-01-01

    Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of fabric, applied strain (as a function of reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burn scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garment manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by the compression garments before fabrication; and 2) to propose more design procedures in clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
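
    A minimal numerical sketch of this kind of Laplace-law pressure estimate is given below. It assumes the limb is a cylinder of circumference C, that the fabric tension per unit garment height is T = E·ε·A (with A the fabric cross-sectional area per unit height), and that the interface pressure is P = T/r with r = C/(2π); the numeric values are illustrative, not the fabric data from this study.

```python
import math

def garment_pressure(youngs_modulus_pa, strain, fabric_area_m2_per_m, circumference_m):
    """Estimate interface pressure (Pa) from Laplace's law for a cylindrical limb.

    Assumes tension per unit garment height T = E * strain * A, where A is the
    fabric cross-sectional area per unit height, and limb radius r = C / (2*pi).
    """
    tension = youngs_modulus_pa * strain * fabric_area_m2_per_m   # N per m of garment height
    radius = circumference_m / (2.0 * math.pi)
    return tension / radius                                       # Pa

# Example: 20% reduction factor (strain) with hypothetical fabric properties.
p_pa = garment_pressure(youngs_modulus_pa=2.0e6, strain=0.20,
                        fabric_area_m2_per_m=0.5e-3, circumference_m=0.30)
print(f"predicted pressure: {p_pa:.0f} Pa ({p_pa / 133.322:.1f} mmHg)")
```

    With these assumed numbers the estimate lands around 30 mmHg, i.e. in the usual range of medical compression garments, which is only meant to show that the formula produces plausible magnitudes.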

  15. Sharp critical behavior for pinning model in random correlated environment

    CERN Document Server

    Berger, Quentin

    2011-01-01

    This article investigates the effect for random pinning models of long range power-law decaying correlations in the environment. For a particular type of environment based on a renewal construction, we are able to sharply describe the phase transition from the delocalized phase to the localized one, giving the critical exponent for the (quenched) free-energy, and proving that at the critical point the trajectories are fully delocalized. These results contrast with what happens both for the pure model (i.e. without disorder) and for the widely studied case of i.i.d. disorder, where the relevance or irrelevance of disorder on the critical properties is decided via the so-called Harris Criterion.

  16. Prediction models in in vitro fertilization; where are we? A mini review

    Directory of Open Access Journals (Sweden)

    Laura van Loendersloot

    2014-05-01

    Full Text Available Since the introduction of in vitro fertilization (IVF) in 1978, over five million babies have been born worldwide using IVF. Contrary to the perception of many, IVF does not guarantee success. Almost 50% of couples that start IVF will remain childless, even if they undergo multiple IVF cycles. The decision to start or continue IVF is challenging due to the high cost, the burden of the treatment, and the uncertain outcome. Prediction models may play a role in optimal counseling on the chances of pregnancy with IVF, since doctors are not able to correctly predict pregnancy chances. There are three phases of prediction model development: model derivation, model validation, and impact analysis. This review provides an overview of predictive factors in IVF and of the available prediction models in IVF, and provides key principles that can be used to critically appraise the literature on prediction models in IVF. We address these points along the three phases of model development.

  17. Statistical assessment of predictive modeling uncertainty

    Science.gov (United States)

    Barzaghi, Riccardo; Marotta, Anna Maria

    2017-04-01

    When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto- and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities, taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values that have better statistical significance and may allow a sharper identification of the best-fitting geophysical models.
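
    The core of such a test can be sketched in a few lines: the residual between observed and predicted quantities is weighted by the inverse of the summed data and model covariance matrices. The arrays below are synthetic placeholders, not the Mediterranean GPS or finite element results.

```python
import numpy as np

# Chi-squared test that accounts for both the data covariance C_d (e.g., GPS
# velocity errors) and the model covariance C_m estimated from empirical
# auto/cross-covariances of the model-predicted quantities. Values are illustrative.

rng = np.random.default_rng(0)
n = 6
observed = rng.normal(size=n)                       # e.g., observed strain/velocity components
predicted = observed + 0.1 * rng.normal(size=n)     # model prediction of the same quantities

C_d = 0.05 * np.eye(n)                              # data covariance
C_m = 0.03 * np.eye(n) + 0.01                       # model covariance (with off-diagonal terms)

residual = observed - predicted
chi2_full = residual @ np.linalg.solve(C_d + C_m, residual)
chi2_data = residual @ np.linalg.solve(C_d, residual)
print("chi-squared, data + model covariance:", round(chi2_full, 3))
print("chi-squared, data covariance only:   ", round(chi2_data, 3))
```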

  18. Seasonal Predictability in a Model Atmosphere.

    Science.gov (United States)

    Lin, Hai

    2001-07-01

    The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.

  19. An Energy-Critical Plane Based Fatigue Damage Approach for the Life Prediction of Metal Alloys

    Science.gov (United States)

    Pitatzis, N.; Savaidis, G.

    2016-11-01

    This paper presents a new energy-critical plane based fatigue damage approach for the assessment of the fatigue life under uniaxial and multiaxial, proportional and non-proportional fatigue loading. The proposed approximate method, based on Farahani's multiaxial fatigue damage model, takes into account the critical plane orientations during a loading cycle and the values of the respective damage parameters on them. The uniqueness of the proposed method lies in the fact that it considers a weighted contribution of each critical plane orientation to the material damage. The relative weighting factors depend on the declination of each plane orientation with respect to the critical plane on which the damage parameters exhibit their maximum values during a fatigue loading cycle. Herein, several low-, mid- and high-cycle fatigue loading cases are investigated. The induced elastic-plastic stress-strain states are approximated by means of respective finite element analyses (FEA). Several experimental fatigue data sets derived from uniaxial and multiaxial fatigue tests on StE460 steel alloy thin-walled hourglass-type specimens have been used to verify the model's calculation accuracy. Comparison of experimental and calculated fatigue lives confirms remarkable fatigue life calculation accuracy in all cases examined.

  20. The Impact of Macro-and Micronutrients on Predicting Outcomes of Critically Ill Patients Requiring Continuous Renal Replacement Therapy.

    Directory of Open Access Journals (Sweden)

    Kittrawee Kritmetapak

    Full Text Available Critically ill patients with acute kidney injury (AKI) who receive renal replacement therapy (RRT) have a very high mortality rate. During RRT, there is marked loss of macro- and micronutrients, which may cause malnutrition and result in impaired renal recovery and patient survival. We aimed to examine the predictive role of macro- and micronutrients on survival and renal outcomes in critically ill patients undergoing continuous RRT (CRRT). This prospective observational study enrolled critically ill patients requiring CRRT at the Intensive Care Unit of King Chulalongkorn Memorial Hospital from November 2012 until November 2013. The serum, urine, and effluent fluid were serially collected on the first three days to calculate protein metabolism, including dietary protein intake (DPI), nitrogen balance, and normalized protein catabolic rate (nPCR). Serum zinc, selenium, and copper were measured for micronutrient analysis on the first three days of CRRT. Survivors were defined as being alive on day 28 after initiation of CRRT. Dialysis status on day 28 was also determined. Of the 70 critically ill patients requiring CRRT, 27 patients (37.5%) survived on day 28. The DPI and serum albumin of survivors were significantly higher than those of non-survivors (0.8 ± 0.2 vs 0.5 ± 0.3 g/kg/day, p = 0.001, and 3.2 ± 0.5 vs 2.9 ± 0.5 g/dL, p = 0.03, respectively), while other markers were comparable. The DPI alone predicted patient survival with an area under the curve (AUC) of 0.69. A combined clinical model predicted survival with an AUC of 0.78. When adjusted for differences in albumin level, clinical severity scores (APACHE II and SOFA), and serum creatinine at initiation of CRRT, DPI still independently predicted survival (odds ratio 4.62, p = 0.009). The serum levels of micronutrients in both groups were comparable and unaltered following CRRT. Regarding renal outcome, patients in the dialysis-independent group had higher serum albumin levels than the dialysis-dependent group, p = 0

  1. Should we believe model predictions of future climate change? (Invited)

    Science.gov (United States)

    Knutti, R.

    2009-12-01

    for an effect to be real, but some features of the current models are perfectly robust yet known to be wrong. How much we can actually learn from more models of the same type is therefore an open question. A case is made here that the community must think harder on how to quantify the uncertainty and skill of their models, that making the models ever more complicated and expensive to run is unlikely to reduce uncertainties in predictions unless new data is used to constrain and calibrate the models, and that the demand for predictions and the data produced by the models is likely to quickly outgrow our capacity to understand the model and to analyze the results. More quantitative methods to quantify model performance are therefore critical to maximize the value of climate change projections from global climate models.

  2. Behavior of Early Warnings near the Critical Temperature in the Two-Dimensional Ising Model

    Science.gov (United States)

    Morales, Irving O.; Landa, Emmanuel; Angeles, Carlos Calderon; Toledo, Juan C.; Rivera, Ana Leonor; Temis, Joel Mendoza; Frank, Alejandro

    2015-01-01

    Among the properties that are common to complex systems, the presence of critical thresholds in the dynamics of the system is one of the most important. Recently, there has been interest in the universalities that occur in the behavior of systems near critical points. These universal properties make it possible to estimate how far a system is from a critical threshold. Several early-warning signals have been reported in time series representing systems near catastrophic shifts. The proper understanding of these early-warnings may allow the prediction and perhaps control of these dramatic shifts in a wide variety of systems. In this paper we analyze this universal behavior for a system that is a paradigm of phase transitions, the Ising model. We study the behavior of the early-warning signals and the way the temporal correlations of the system increase when the system is near the critical point. PMID:26103513

  3. Behavior of Early Warnings near the Critical Temperature in the Two-Dimensional Ising Model.

    Directory of Open Access Journals (Sweden)

    Irving O Morales

    Full Text Available Among the properties that are common to complex systems, the presence of critical thresholds in the dynamics of the system is one of the most important. Recently, there has been interest in the universalities that occur in the behavior of systems near critical points. These universal properties make it possible to estimate how far a system is from a critical threshold. Several early-warning signals have been reported in time series representing systems near catastrophic shifts. The proper understanding of these early-warnings may allow the prediction and perhaps control of these dramatic shifts in a wide variety of systems. In this paper we analyze this universal behavior for a system that is a paradigm of phase transitions, the Ising model. We study the behavior of the early-warning signals and the way the temporal correlations of the system increase when the system is near the critical point.

  4. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    Science.gov (United States)

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-01-01

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.

  5. An improved mechanistic critical heat flux model for subcooled flow boiling

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    Based on bubble coalescence adjacent to the heated wall as the flow structure for the CHF condition, Chang and Lee developed a mechanistic critical heat flux (CHF) model for subcooled flow boiling. In this paper, improvements of the Chang-Lee model are implemented with more solid theoretical bases for subcooled and low-quality flow boiling in tubes. Nedderman-Shearer's equations for the skin friction factor and universal velocity profile models are employed. The slip effect of the movable bubbly layer is implemented to improve the predictability at low mass flow. Also, a mechanistic subcooled flow boiling model is used to predict the flow quality and void fraction. The performance of the present model is verified using the KAIST CHF database of water in uniformly heated tubes. It is found that the present model gives satisfactory agreement with experimental data, within less than 9% RMS error. 9 refs., 5 figs. (Author)

  6. A kinetic model for predicting biodegradation.

    Science.gov (United States)

    Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O

    2007-01-01

    Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10 day window' criterion as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives and simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.
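
    A much-reduced sketch of the first-order part of such a model is shown below: a single rate constant gives the extent of degradation over time, a half-life, and a simplified pass/fail check in the spirit of the '10-day window' criterion. The rate constant and the 10%/60% thresholds are illustrative assumptions, not output of the CATABOL-based model.

```python
import math

# First-order kinetics for the extent of ultimate biodegradation: a toy
# illustration of how a rate constant translates into half-lives and a
# simplified 10-day-window style check.

def extent(t_days, k_per_day):
    """Fraction ultimately degraded after t days, assuming first-order kinetics."""
    return 1.0 - math.exp(-k_per_day * t_days)

def half_life(k_per_day):
    return math.log(2.0) / k_per_day

k = 0.12                                   # hypothetical first-order rate constant (1/day)
t10 = -math.log(1.0 - 0.10) / k            # time at which 10% degradation is reached
passes = extent(t10 + 10.0, k) >= 0.60     # simplified 10-day-window check (60% pass level)

print(f"half-life: {half_life(k):.1f} days")
print(f"extent at 28 days: {extent(28, k):.0%}")
print(f">=60% within 10 days of reaching 10%: {passes}")
```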

  7. Interpretive and Critical Phenomenological Crime Studies: A Model Design

    Science.gov (United States)

    Miner-Romanoff, Karen

    2012-01-01

    The critical and interpretive phenomenological approach is underutilized in the study of crime. This commentary describes this approach, guided by the question, "Why are interpretive phenomenological methods appropriate for qualitative research in criminology?" Therefore, the purpose of this paper is to describe a model of the interpretive…

  8. Evolutionary Population Synthesis Models of Primeval Galaxies a Critical Appraisal

    CERN Document Server

    Buzzoni, A

    1997-01-01

    A theoretical approach relying on evolutionary population synthesis models could help refine the search criteria in deep galaxy surveys on the basis of a better knowledge of the expected apparent photometric properties of high-redshift objects. The following is a brief discussion reviewing some relevant aspects of the question in order to allow a more critical appraisal of primeval galaxy recognition.

  9. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  10. Critical review of wind tunnel modeling of atmospheric heat dissipation

    Energy Technology Data Exchange (ETDEWEB)

    Orgill, M.M.

    1977-05-01

    There is increasing concern by scientists that future proposed energy or power parks may significantly affect the environment by releasing large quantities of heat and water vapor to the atmosphere. A critical review is presented of the potential application of physical modeling (wind tunnels) to assess possible atmospheric effects from heat dissipation systems such as cooling towers. A short inventory of low-speed wind tunnel facilities is included in the review. The useful roles of wind tunnels are assessed and the state-of-the-art of physical modeling is briefly reviewed. Similarity criteria are summarized and present limitations in satisfying these criteria are considered. Current physical models are defined and limitations are discussed. Three experimental problems are discussed in which physical modeling may be able to provide data. These are: defining the critical atmospheric heat load; topographic and local circulation effects on thermal plumes; and plume rise and downstream effects.

  11. Critical analysis of the successes and failures of homology models of G protein-coupled receptors.

    Science.gov (United States)

    Bhattacharya, Supriyo; Lam, Alfonso Ramon; Li, Hubert; Balaraman, Gouthaman; Niesen, Michiel Jacobus Maria; Vaidehi, Nagarajan

    2013-05-01

    We present a critical assessment of the performance of our homology model refinement method for G protein-coupled receptors (GPCRs), called LITICon that led to top ranking structures in a recent structure prediction assessment GPCRDOCK2010. GPCRs form the largest class of drug targets for which only a few crystal structures are currently available. Therefore, accurate homology models are essential for drug design in these receptors. We submitted five models each for human chemokine CXCR4 (bound to small molecule IT1t and peptide CVX15) and dopamine D3DR (bound to small molecule eticlopride) before the crystal structures were published. Our models in both CXCR4/IT1t and D3/eticlopride assessments were ranked first and second, respectively, by ligand RMSD to the crystal structures. For both receptors, we developed two types of protein models: homology models based on known GPCR crystal structures, and ab initio models based on the prediction method MembStruk. The homology-based models compared better to the crystal structures than the ab initio models. However, a robust refinement procedure for obtaining high accuracy structures is needed. We demonstrate that optimization of the helical tilt, rotation, and translation is vital for GPCR homology model refinement. As a proof of concept, our in-house refinement program LITiCon captured the distinct orientation of TM2 in CXCR4, which differs from that of adrenoreceptors. These findings would be critical for refining GPCR homology models in future. Copyright © 2012 Wiley Periodicals, Inc.

  12. Distributed Measuring System for Predictive Diagnosis of Uninterruptible Power Supplies in Safety-Critical Applications

    Directory of Open Access Journals (Sweden)

    Sergio Saponara

    2016-04-01

    Full Text Available This work proposes a scalable architecture of an Uninterruptible Power Supply (UPS) system, with predictive diagnosis capabilities, for safety-critical applications. A Failure Mode and Effect Analysis (FMEA) has identified the faults occurring in the energy storage unit, based on Valve Regulated Lead-Acid batteries, and in the 3-phase high power transformers, used in switching converters and for power isolation, as the main bottlenecks for power system reliability. To address these issues, a distributed network of measuring nodes is proposed, where vibration-based mechanical stress diagnosis is implemented together with electrical (voltage, current, impedance) and thermal degradation analysis. Power system degradation is tracked through multi-channel measuring nodes with integrated digital signal processing in the transformed frequency domain, from 0.1 Hz to 1 kHz. Experimental measurements on real power systems for safety-critical applications validate the diagnostic unit.

  13. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  14. Prediction of the critical buckling load of multi-walled carbon nanotubes under axial compression

    Science.gov (United States)

    Timesli, Abdelaziz; Braikat, Bouazza; Jamal, Mohammad; Damil, Noureddine

    2017-02-01

    In this paper, we propose a new explicit analytical formula of the critical buckling load of double-walled carbon nanotubes (DWCNT) under axial compression. This formula takes into account van der Waals interactions between adjacent tubes and the effect of terms involving tube radii differences generally neglected in the derived expressions of the critical buckling load published in the literature. The elastic multiple Donnell shells continuum approach is employed for modelling the multi-walled carbon nanotubes. The validation of the proposed formula is made by comparison with a numerical solution. The influence of the neglected terms is also studied.

  15. Critical domain-wall dynamics of model B.

    Science.gov (United States)

    Dong, R H; Zheng, B; Zhou, N J

    2009-05-01

    With Monte Carlo methods, we simulate the critical domain-wall dynamics of model B, taking the two-dimensional Ising model as an example. In the macroscopic short-time regime, a dynamic scaling form is revealed. Due to the existence of the quasirandom walkers, the magnetization shows intrinsic dependence on the lattice size L. An exponent which governs the L dependence of the magnetization is measured to be σ = 0.243(8).

  16. Critical phenomena of nuclear matter in the extended Zimanyi-Moszkowski model

    CERN Document Server

    Miyazaki, K

    2005-01-01

    We have studied the thermodynamics of warm nuclear matter below the saturation density in the extended Zimanyi-Moszkowski model. The EOS behaves like a van der Waals one and shows the liquid-gas phase transition, as do the other microscopic EOSs. It predicts a critical temperature T_C = 16.36 MeV that agrees well with its empirical value. We have further calculated the phase coexistence curve and obtained the critical exponents beta = 0.34 and gamma = 1.22, which also agree with their universal values and with empirical values derived from recent experimental efforts.

  17. Self-organized Criticality Model for Ocean Internal Waves

    Institute of Scientific and Technical Information of China (English)

    WANG Gang; LIN Min; QIAO Fang-Li; HOU Yi-Jun

    2009-01-01

    In this paper, we present a simple spring-block model for ocean internal waves based on self-organized criticality (SOC). The oscillations of the water blocks in the model display power-law behavior with an exponent of -2 in the frequency domain, which is similar to the current and sea water temperature spectra in the actual ocean and the universal Garrett and Munk deep ocean internal wave model [Geophysical Fluid Dynamics 2 (1972) 225; J. Geophys. Res. 80 (1975) 291]. The influence of the ratio of the driving force to the spring coefficient on the SOC behavior of the model is also discussed.

  18. Probabilistic prediction models for aggregate quarry siting

    Science.gov (United States)

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

    Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, the Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
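
    The logistic-regression half of such a prospectivity analysis can be illustrated with the short sketch below, in which synthetic evidence layers (distance to roads, population density, and a geology flag, all hypothetical stand-ins rather than the study's variables) are used to fit a model whose predicted probabilities serve as a prospectivity score.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic evidence layers sampled at candidate grid cells; the feature names
# and coefficients are illustrative assumptions, not the study's data.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.uniform(0, 50, n),      # distance to nearest major road (km)
    rng.uniform(0, 500, n),     # population density (people/km^2)
    rng.integers(0, 2, n),      # favorable bedrock geology present (0/1)
])
# Synthetic labels: quarries favored near roads, at higher density, on favorable rock.
logit = -1.0 - 0.08 * X[:, 0] + 0.004 * X[:, 1] + 1.5 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
prospectivity = model.predict_proba(X)[:, 1]            # spatial likelihood per cell
print("coefficients:", model.coef_.round(3), "intercept:", model.intercept_.round(3))
print("five most prospective cells:", np.argsort(prospectivity)[-5:])
```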

  19. Predicting Footbridge Response using Stochastic Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2013-01-01

    Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adapt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing s...... as it pinpoints which decisions to be concerned about when the goal is to predict footbridge response. The studies involve estimating footbridge responses using Monte-Carlo simulations and focus is on estimating vertical structural response to single person loading....
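
    A minimal Monte-Carlo sketch in the spirit of this approach is shown below: walking parameters are drawn from assumed distributions and propagated through a single-degree-of-freedom modal model of the footbridge to give a distribution of vertical accelerations. The bridge properties, parameter distributions, and the SDOF response formula are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Single-person vertical loading of one footbridge mode with stochastic step
# frequency, pedestrian mass and dynamic load factor, evaluated through an
# SDOF acceleration frequency-response function. All values are assumed.

rng = np.random.default_rng(42)
n_sim = 10_000
f_mode, zeta, m_modal = 2.0, 0.005, 40_000.0     # modal frequency (Hz), damping ratio, modal mass (kg)

f_step = rng.normal(1.99, 0.17, n_sim)           # step frequency (Hz)
m_ped = rng.normal(75.0, 15.0, n_sim)            # pedestrian mass (kg)
dlf = rng.normal(0.40, 0.10, n_sim)              # dynamic load factor of the first harmonic

F0 = dlf * m_ped * 9.81                          # harmonic force amplitude (N)
r = f_step / f_mode                              # frequency ratio
frf = r**2 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)   # acceleration FRF (dimensionless)
accel = F0 * frf / m_modal                       # steady-state acceleration amplitude (m/s^2)

print(f"median acceleration: {np.median(accel):.3f} m/s^2")
print(f"95th percentile:     {np.percentile(accel, 95):.3f} m/s^2")
```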

  20. Nonconvex Model Predictive Control for Commercial Refrigeration

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp

    2013-01-01

    is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...

  1. Critical dynamics of a nonlocal model and critical behavior of perovskite manganites.

    Science.gov (United States)

    Singh, Rohit; Dutta, Kishore; Nandy, Malay K

    2016-05-01

    We investigate the nonconserved critical dynamics of a nonlocal model Hamiltonian incorporating screened long-range interactions in the quartic term. Employing dynamic renormalization group analysis at one-loop order, we calculate the dynamic critical exponent z=2+εf_{1}(σ,κ,n)+O(ε^{2}) and the linewidth exponent w=-σ+εf_{2}(σ,κ,n)+O(ε^{2}) in the leading order of ε, where ε=4-d+2σ, with d the space dimension, n the number of components in the order parameter, and σ and κ the parameters coming from the nonlocal interaction term. The resulting values of linewidth exponent w for a wide range of σ is found to be in good agreement with the existing experimental estimates from spin relaxation measurements in perovskite manganite samples.

  2. The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction.

    Science.gov (United States)

    Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J

    2011-07-01

    The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0 for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.

  3. Predictive In Vivo Models for Oncology.

    Science.gov (United States)

    Behrens, Diana; Rolff, Jana; Hoffmann, Jens

    2016-01-01

    Experimental oncology research and preclinical drug development both substantially require specific, clinically relevant in vitro and in vivo tumor models. The increasing knowledge about the heterogeneity of cancer has required a substantial restructuring of the test systems for the different stages of development. To be able to cope with the complexity of the disease, larger panels of patient-derived tumor models have to be implemented and extensively characterized. Together with individual genetically engineered tumor models and supported by core functions for expression profiling and data analysis, an integrated discovery process has been generated for predictive and personalized drug development. Improved “humanized” mouse models should help to overcome the current limitations given by the xenogeneic barrier between humans and mice. Establishment of a functional human immune system and a corresponding human microenvironment in laboratory animals will strongly support further research. Drug discovery, systems biology, and translational research are moving closer together to address all the new hallmarks of cancer, increase the success rate of drug development, and increase the predictive value of preclinical models.

  4. Constructing predictive models of human running.

    Science.gov (United States)

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  5. Statistical Seasonal Sea Surface based Prediction Model

    Science.gov (United States)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

    The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is a topic widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of different oceanic, atmospheric and health-related parameters influenced by the temperature of the sea surface as a defining factor of variability.
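
    The MCA step that underlies this kind of statistical forecast can be sketched as an SVD of the cross-covariance between the predictor and predictand anomaly fields, as below. The fields here are synthetic and the dimensions arbitrary; this is not the S4CAST code.

```python
import numpy as np

# Maximum Covariance Analysis (MCA) toy example: the SVD of the cross-covariance
# between an SST anomaly field and a rainfall anomaly field identifies coupled
# modes whose expansion coefficients can be used as statistical predictors.

rng = np.random.default_rng(0)
n_time, n_sst, n_rain = 40, 300, 120                # years, SST grid points, rainfall grid points

# Synthetic anomalies sharing one coupled mode plus noise.
signal = rng.normal(size=n_time)
sst = np.outer(signal, rng.normal(size=n_sst)) + rng.normal(size=(n_time, n_sst))
rain = np.outer(signal, rng.normal(size=n_rain)) + rng.normal(size=(n_time, n_rain))

sst -= sst.mean(axis=0)                              # remove the time mean (climatology)
rain -= rain.mean(axis=0)

C = sst.T @ rain / (n_time - 1)                      # cross-covariance matrix
U, s, Vt = np.linalg.svd(C, full_matrices=False)

frac = s[0] ** 2 / np.sum(s ** 2)                    # squared covariance fraction of mode 1
sst_series = sst @ U[:, 0]                           # expansion coefficients (time series)
rain_series = rain @ Vt[0, :]
print(f"mode 1 squared covariance fraction: {frac:.2f}")
print(f"correlation of coupled time series: {np.corrcoef(sst_series, rain_series)[0, 1]:.2f}")
```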

  6. Nursing practice models for acute and critical care: overview of care delivery models.

    Science.gov (United States)

    Shirey, Maria R

    2008-12-01

    This article provides a historical overview of nursing models of care for acute and critical care based on currently available literature. Models of care are defined and their advantages and disadvantages presented. The distinctive differences between care delivery models and professional practice models are explained. The historical overview of care delivery models provides a foundation for the introduction of best practice models that will shape the environment for acute and critical care in the future.

  7. Critical Points in Nuclei and Interacting Boson Model Intrinsic States

    CERN Document Server

    Ginocchio, J N; Ginocchio, Joseph N.

    2003-01-01

    We consider properties of critical points in the interacting boson model, corresponding to flat-bottomed potentials as encountered in a second-order phase transition between spherical and deformed $\gamma$-unstable nuclei. We show that intrinsic states with an effective $\beta$-deformation reproduce the dynamics of the underlying non-rigid shapes. The effective deformation can be determined from the global minimum of the energy surface after projection onto the appropriate symmetry. States of fixed $N$ and good O(5) symmetry projected from these intrinsic states provide good analytic estimates to the exact eigenstates, energies and quadrupole transition rates at the critical point.

  8. Critical behavior of the Schwinger model with Wilson fermions

    CERN Document Server

    Azcoiti, V; Galante, A; Grillo, A F; Laliena, V

    1996-01-01

    We present a detailed analysis, in the framework of the MFA approach, of the critical behaviour of the lattice Schwinger model with Wilson fermions on lattices up to 24^2, through the study of the Lee-Yang zeros and the specific heat. We find compelling evidence for a critical line ending at \kappa = 0.25 at large \beta. Finite size scaling analysis on lattices 8^2, 12^2, 16^2, 20^2 and 24^2 indicates a continuous transition. The hyperscaling relation is verified in the explored \beta region.

  9. Surface critical behavior of the smoothly inhomogeneous Ising model

    Science.gov (United States)

    Burkhardt, Theodore W.; Guim, Ihnsouk

    1984-01-01

    We consider a semi-infinite two-dimensional Ising model with nearest-neighbor coupling constants that deviate from the bulk coupling by A m^{-y} for large m, m being the distance from the edge. The case A < 0 was considered by Hilhorst and van Leeuwen. We report exact results for the boundary magnetization and boundary pair-correlation function when A > 0. At the bulk critical temperature there is a rich variety of critical behavior in the A-y plane, with both paramagnetic and ferromagnetic surface phases. Some of our results can be derived and generalized with simple scaling arguments.

  10. A Neuronal Model of Predictive Coding Accounting for the Mismatch Negativity

    OpenAIRE

    Wacongne, Catherine; Changeux, Jean-Pierre; Dehaene, Stanislas

    2012-01-01

    The mismatch negativity (MMN) is thought to index the activation of specialized neural networks for active prediction and deviance detection. However, a detailed neuronal model of the neurobiological mechanisms underlying the MMN is still lacking, and its computational foundations remain debated. We propose here a detailed neuronal model of auditory cortex, based on predictive coding, that accounts for the critical features of MMN. The model is entirely composed of spi...

  11. Predicting chaotic time series with a partial model.

    Science.gov (United States)

    Hamilton, Franz; Berry, Tyrus; Sauer, Timothy

    2015-07-01

    Methods for forecasting time series are a critical aspect of the understanding and control of complex networks. When the model of the network is unknown, nonparametric methods for prediction have been developed, based on concepts of attractor reconstruction pioneered by Takens and others. In this Rapid Communication we consider how to make use of a subset of the system equations, if they are known, to improve the predictive capability of forecasting methods. A counterintuitive implication of the results is that knowledge of the evolution equation of even one variable, if known, can improve forecasting of all variables. The method is illustrated on data from the Lorenz attractor and from a small network with chaotic dynamics.

  12. A predictive standard model for heavy electron systems

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yifeng [Los Alamos National Laboratory; Curro, N J [UC DAVIS; Fisk, Z [UC DAVIS; Pines, D [UC DAVIS

    2010-01-01

    We propose a predictive standard model for heavy electron systems based on a detailed phenomenological two-fluid description of existing experimental data. It leads to a new phase diagram that replaces the Doniach picture, describes the emergent anomalous scaling behavior of the heavy electron (Kondo) liquid measured below the lattice coherence temperature, T*, seen by many different experimental probes, that marks the onset of collective hybridization, and enables one to obtain important information on quantum criticality and the superconducting/antiferromagnetic states at low temperatures. Because T* is approximately J^2 ρ/2, the nearest-neighbor RKKY interaction, a knowledge of the single-ion Kondo coupling, J, to the background conduction electron density of states, ρ, makes it possible to predict Kondo liquid behavior, and to estimate its maximum superconducting transition temperature in both existing and newly discovered heavy electron families.

  13. Predictive modeling by the cerebellum improves proprioception.

    Science.gov (United States)

    Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J

    2013-09-04

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.

  14. Controlling self-organized criticality in sandpile models

    CERN Document Server

    Cajueiro, Daniel O

    2013-01-01

    We introduce an external control to reduce the size of avalanches in some sandpile models exhibiting self-organized criticality. This rather intuitive approach seems to be missing in the vast literature on such systems. The control action, which amounts to triggering avalanches at sites that are close to becoming critical, reduces the probability of very large events, so that energy dissipation occurs mostly locally. The control is applied to a directed Abelian sandpile model driven by both uncorrelated and correlated deposition. The latter is essential to design an efficient and simple control heuristic, but has only a small influence on the uncontrolled avalanche probability distribution. The proposed control seeks a tradeoff between control cost and large-event risk. Preliminary results hint that the proposed control works also for an undirected sandpile model.
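
    A toy version of the control idea is sketched below on a small undirected Abelian sandpile: alongside the usual random deposition, the controller occasionally drops a grain on a site sitting just below the toppling threshold, forcing small avalanches early. The grid size, thresholds, and control rule are assumptions chosen for illustration, and the toy is not expected to reproduce the paper's quantitative results.

```python
import numpy as np

# Undirected Abelian (BTW-style) sandpile with an optional control that
# pre-emptively triggers near-critical sites. Purely illustrative parameters.

rng = np.random.default_rng(3)
L, H_CRIT, STEPS = 20, 4, 10_000

def relax(z):
    """Topple unstable sites until none remain; return the avalanche size (topplings)."""
    size = 0
    while True:
        unstable = np.argwhere(z >= H_CRIT)
        if len(unstable) == 0:
            return size
        for i, j in unstable:
            z[i, j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:
                    z[ni, nj] += 1           # grains leaving the grid are dissipated

def run(controlled):
    z = np.zeros((L, L), dtype=int)
    sizes = []
    for _ in range(STEPS):
        if controlled:
            near = np.argwhere(z == H_CRIT - 1)      # sites close to becoming critical
            if len(near):
                i, j = near[rng.integers(len(near))]
                z[i, j] += 1                          # control action: trigger it now
                relax(z)
        i, j = rng.integers(L), rng.integers(L)       # random (uncorrelated) deposition
        z[i, j] += 1
        sizes.append(relax(z))
    return np.array(sizes)

for flag in (False, True):
    s = run(flag)
    print(f"controlled={flag}: mean avalanche {s.mean():.2f}, largest avalanche {s.max()}")
```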

  15. Magnetic critical behavior of the Ising model on fractal structures

    Science.gov (United States)

    Monceau, Pascal; Perreau, Michel; Hébert, Frédéric

    1998-09-01

    The critical temperature and the set of critical exponents (β,γ,ν) of the Ising model on a fractal structure, namely the Sierpiński carpet, are calculated from a Monte Carlo simulation based on the Wolff algorithm together with the histogram method and finite-size scaling. Both cases of periodic boundary conditions and free edges are investigated. The calculations have been done up to the seventh iteration step of the fractal structure. The results show that, although the structure is not translationally invariant, the scaling behavior of thermodynamical quantities is conserved, which gives a meaning to the finite-size analysis. Although some discrepancies in the values of the critical exponents occur between periodic boundary conditions and free edges, the effective dimension obtained through the Rushbrooke and Josephson scaling laws has the same value in both cases. This value is slightly but significantly different from the fractal dimension.

  16. Prediction of critical heat flux in fuel assemblies using a CHF table method

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Tae Hyun; Hwang, Dae Hyun; Bang, Je Geon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Baek, Won Pil; Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    A CHF table method has been assessed in this study for rod bundle CHF predictions. At the conceptual design stage for a new reactor, a general critical heat flux (CHF) prediction method with a wide applicable range and reasonable accuracy is essential to the thermal-hydraulic design and safety analysis. In many aspects, a CHF table method (i.e., the use of a round tube CHF table with appropriate bundle correction factors) can be a promising way to fulfill this need. So the assessment of the CHF table method has been performed with the bundle CHF data relevant to pressurized water reactors (PWRs). For comparison purposes, W-3R and EPRI-1 were also applied to the same data base. Data analysis has been conducted with the subchannel code COBRA-IV-I. The CHF table method shows the best predictions based on the direct substitution method. Improvements of the bundle correction factors, especially for the spacer grid and cold wall effects, are desirable for better predictions. Though the present assessment is somewhat limited in both fuel geometries and operating conditions, the CHF table method clearly shows potential to be a general CHF predictor. 8 refs., 3 figs., 3 tabs. (Author)

  17. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Full Text Available Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via Polymerase Chain Reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We arrived at a model that was able to accurately predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under the curve of 82.6%. Like other similar studies that have used the RF model, we also had very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see a wider application.
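
    For readers unfamiliar with the method, the sketch below shows how such a Random Forest recurrence model is typically set up and evaluated by cross-validation. The patient records are synthetic and the feature names are hypothetical stand-ins, not the variables or data of this study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic cohort with made-up risk factors; labels are generated from an
# assumed logistic relationship purely so the example runs end to end.
rng = np.random.default_rng(7)
n = 198
X = np.column_stack([
    rng.normal(68, 15, n),        # age (years)
    rng.integers(0, 2, n),        # continued non-CDI antibiotics (0/1)
    rng.integers(0, 2, n),        # proton pump inhibitor use (0/1)
    rng.normal(2.0, 1.0, n),      # comorbidity index (arbitrary scale)
])
risk = 0.03 * (X[:, 0] - 68) + 0.9 * X[:, 1] + 0.6 * X[:, 2] + 0.3 * X[:, 3] - 0.6
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-risk))).astype(int)   # synthetic recurrence labels

clf = RandomForestClassifier(n_estimators=300, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```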

  18. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  19. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the outputs have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
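
    A sketch of the kind of back-propagation regression model described above, using scikit-learn's MLPRegressor on synthetic mix-proportion data, is given below; the input ranges, target relation, and network size are hypothetical.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)

    # Hypothetical inputs: cement, water, fine/coarse aggregate, MAS, slump.
    X = rng.uniform([250, 140, 600, 900, 10, 30],
                    [450, 220, 900, 1200, 25, 150], size=(200, 6))
    w_c = X[:, 1] / X[:, 0]
    # Synthetic 28-day strength (MPa); the w/c ratio dominates by construction.
    y = 80.0 - 60.0 * w_c + 0.02 * X[:, 4] + rng.normal(scale=2.0, size=200)

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1),
    )
    model.fit(X, y)

    abs_err = np.abs(model.predict(X) - y) / np.abs(y) * 100.0
    print(f"share of predictions within 10% error: {(abs_err < 10).mean():.0%}")
    ```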

  20. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is a fundamental element of earthquake hazard assessment. The most commonly used parameters for attenuation relations are peak ground acceleration and spectral acceleration, because these parameters provide useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network started in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and also from neighboring countries. Model estimation is carried out in the classical statistical way, by regression analysis. Site ground conditions are additionally considered in this study, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
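
    The regression step described above can be illustrated with a least-squares fit of a simple attenuation relation; the functional form and coefficients below are illustrative assumptions, not the GMPE developed in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic "recordings": magnitudes, hypocentral distances (km), PGA (g).
    M = rng.uniform(3.5, 6.5, size=300)
    R = rng.uniform(5.0, 150.0, size=300)
    true = 0.8 * M - 1.1 * np.log(R) - 0.004 * R - 3.0   # hypothetical coefficients
    ln_pga = true + rng.normal(scale=0.5, size=300)       # aleatory scatter

    # Illustrative attenuation relation (not the paper's functional form):
    #   ln(PGA) = c0 + c1*M + c2*ln(R) + c3*R
    A = np.column_stack([np.ones_like(M), M, np.log(R), R])
    coef, *_ = np.linalg.lstsq(A, ln_pga, rcond=None)
    sigma = np.std(ln_pga - A @ coef)
    print("c0..c3 =", np.round(coef, 3), " sigma =", round(float(sigma), 3))
    ```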

  1. Modeling and Prediction of Krueger Device Noise

    Science.gov (United States)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise components' respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small scale experimental data, and two applications are discussed: one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.

  2. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Information Surveillance and Reconnaissance (ISR), since they allow an estimation of the regions in the theater of operations where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  3. Capacity Prediction Model Based on Limited Priority Gap-Acceptance Theory at Multilane Roundabouts

    Directory of Open Access Journals (Sweden)

    Zhaowei Qu

    2014-01-01

    Full Text Available Capacity is an important design parameter for roundabouts, and it is a prerequisite for computing their delay and queue length. Roundabout capacity has been studied for decades, and the empirical regression model and the gap-acceptance model are the two main methods used to predict it. Based on gap-acceptance theory, and by considering the effect of limited priority, especially the relationship between the limited-priority factor and the critical gap, a modified model was built to predict roundabout capacity. We then compared the results of Raff's method and the maximum likelihood estimation (MLE) method, and the MLE method was used to estimate the critical gaps. Finally, the capacities predicted by the different models were compared with the capacity observed in field surveys, which verifies the performance of the proposed model.
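
    A minimal sketch of a gap-acceptance capacity calculation is given below; the Harders-type formula, the critical gap and follow-up time values, and the limited-priority multiplier are all illustrative assumptions rather than the calibrated model of the study.

    ```python
    import math

    def entry_capacity_vph(circulating_vph, t_c=4.1, t_f=2.9, limited_priority=1.0):
        """Classical gap-acceptance entry capacity (Harders-type formula).

        circulating_vph  : conflicting circulating flow (veh/h)
        t_c, t_f         : critical gap and follow-up time (s); values here are
                           illustrative, not those estimated in the study
        limited_priority : hypothetical multiplicative factor standing in for
                           the limited-priority correction described above
        """
        q = circulating_vph / 3600.0  # veh/s
        cap_vps = q * math.exp(-q * t_c) / (1.0 - math.exp(-q * t_f))
        return 3600.0 * cap_vps * limited_priority

    for q_c in (300, 600, 900, 1200):
        print(q_c, round(entry_capacity_vph(q_c), 1))
    ```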

  4. Stability of earthquake clustering models: Criticality and branching ratios

    Science.gov (United States)

    Zhuang, Jiancang; Werner, Maximilian J.; Harte, David S.

    2013-12-01

    We study the stability conditions of a class of branching processes prominent in the analysis and modeling of seismicity. This class includes the epidemic-type aftershock sequence (ETAS) model as a special case, but more generally comprises models in which the magnitude distribution of direct offspring depends on the magnitude of the progenitor, such as the branching aftershock sequence (BASS) model and another recently proposed branching model based on a dynamic scaling hypothesis. These stability conditions are closely related to the concepts of the criticality parameter and the branching ratio. The criticality parameter summarizes the asymptotic behavior of the population after sufficiently many generations, determined by the maximum eigenvalue of the transition equations. The branching ratio is defined by the proportion of triggered events in all the events. Based on the results for the generalized case, we show that the branching ratio of the ETAS model is identical to its criticality parameter because its magnitude density is separable from the full intensity. More generally, however, these two values differ and thus place separate conditions on model stability. As an illustration of the difference and of the importance of the stability conditions, we employ a version of the BASS model, reformulated to ensure the possibility of stationarity. In addition, we analyze the magnitude distributions of successive generations of the BASS model via analytical and numerical methods, and find that the compound density differs substantially from a Gutenberg-Richter distribution, unless the process is essentially subcritical (branching ratio less than 1) or the magnitude dependence between the parent event and the direct offspring is weak.
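
    The distinction between productivity and branching ratio can be illustrated numerically for a standard ETAS parameterization; the sketch below uses hypothetical parameter values and the textbook closed form n = Aβ/(β − α), valid for β > α.

    ```python
    import numpy as np

    # Hypothetical ETAS parameters (illustrative, not fitted values):
    A, alpha = 0.5, 1.2   # productivity kappa(m) = A * exp(alpha * (m - m0))
    beta, m0 = 2.3, 3.0   # Gutenberg-Richter density s(m) = beta * exp(-beta*(m - m0))

    m = np.linspace(m0, m0 + 10.0, 20001)
    kappa = A * np.exp(alpha * (m - m0))
    s = beta * np.exp(-beta * (m - m0))

    # Branching ratio = expected number of direct offspring per event,
    # averaged over the magnitude distribution of the parent.
    n_numeric = np.trapz(kappa * s, m)
    n_closed = A * beta / (beta - alpha)   # valid only for beta > alpha
    print(round(n_numeric, 4), round(n_closed, 4),
          "subcritical" if n_numeric < 1 else "supercritical")
    ```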

  6. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a "model supplement term" when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 °C, modeling the decomposition is critical to assessing a weapon's response. In the validation analysis it is indicated that the model tends to "exaggerate" the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model

  7. Predictive modeling of nanomaterial exposure effects in biological systems

    Directory of Open Access Journals (Sweden)

    Liu X

    2013-09-01

    Full Text Available Xiong Liu,1 Kaizhi Tang,1 Stacey Harper,2 Bryan Harper,2 Jeffery A Steevens,3 Roger Xu1 1Intelligent Automation, Inc., Rockville, MD, USA; 2Department of Environmental and Molecular Toxicology, School of Chemical, Biological, and Environmental Engineering, Oregon State University, Corvallis, OR, USA; 3ERDC Environmental Laboratory, Vicksburg, MS, USA Background: Predictive modeling of the biological effects of nanomaterials is critical for industry and policymakers to assess the potential hazards resulting from the application of engineered nanomaterials. Methods: We generated an experimental dataset on the toxic effects experienced by embryonic zebrafish due to exposure to nanomaterials. Several nanomaterials were studied, such as metal nanoparticles, dendrimer, metal oxide, and polymeric materials. The embryonic zebrafish metric (EZ Metric) was used as a screening-level measurement representative of adverse effects. Using the dataset, we developed a data mining approach to model the toxic endpoints and the overall biological impact of nanomaterials. Data mining techniques, such as numerical prediction, can assist analysts in developing risk assessment models for nanomaterials. Results: We found several important attributes that contribute to the 24 hours post-fertilization (hpf) mortality, such as dosage concentration, shell composition, and surface charge. These findings concur with previous studies on nanomaterial toxicity using embryonic zebrafish. We conducted case studies on modeling the overall effect/impact of nanomaterials and the specific toxic endpoints such as mortality, delayed development, and morphological malformations. The results show that we can achieve high prediction accuracy for certain biological effects, such as 24 hpf mortality, 120 hpf mortality, and 120 hpf heart malformation. The results also show that the weighting scheme for individual biological effects has a significant influence on modeling the overall impact of

  8. Modeling of the Critical Micelle Concentration (CMC) of Nonionic Surfactants with an Extended Group-Contribution Method

    DEFF Research Database (Denmark)

    Mattei, Michele; Kontogeorgis, Georgios; Gani, Rafiqul

    2013-01-01

    A group-contribution (GC) property prediction model for estimating the critical micelle concentration (CMC) of nonionic surfactants in water at 25 °C is presented. The model is based on the Marrero and Gani GC method. A systematic analysis of the model performance against experimental data is carried out using data for a wide range of nonionic surfactants covering a wide range of molecular structures. As a result of this procedure, new third order groups based on the characteristic structures of nonionic surfactants are defined and are included in the Marrero and Gani GC model. In this way ...; and carbohydrate derivate ethers, esters, and thiols. The model developed consists of linear group contributions, and the critical micelle concentration is estimated using the molecular structure of the nonionic surfactant alone. Compared to other models used for the prediction of the critical micelle ...
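
    A linear group-contribution estimate is simply a weighted sum of group occurrences. The sketch below uses entirely hypothetical contribution values and a hypothetical molecule, only to illustrate the form of the model:

    ```python
    # Hypothetical first-order group contributions to log10(CMC) (mol/L);
    # these numbers are placeholders, not the fitted Marrero-Gani values.
    contrib = {"CH3": -0.45, "CH2": -0.49, "OCH2CH2": 0.07, "OH": 0.25}

    def log_cmc(groups):
        """Linear group-contribution estimate: log10(CMC) = sum(n_i * c_i)."""
        return sum(n * contrib[g] for g, n in groups.items())

    # Example: a hypothetical C12 alcohol ethoxylate with 8 EO units.
    molecule = {"CH3": 1, "CH2": 11, "OCH2CH2": 8, "OH": 1}
    print(f"estimated log10(CMC) = {log_cmc(molecule):.2f}")
    ```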

  9. Introducing the Literary Critic: The CARS Model in the Introductions of Academic Papers in Literary Criticism

    Directory of Open Access Journals (Sweden)

    Balázs Sánta

    2015-05-01

    Full Text Available Genre analysis as a “meta-study” is a topic that has been deeply investigated in the field of applied linguistics, one of its more specific areas of interest being research article introductions (RAIs). However, there are still certain kinds of scholarly activity that have received relatively little attention in this regard, such as literary criticism. The paper presents and discusses the results of a small-scale study of the introductions of ten academic essays in this field. The paper’s aim is to see how Swales’ (1990) CARS model can be applied to the rhetorical structure of these RAIs. It is found that, with certain modifications necessitated by the structure of the essays in the corpus, the model can be used to analyze critical texts produced by scholars belonging to the latter area. The demonstration of this has significance in fulfilling a perceived need for literary criticism to be considered among those disciplines that are worthy of the attention of applied linguistic research.

  10. Optimal feedback scheduling of model predictive controllers

    Institute of Scientific and Technical Information of China (English)

    Pingfang ZHOU; Jianying XIE; Xiaolong DENG

    2006-01-01

    Model predictive control (MPC) could not be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus obtaining predictability in time. Optimal feedback scheduling (FS-CBS) of a set of MPC tasks is presented to maximize the global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee schedulability of the total task set and stability of each component. The FS-CBS is shown to be robust against the variation of the execution time of MPC tasks at runtime. Simulation results illustrate its effectiveness.

  11. A Quasispecies Continuous Contact Model in a Critical Regime

    Science.gov (United States)

    Kondratiev, Yuri; Pirogov, Sergey; Zhizhina, Elena

    2016-04-01

    We study a new non-equilibrium dynamical model: a marked continuous contact model in d-dimensional space (d ≥ 3). We prove that for certain values of rates (the critical regime) this system has the one-parameter family of invariant measures labelled by the spatial density of particles. Then we prove that the process starting from the marked Poisson measure converges to one of these invariant measures. In contrast with the continuous contact model studied earlier in Kondratiev (Infin Dimens Anal Quantum Probab Relat Top 11(2):231-258, 2008), now the spatial particle density is not a conserved quantity.

  12. Development of a digital reactivity meter for criticality prediction and control rod worth evaluation in pressurized water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kuramoto, Renato Y.R.; Miranda, Anselmo F.; Valladares, Gastao Lommez; Prado, Adelk C. [Eletrobras Termonuclear S.A. - ELETRONUCLEAR, Angra dos Reis, RJ (Brazil). Central Nuclear Almirante Alvaro Alberto], e-mail: kuramot@eletronuclear.gov.br

    2009-07-01

    In this work, we have proposed the development of a digital reactivity meter in order to monitor subcriticality continuously during the criticality approach in a PWR. A subcritical reactivity meter can provide an easy prediction of the estimated critical point prior to reactor criticality, without complicated hand calculation. Moreover, in order to reduce the interval of the Physics Tests from the economical point of view, a subcritical reactivity meter can evaluate the control rod worth from direct subcriticality measurement. In other words, the count rate of the Source Range (SR) detector recorded during the criticality approach could be used for subcriticality evaluation or control rod worth evaluation. Basically, a digital reactivity meter is based on the inverse solution of the kinetic equations of a reactor with an external neutron source in the one-point reactor model. There are some difficulties in the direct application of a digital reactivity meter to the subcriticality measurement. When the Inverse Kinetic method is applied to a sufficiently high power level or to a core without an external neutron source, the neutron source term may be neglected. When applied to a lower power level or in the sub-critical domain, however, the source effects must be taken into account. Furthermore, some treatments are needed when using the count rate of the Source Range (SR) detector as the input signal to the digital reactivity meter. To overcome these difficulties, we have proposed a digital reactivity meter combined with a methodology of the modified Neutron Source Multiplication (NSM) method with correction factors for subcriticality measurements in PWR. (author)
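
    A digital reactivity meter of the kind described rests on inverse point kinetics. The sketch below inverts the one-point kinetics equations for an assumed six-group delayed-neutron data set; the parameters are illustrative, and the external source term (central to the NSM correction discussed above) is set to zero for brevity.

    ```python
    import numpy as np

    # Hypothetical six-group delayed neutron data (illustrative, not plant data).
    beta_i = np.array([2.1e-4, 1.4e-3, 1.3e-3, 2.6e-3, 7.5e-4, 2.7e-4])
    lam_i  = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # 1/s
    beta   = beta_i.sum()
    LAMBDA = 2.0e-5   # prompt neutron generation time (s), hypothetical
    S      = 0.0      # external source term, neglected here for simplicity

    def inverse_kinetics(t, n):
        """Reactivity history from a count-rate/flux history n(t), via the
        inverse one-point kinetics equations."""
        C = beta_i / (LAMBDA * lam_i) * n[0]          # equilibrium precursors
        rho = np.zeros_like(n)
        for k in range(1, len(t)):
            dt = t[k] - t[k - 1]
            n_bar = 0.5 * (n[k] + n[k - 1])
            # propagate precursors with piecewise-constant flux over the step
            C = (C * np.exp(-lam_i * dt)
                 + beta_i / (LAMBDA * lam_i) * n_bar * (1 - np.exp(-lam_i * dt)))
            dndt = (n[k] - n[k - 1]) / dt
            rho[k] = beta + LAMBDA / n[k] * (dndt - lam_i @ C - S)
        return rho

    # Example: a flat power history should give reactivity close to zero.
    t = np.linspace(0.0, 60.0, 601)
    n = np.full_like(t, 1.0e5)
    print(inverse_kinetics(t, n)[-1])
    ```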

  13. Objective calibration of numerical weather prediction models

    Science.gov (United States)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), which has previously been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to implementing the methodology for an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted only to the use of higher resolution and different time scales. The sensitivity of the NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the amount of computing resources required for the calibration of an NWP model. Three free model parameters, affecting mainly the turbulence parameterization schemes, were originally selected with respect to their influence on the variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
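
    The quadratic meta-model idea can be sketched as: sample the free parameters, evaluate a skill score, fit a quadratic response surface, and minimize the surrogate. The code below uses a synthetic error function in place of actual NWP runs; all names and values are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)

    # Stand-in for the expensive NWP skill evaluation: a synthetic error score
    # over two free parameters (the real calibration would run the model).
    def forecast_error(p):
        x, y = p
        return (x - 0.3) ** 2 + 0.5 * (y + 0.2) ** 2 + 0.1 * x * y

    # 1) sample the parameter space and "run" the model at each design point
    design = rng.uniform(-1.0, 1.0, size=(30, 2))
    scores = np.array([forecast_error(p) + rng.normal(scale=0.01) for p in design])

    # 2) fit a quadratic meta-model: s ~ c0 + c1 x + c2 y + c3 x^2 + c4 y^2 + c5 x y
    X = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                         design[:, 0] ** 2, design[:, 1] ** 2,
                         design[:, 0] * design[:, 1]])
    c, *_ = np.linalg.lstsq(X, scores, rcond=None)

    def meta_model(p):
        x, y = p
        return c @ np.array([1.0, x, y, x * x, y * y, x * y])

    # 3) minimize the cheap meta-model instead of the full model
    best = minimize(meta_model, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
    print("calibrated parameters:", np.round(best.x, 3))
    ```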

  14. Brittle Creep Failure, Critical Behavior, and Time-to-Failure Prediction of Concrete under Uniaxial Compression

    Directory of Open Access Journals (Sweden)

    Yingchong Wang

    2015-01-01

    Full Text Available Understanding the time-dependent brittle deformation behavior of concrete as a main building material is fundamental for lifetime prediction and engineering design. Herein, we present experimental measurements of brittle creep failure, critical behavior, and the dependence of the time-to-failure on the secondary creep rate of concrete under sustained uniaxial compression. A complete evolution process of creep failure is achieved. Three typical creep stages are observed, including the primary (decelerating), secondary (steady-state), and tertiary (accelerating) creep stages. The time-to-failure shows sample-specificity although all samples exhibit a similar creep process. All specimens exhibit a critical power-law behavior with an exponent of −0.51 ± 0.06, approximately equal to the theoretical value of −1/2. All samples have a long-term secondary stage characterized by a constant strain rate that dominates the lifetime of a sample. The average creep rate, expressed as the total creep strain over the lifetime (tf − t0) of each specimen, shows a power-law dependence on the secondary creep rate with an exponent of −1. This could provide a clue to the prediction of the time-to-failure of concrete, based on monitoring of the creep behavior at the steady stage.
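
    The reported power-law dependence suggests a simple log-log fit for lifetime prediction from the monitored steady-state creep rate. The sketch below fits such a power law to synthetic data; the exponent, prefactor, and rates are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic creep data (illustrative): secondary creep rates and lifetimes
    # following a Monkman-Grant-type power law t_f = C * rate**m with m ~ -1.
    rate = 10.0 ** rng.uniform(-8, -5, size=25)                          # 1/s
    t_f = 3.0e-4 * rate ** -1.0 * 10 ** rng.normal(0, 0.05, size=25)     # s

    # Power-law fit in log-log space: log10(t_f) = log10(C) + m * log10(rate)
    m, logC = np.polyfit(np.log10(rate), np.log10(t_f), 1)
    print(f"fitted exponent m = {m:.2f}, prefactor C = {10**logC:.2e}")

    # Predicted time-to-failure for a monitored steady-state creep rate
    monitored_rate = 2.0e-7
    print(f"predicted t_f = {10**logC * monitored_rate**m:.3g} s")
    ```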

  15. Prediction models from CAD models of 3D objects

    Science.gov (United States)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.

  16. Model predictive control of MSMPR crystallizers

    Science.gov (United States)

    Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc

    2005-02-01

    A multi-input multi-output (MIMO) control problem for isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers, which forms a dynamical system, is used; its state is represented by a vector of six variables: the first four leading moments of the crystal size, the solute concentration, and the solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean grain size and the crystal size distribution; the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability, as well as the coupling between the inputs and the outputs, were analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities of model reduction, a third-order model was found quite adequate for model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be nearly separately controlled by the residence time and the inlet solute concentration, respectively. By seeding, the controllability of the crystallizer increases significantly, and the overshoots and oscillations become smaller. The results of the control study have shown that linear MPC is an adaptable and feasible controller for continuous crystallizers.

  17. An Anisotropic Hardening Model for Springback Prediction

    Science.gov (United States)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closures panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbead during sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  18. Critical behavior in a stochastic model of vector mediated epidemics

    Science.gov (United States)

    Alfinito, E.; Beccaria, M.; Macorini, G.

    2016-06-01

    The extreme vulnerability of humans to new and old pathogens is constantly highlighted by unbound outbreaks of epidemics. This vulnerability is both direct, producing illness in humans (dengue, malaria), and also indirect, affecting its supplies (bird and swine flu, Pierce disease, and olive quick decline syndrome). In most cases, the pathogens responsible for an illness spread through vectors. In general, disease evolution may be an uncontrollable propagation or a transient outbreak with limited diffusion. This depends on the physiological parameters of hosts and vectors (susceptibility to the illness, virulence, chronicity of the disease, lifetime of the vectors, etc.). In this perspective and with these motivations, we analyzed a stochastic lattice model able to capture the critical behavior of such epidemics over a limited time horizon and with a finite amount of resources. The model exhibits a critical line of transition that separates spreading and non-spreading phases. The critical line is studied with new analytical methods and direct simulations. Critical exponents are found to be the same as those of dynamical percolation.

  19. Critical Curves and Caustics of Triple-lens Models

    CERN Document Server

    Danek, Kamil

    2015-01-01

    Among the 25 planetary systems detected up to now by gravitational microlensing, there are two cases of a star with two planets, and two cases of a binary star with a planet. Other, yet undetected types of triple lenses include triple stars or stars with a planet with a moon. The analysis and interpretation of such events is hindered by the lack of understanding of essential characteristics of triple lenses, such as their critical curves and caustics. We present here analytical and numerical methods for mapping the critical-curve topology and caustic cusp number in the parameter space of $n$-point-mass lenses. We apply the methods to the analysis of four symmetric triple-lens models, and obtain altogether 9 different critical-curve topologies and 32 caustic structures. While these results include various generic types, they represent just a subset of all possible triple-lens critical curves and caustics. Using the analyzed models, we demonstrate interesting features of triple lenses that do not occur in two-p...

  20. Critical groups vs. representative person: dose calculations due to predicted releases from USEXA

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, N.L.D., E-mail: nelson.luiz@ctmsp.mar.mil.br [Centro Tecnologico da Marinha (CTM/SP), Sao Paulo, SP (Brazil); Rochedo, E.R.R., E-mail: elainerochedo@gmail.com [Instituto de Radiprotecao e Dosimetria (lRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Mazzilli, B.P., E-mail: mazzilli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    The critical group of the Centro Experimental Aramar (CEA) site was previously defined based on the effluent releases to the environment resulting from the facilities already operational at CEA. In this work, effective doses are calculated for members of the critical group considering the predicted potential uranium releases from the Uranium Hexafluoride Production Plant (USEXA). Basically, this work studies the behavior of the resulting doses in relation to the type of habit data used in the analysis, and two distinct situations are considered: (a) the utilization of average values obtained from official institutions (IBGE, IEA-SP, CNEN, IAEA) and from the literature; and (b) the utilization of the 95th percentile of the values derived from distributions fitted to the obtained habit data. The first option corresponds to the way data were used for the definition of the critical group of CEA in former assessments, while the second corresponds to the use of data in deterministic assessments, as recommended by the ICRP to estimate doses to the so-called 'representative person'. (author)

  1. Critical parameters of unrestricted primitive model electrolytes with charge asymmetries up to 10:1

    Science.gov (United States)

    Cheong, Daniel W.; Panagiotopoulos, Athanassios Z.

    2003-10-01

    The phase behavior of charge- and size-asymmetric primitive model electrolytes has been investigated using reservoir grand canonical Monte Carlo simulations. The simulations rely on the insertion and removal of neutral ion clusters from a reservoir of possible configurations. We first validated our approach by investigating the effect of Rc, the maximum allowable distance between the central cation and its associated anions, on the critical parameters of 2:1 and 3:1 electrolytes. We have shown that the effect of Rc is weak and does not change the qualitative dependence of the critical parameters on size and charge asymmetry. The critical temperature for 2:1 and 3:1 electrolytes shows a maximum at Rc≈3, while the critical volume fraction decreases more or less monotonically, consistent with previous results for 1:1 electrolytes by Romero-Enrique et al. [Phys. Rev. E 66, 041204 (2002)]. We have used the reservoir method to obtain the critical parameters for 5:1 and 10:1 electrolytes. The critical temperature decreases with increasing charge asymmetry and shows a maximum as a function of δ, the size asymmetry parameter. The critical volume fraction however, defined as the volume occupied by ions divided by the total volume of the simulation box, increases with increasing charge asymmetry and exhibits a minimum as a function of δ. This trend is contrary to what is generally predicted by theories, although more recent approaches based on the Debye-Hückel theory reproduce this observed trend. Our results deviate somewhat from the predictions of Linse [Philos. Trans. R. Soc. London, Ser. A 359, 853 (2001)] for the scaling of the critical temperature for a system of macroions with point counterions.

  2. Toward Developing Genetic Algorithms to Aid in Critical Infrastructure Modeling

    Energy Technology Data Exchange (ETDEWEB)

    2007-05-01

    Today’s society relies upon an array of complex national and international infrastructure networks such as transportation, telecommunication, financial and energy. Understanding these interdependencies is necessary in order to protect our critical infrastructure. The Critical Infrastructure Modeling System, CIMS©, examines the interrelationships between infrastructure networks. CIMS© development is sponsored by the National Security Division at the Idaho National Laboratory (INL) in its ongoing mission for providing critical infrastructure protection and preparedness. A genetic algorithm (GA) is an optimization technique based on Darwin’s theory of evolution. A GA can be coupled with CIMS© to search for optimum ways to protect infrastructure assets. This includes identifying optimum assets to enforce or protect, testing the addition of or change to infrastructure before implementation, or finding the optimum response to an emergency for response planning. This paper describes the addition of a GA to infrastructure modeling for infrastructure planning. It first introduces the CIMS© infrastructure modeling software used as the modeling engine to support the GA. Next, the GA techniques and parameters are defined. Then a test scenario illustrates the integration with CIMS© and the preliminary results.

  3. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    Science.gov (United States)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem on fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters, and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.

  4. Modern statistical models for forensic fingerprint examinations: a critical review.

    Science.gov (United States)

    Abraham, Joshua; Champod, Christophe; Lennard, Chris; Roux, Claude

    2013-10-10

    Over the last decade, the development of statistical models in support of forensic fingerprint identification has been the subject of increasing research attention, spurred on recently by commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. Such models are increasingly seen as useful tools in support of the fingerprint identification process within or in addition to the ACE-V framework. This paper provides a critical review of recent statistical models from both a practical and theoretical perspective. This includes analysis of models of two different methodologies: Probability of Random Correspondence (PRC) models that focus on calculating probabilities of the occurrence of fingerprint configurations for a given population, and Likelihood Ratio (LR) models which use analysis of corresponding features of fingerprints to derive a likelihood value representing the evidential weighting for a potential source.

  5. Predictive modelling of ferroelectric tunnel junctions

    Science.gov (United States)

    Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.

    2016-05-01

    Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.

  6. Simple predictions from multifield inflationary models.

    Science.gov (United States)

    Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C

    2014-04-25

    We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N=100 fields, 95% of our Monte Carlo samples fall in the ranges ns∈(0.9455,0.9534), α∈(-9.741,-7.047)×10-4, r∈(0.1445,0.1449), and riso∈(0.02137,3.510)×10-3 for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.

  7. Computational multi-fluid dynamics predictions of critical heat flux in boiling flow

    Energy Technology Data Exchange (ETDEWEB)

    Mimouni, S., E-mail: stephane.mimouni@edf.fr; Baudry, C.; Guingo, M.; Lavieville, J.; Merigoux, N.; Mechitoua, N.

    2016-04-01

    Highlights: • A new mechanistic model dedicated to DNB has been implemented in the Neptune-CFD code. • The model has been validated against 150 tests. • Neptune-CFD code is a CFD tool dedicated to boiling flows. - Abstract: Extensive efforts have been made in the last five decades to evaluate the boiling heat transfer coefficient and the critical heat flux in particular. Boiling crisis remains a major limiting phenomenon for the analysis of operation and safety of both nuclear reactors and conventional thermal power systems. As a consequence, models dedicated to boiling flows have been improved. For example, a Reynolds Stress Transport Model, polydispersion, and a two-phase flow wall law have been recently implemented. In a previous work, we evaluated computational fluid dynamics results against single-phase liquid water tests equipped with a mixing vane and against two-phase boiling cases. The objective of this paper is to propose a new mechanistic model in a computational multi-fluid dynamics tool leading to wall temperature excursion and onset of boiling crisis. The critical heat flux is calculated against 150 tests, and the mean relative error between calculations and experimental values is equal to 8.3%. The model tested covers a large physics scope in terms of mass flux, pressure, quality, and channel diameter. Water and R12 refrigerant fluid are considered. Furthermore, it was found that the sensitivity to grid refinement was acceptable.

  8. Critical Behavior in a Cellular Automata Animal Disease Transmission Model

    CERN Document Server

    Morley, P D; Chang, Julius

    2003-01-01

    Using a cellular automata model, we simulate the British Government Policy (BGP) in the 2001 foot and mouth epidemic in Great Britain. When clinical symptoms of the disease appear on a farm, there is mandatory slaughter (culling) of all livestock on the infected premise (IP). Farms that neighbor an IP (contiguous premise, CP) are also culled, aka nearest neighbor interaction. Farms where the disease may be prevalent from animal, human, vehicle or airborne transmission (dangerous contact, DC) are additionally culled, aka next-to-nearest neighbor interactions and lightning factor. The resulting mathematical model possesses a phase transition, whereupon if the physical disease transmission kernel exceeds a critical value, catastrophic loss of animals ensues. The non-local disease transport probability can be as low as 0.01% per day and the disease can still be in the high mortality phase. We show that the fundamental equation for sustainable disease transport is the criticality equation for neutron fissio...

  9. From Safety Critical Java Programs to Timed Process Models

    DEFF Research Database (Denmark)

    Thomsen, Bent; Luckow, Kasper Søe; Thomsen, Lone Leth

    2015-01-01

    ... frameworks, we have in recent years pursued an agenda of translating hard-real-time embedded safety critical programs written in the Safety Critical Java Profile [33] into networks of timed automata [4] and subjecting those to automated analysis using the UPPAAL model checker [10]. Several tools have been built and the tools have been used to analyse a number of systems for properties such as worst case execution time, schedulability and energy optimization [12–14,19,34,36,38]. In this paper we will elaborate on the theoretical underpinning of the translation from Java programs to timed automata models and briefly summarize some of the results based on this translation. Furthermore, we discuss future work, especially relations to the work in [16,24] as Java recently has adopted first class higher order functions in the form of lambda abstractions.

  10. Predictions of models for environmental radiological assessment

    Energy Technology Data Exchange (ETDEWEB)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa, E-mail: suelip@ird.gov.br, E-mail: dejanira@irg.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Servico de Avaliacao de Impacto Ambiental, Rio de Janeiro, RJ (Brazil); Mahler, Claudio Fernando [Coppe. Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro (UFRJ) - Programa de Engenharia Civil, RJ (Brazil)

    2011-07-01

    In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation doses, and the risk to human beings. Although it is recognized that specific local data are important to improve the quality of dose assessment results, in practice obtaining such data can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used, and general parameters. The various models available utilize different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. The model intercomparison exercise supplied incompatible results for {sup 137}Cs and {sup 60}Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be compared on a common basis. The results of the intercomparison exercise are presented briefly. (author)

  11. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule such as which parts of it form sheets, helices or coils. Spatial and other properties are described by the higher order structures. The classification task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...
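
    A minimal sketch of the approach, assuming one first-order Markov model per structural class and classification by maximum likelihood, is given below; the training fragments are toy examples, not real annotated sequences.

    ```python
    import math
    from collections import defaultdict

    AMINO = "ACDEFGHIKLMNPQRSTVWY"

    def train_markov(sequences):
        """First-order Markov model: log transition probabilities P(b | a),
        estimated from residue pairs in the training sequences, with add-one
        smoothing."""
        counts = defaultdict(lambda: defaultdict(int))
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                counts[a][b] += 1
        logp = {}
        for a in AMINO:
            total = sum(counts[a].values()) + len(AMINO)
            logp[a] = {b: math.log((counts[a][b] + 1) / total) for b in AMINO}
        return logp

    def log_likelihood(model, fragment):
        return sum(model[a][b] for a, b in zip(fragment, fragment[1:]))

    def classify(models, fragment):
        """Assign the fragment to the class (sheet/helix/coil) whose Markov
        model gives it the highest likelihood."""
        return max(models, key=lambda c: log_likelihood(models[c], fragment))

    # Toy training data (illustrative fragments, not real annotated structures).
    models = {
        "helix": train_markov(["ALAEKLLEAAEKLLEA", "KELAEKAAKELAEK"]),
        "sheet": train_markov(["VTVTVIVVTITVEV", "IVTVEVTVIV"]),
        "coil":  train_markov(["GPGSGGNPGSGG", "SGGPNGSPGG"]),
    }
    print(classify(models, "ALAEKLAEK"))
    ```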

  12. A Modified Model Predictive Control Scheme

    Institute of Scientific and Technical Information of China (English)

    Xiao-Bing Hu; Wen-Hua Chen

    2005-01-01

    In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.

  13. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid.

  14. Explicit model predictive control accuracy analysis

    OpenAIRE

    Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano

    2015-01-01

    Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapped convex regions, with affine control laws associated to each region of the partition. An actual implementation of this explicit MPC in low cost micro-controllers requires the data to be "quantized", i.e. repre...

  15. Hybrid experimental/analytical models of structural dynamics - Creation and use for predictions

    Science.gov (United States)

    Balmes, Etienne

    1993-01-01

    An original complete methodology for the construction of predictive models of damped structural vibrations is introduced. A consistent definition of normal and complex modes is given which leads to an original method to accurately identify non-proportionally damped normal mode models. A new method to create predictive hybrid experimental/analytical models of damped structures is introduced, and the ability of hybrid models to predict the response to system configuration changes is discussed. Finally a critical review of the overall methodology is made by application to the case of the MIT/SERC interferometer testbed.

  16. Impaired High-Density Lipoprotein Anti-Oxidant Function Predicts Poor Outcome in Critically Ill Patients.

    Directory of Open Access Journals (Sweden)

    Lore Schrutka

    Full Text Available Oxidative stress affects clinical outcome in critically ill patients. Although high-density lipoprotein (HDL) particles generally possess anti-oxidant capacities, deleterious properties of HDL have been described in acutely ill patients. The impact of anti-oxidant HDL capacities on clinical outcome in critically ill patients is unknown. We therefore analyzed the predictive value of anti-oxidant HDL function on mortality in an unselected cohort of critically ill patients. We prospectively enrolled 270 consecutive patients admitted to a university-affiliated intensive care unit (ICU) and determined anti-oxidant HDL function using the HDL oxidant index (HOI). Based on their HOI, the study population was stratified into patients with impaired anti-oxidant HDL function and the residual study population. During a median follow-up time of 9.8 years (IQR: 9.2 to 10.0), 69% of patients died. Cox regression analysis revealed a significant and independent association between impaired anti-oxidant HDL function and short-term mortality, with an adjusted HR of 1.65 (95% CI 1.22-2.24; p = 0.001), as well as 10-year mortality, with an adjusted HR of 1.19 (95% CI 1.02-1.40; p = 0.032), when compared to the residual study population. Anti-oxidant HDL function correlated with the amount of oxidative stress as determined by Cu/Zn superoxide dismutase (r = 0.38; p < 0.001). Impaired anti-oxidant HDL function represents a strong and independent predictor of 30-day mortality as well as long-term mortality in critically ill patients.

  17. Critical Infrastructure Protection and Resilience Literature Survey: Modeling and Simulation

    Science.gov (United States)

    2014-11-01

    produced the N-ABLE tool which is used to simulate critical infrastructure interdependencies of businesses in the U.S. economy. Idaho National Laboratory... International Journal of Risk Assessment and Management. 2006;6(4-6):423-439. 32. Zale JJ KB. A GIS-based football stadium evacuation model... of IT based disasters on the interdependent sectors of the US economy. In: proceedings from IEEE Systems and Information Engineering Design...

  18. Contact prediction in protein modeling: Scoring, folding and refinement of coarse-grained models

    Directory of Open Access Journals (Sweden)

    Kolinski Andrzej

    2008-08-01

    Full Text Available Abstract Background: Several different methods for contact prediction succeeded within the Sixth Critical Assessment of Techniques for Protein Structure Prediction (CASP6). The most relevant were non-local contact predictions for targets from the most difficult categories: fold recognition-analogy and new fold. Such contacts could provide valuable structural information in case a template structure cannot be found in the PDB. Results: We described comprehensive tests of the effectiveness of contact data in various aspects of de novo modeling with CABS, an algorithm which was used successfully in CASP6 by the Kolinski-Bujnicki group. We used the predicted contacts in a simple scoring function for the post-simulation ranking of protein models and as a soft bias in the folding simulations and in the fold-refinement procedure. The latter approach turned out to be the most successful. The CABS force field used in the Replica Exchange Monte Carlo simulations cooperated with the true contacts and discriminated against the false ones, which resulted in an improvement of the majority of Kolinski-Bujnicki's protein models. In the modeling we tested different sets of predicted contact data submitted to the CASP6 server. According to our results, the best performing were the contacts with accuracy balanced with coverage, obtained either from the best two predictors only or by a consensus from as many predictors as possible. Conclusion: Our tests have shown that theoretically predicted contacts can be very beneficial for protein structure prediction. Depending on the protein modeling method, the contact data set applied should be prepared with differently balanced coverage and accuracy of predicted contacts. Namely, high coverage of contact data is important for model ranking and high accuracy for the folding simulations.

  19. Committee neural network model for rock permeability prediction

    Science.gov (United States)

    Bagheripour, Parisa

    2014-05-01

    Quantitative formulation between conventional well log data and rock permeability, undoubtedly the most critical parameter of a hydrocarbon reservoir, could be a potent tool for solving problems associated with almost all tasks involved in petroleum engineering. The present study proposes a novel approach in the quest for a high-accuracy method of permeability prediction. At the first stage, overlapping of conventional well log data (inputs) was eliminated by means of principal component analysis (PCA). Subsequently, rock permeability was predicted from the extracted PCs using a multi-layer perceptron (MLP), a radial basis function (RBF) network, and a generalized regression neural network (GRNN). Eventually, a committee neural network (CNN) was constructed by virtue of a genetic algorithm (GA) to enhance the precision of the ultimate permeability prediction. The values of rock permeability derived from the MLP, RBF, and GRNN models were used as inputs of the CNN. The proposed CNN combines the results of the different ANNs to reap the beneficial advantages of all models and consequently produce more accurate estimates. The GA, embedded in the structure of the CNN, assigns a weight factor to each ANN which shows the relative contribution of each ANN to the overall prediction of rock permeability from the PCs of conventional well logs. The proposed methodology was applied in the Kangan and Dalan Formations, which are the major carbonate reservoir rocks of the South Pars Gas Field, Iran. A group of 350 data points was used to establish the CNN model, and a group of 245 data points was employed to assess the reliability of the constructed CNN model. Results showed that the CNN method performed better than the individual intelligent systems performing alone.
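
    The committee step can be sketched as a genetic algorithm searching for convex combination weights of the member networks on validation data; everything below (member predictions, GA settings) is a synthetic illustration, not the study's implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def ga_committee_weights(preds, target, pop_size=60, generations=200, sigma=0.1):
        """Toy genetic algorithm finding convex combination weights for the
        member models of a committee, minimizing validation MSE.

        preds  : (n_models, n_samples) array of individual model predictions
        target : (n_samples,) measured values
        """
        n_models = preds.shape[0]

        def normalize(pop):
            pop = np.abs(pop)
            return pop / pop.sum(axis=1, keepdims=True)

        def fitness(pop):
            committee = pop @ preds                       # (pop_size, n_samples)
            return -np.mean((committee - target) ** 2, axis=1)

        pop = normalize(rng.uniform(size=(pop_size, n_models)))
        for _ in range(generations):
            order = np.argsort(fitness(pop))[::-1]
            parents = pop[order[: pop_size // 2]]                       # selection
            n_child = pop_size - len(parents)
            ia = rng.integers(len(parents), size=n_child)
            ib = rng.integers(len(parents), size=n_child)
            alpha = rng.uniform(size=(n_child, 1))
            children = alpha * parents[ia] + (1 - alpha) * parents[ib]  # crossover
            children += rng.normal(scale=sigma, size=children.shape)    # mutation
            pop = normalize(np.vstack([parents, children]))
        return pop[np.argmax(fitness(pop))]

    # Hypothetical validation-set predictions of three member models (MLP/RBF/GRNN
    # stand-ins) and the measured permeability values, all synthetic.
    truth = rng.lognormal(mean=2.0, sigma=0.8, size=245)
    preds = np.vstack([truth * (1 + rng.normal(0, s, truth.size)) for s in (0.10, 0.15, 0.25)])

    w = ga_committee_weights(preds, truth)
    print("committee weights:", np.round(w, 3))
    ```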

  20. Models to predict intestinal absorption of therapeutic peptides and proteins.

    Science.gov (United States)

    Antunes, Filipa; Andrade, Fernanda; Ferreira, Domingos; Nielsen, Hanne Morck; Sarmento, Bruno

    2013-01-01

    Prediction of human intestinal absorption is a major goal in the design, optimization, and selection of drugs intended for oral delivery, in particular for proteins, which possess intrinsically poor transport across the intestinal epithelium. There are various techniques currently employed to evaluate the extent of protein absorption in the different phases of drug discovery and development. Screening protocols to evaluate protein absorption include a range of preclinical methodologies like in silico, in vitro, in situ, ex vivo and in vivo. It is the careful and critical use of these techniques that can help to identify drug candidates which most probably will be well absorbed from the human intestinal tract. It is well recognized that human intestinal permeability cannot be accurately predicted based on a single preclinical method. However, the present social and scientific concerns about animal welfare, as well as the pharmaceutical industry's need for rapid, cheap and reliable models predicting bioavailability, give reasons for using methods providing an appropriate correlation between results of in vivo and in vitro drug absorption. The aim of this review is to describe and compare in silico, in vitro, in situ, ex vivo and in vivo methods used to predict human intestinal absorption, giving special attention to the intestinal absorption of therapeutic peptides and proteins.

  1. Critical endpoint for deconfinement in matrix and other effective models

    CERN Document Server

    Kashiwa, Kouji; Skokov, Vladimir V

    2012-01-01

    We consider the position of the deconfining critical endpoint, where the first-order transition for deconfinement is washed out by the presence of massive, dynamical quarks. We use an effective matrix model, employed previously to analyze the transition in the pure glue theory. If the parameters of the pure glue theory are unaffected by the presence of dynamical quarks, and if the quarks only contribute perturbatively, then for three colors and three degenerate quark flavors this quark mass is very heavy, m_de \sim 2.5 GeV, while the critical temperature, T_de, barely changes, \sim 1% below that in the pure glue theory. The location of the deconfining critical endpoint is a sensitive test to differentiate between effective models. For example, models with a logarithmic potential for the Polyakov loop give much smaller values of the quark mass, m_de \sim 1 GeV, and a large shift in T_de \sim 10% lower than that in the pure glue theory.

  2. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  3. Queuing theory accurately models the need for critical care resources.

    Science.gov (United States)

    McManus, Michael L; Long, Michael C; Cooper, Abbot; Litvak, Eugene

    2004-05-01

    Allocation of scarce resources presents an increasing challenge to hospital administrators and health policy makers. Intensive care units can present bottlenecks within busy hospitals, but their expansion is costly and difficult to gauge. Although mathematical tools have been suggested for determining the proper number of intensive care beds necessary to serve a given demand, the performance of such models has not been prospectively evaluated over significant periods. The authors prospectively collected 2 years' admission, discharge, and turn-away data in a busy, urban intensive care unit. Using queuing theory, they then constructed a mathematical model of patient flow, compared predictions from the model to observed performance of the unit, and explored the sensitivity of the model to changes in unit size. The queuing model proved to be very accurate, with predicted admission turn-away rates correlating highly with those actually observed (correlation coefficient = 0.89). The model was useful in predicting both monthly responsiveness to changing demand (mean monthly difference between observed and predicted values, 0.4 ± 2.3%; range, 0-13%) and the overall 2-yr turn-away rate for the unit (21% vs. 22%). Both in practice and in simulation, turn-away rates increased exponentially when utilization exceeded 80-85%. Sensitivity analysis using the model revealed rapid and severe degradation of system performance with even the small changes in bed availability that might result from sudden staffing shortages or admission of patients with very long stays. The stochastic nature of patient flow may falsely lead health planners to underestimate resource needs in busy intensive care units. Although the nature of arrivals for intensive care deserves further study, when demand is random, queuing theory provides an accurate means of determining the appropriate supply of beds.
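
    The sharp rise in turn-away rates above roughly 80-85% utilization can be reproduced with the classical Erlang loss (M/M/c/c) formula, one standard queuing-theory description of a unit with c beds and no waiting room. This is an illustrative sketch rather than the authors' model; the bed count and offered loads below are hypothetical.

```python
def erlang_b(c: int, offered_load: float) -> float:
    """Blocking (turn-away) probability for an M/M/c/c loss system.

    offered_load = arrival_rate / service_rate (in erlangs).
    Uses the numerically stable recursion B(0) = 1,
    B(k) = a*B(k-1) / (k + a*B(k-1)).
    """
    b = 1.0
    for k in range(1, c + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

beds = 18                          # hypothetical ICU size
for load in (12, 14, 15, 16, 17):  # hypothetical offered loads in erlangs
    blocked = erlang_b(beds, load)
    utilization = load * (1 - blocked) / beds
    print(f"load={load:4.1f}  utilization={utilization:5.1%}  turn-away={blocked:5.1%}")
```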

  4. Modeling critical episodes of air pollution by PM10 in Santiago, Chile: Comparison of the predictive efficiency of parametric and non-parametric statistical models

    Directory of Open Access Journals (Sweden)

    Sergio A. Alvarado

    2010-12-01

    Full Text Available Objective: To evaluate the predictive efficiency of parametric and non-parametric statistical models in predicting next-day critical episodes of air pollution by particulate matter (PM10) exceeding the daily air quality standard in Santiago, Chile. Accurate prediction of such episodes allows the health authorities to enact restrictive measures that lessen their severity and thereby protect the community's health. Methods: We used the PM10 concentrations registered by a station of the MACAM-2 air quality monitoring network (152 daily observations of 14 variables) together with meteorological information gathered from 2001 to 2004. Parametric Gamma models were fitted using the statistical package STATA v11, and non-parametric MARS models using a demo version of the MARS v2.0 software distributed by Salford Systems. Results: Both modelling approaches show a high correlation between observed and predicted values; the Gamma models score more correct predictions than MARS for PM10 concentrations
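
    A minimal sketch of the parametric half of the comparison is shown below: a Gamma GLM with a log link fitted to synthetic data standing in for the monitored series. The predictor names, the simulated values and the flagging threshold are hypothetical; only the choice of a Gamma model for positive, right-skewed PM10 concentrations follows the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the monitored data; the real study used 152 daily
# observations of 14 variables from the MACAM-2 network (2001-2004).
rng = np.random.default_rng(0)
n = 152
df = pd.DataFrame({
    "pm10_today": rng.gamma(4.0, 25.0, n),     # hypothetical predictor names
    "temp": rng.normal(15.0, 5.0, n),
    "wind_speed": rng.gamma(2.0, 1.5, n),
})
mu = np.exp(2.5 + 0.006 * df["pm10_today"] - 0.03 * df["wind_speed"])
df["pm10_max24h_tomorrow"] = rng.gamma(5.0, mu / 5.0)

# Gamma GLM with a log link, a common choice for positive, right-skewed
# concentration data (the study fitted Gamma models in STATA v11).
# Note: the link-class name may vary across statsmodels versions.
X = sm.add_constant(df[["pm10_today", "temp", "wind_speed"]])
y = df["pm10_max24h_tomorrow"]
fit = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()

df["predicted"] = fit.predict(X)
print(fit.summary())
# Illustrative threshold for flagging a "critical episode".
print("days flagged as critical (predicted > 150):", int((df["predicted"] > 150).sum()))
```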

  5. Predicting future glacial lakes in Austria using different modelling approaches

    Science.gov (United States)

    Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus

    2017-04-01

    Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems including water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge of the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high resolution glacier surface data allows predicting the topography below current glaciers by subtracting ice thickness from the glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations for future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps we apply different ice thickness models using high resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories from 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed focusing on geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers

  6. Exchange Rate Prediction using Neural – Genetic Model

    Directory of Open Access Journals (Sweden)

    MECHGOUG Raihane

    2012-10-01

    Full Text Available Neural networks have been used successfully for exchange rate forecasting. However, due to the large number of parameters to be estimated empirically, it is not a simple task to select the appropriate neural network architecture for an exchange rate forecasting problem. Researchers often overlook the effect of neural network parameters on the performance of neural network forecasting. The performance of a neural network is critically dependent on the learning algorithms, the network architecture and the choice of the control parameters. Even when a suitable setting of parameters (weights) can be found, the ability of the resulting network to generalize to data not seen during learning may be far from optimal. For these reasons it seems logical and attractive to apply genetic algorithms. Genetic algorithms may provide a useful tool for automating the design of neural networks. The empirical results on foreign exchange rate prediction indicate that the proposed hybrid model exhibits effectively improved accuracy when compared with some other time series forecasting models.

  7. Mixing height computation from a numerical weather prediction model

    Energy Technology Data Exchange (ETDEWEB)

    Jericevic, A. [Croatian Meteorological and Hydrological Service, Zagreb (Croatia); Grisogono, B. [Univ. of Zagreb, Zagreb (Croatia). Andrija Mohorovicic Geophysical Inst., Faculty of Science

    2004-07-01

    Dispersion models require hourly values of the mixing height, H, which indicates the existence of turbulent mixing. The aim of this study was to investigate the model's ability and characteristics in the prediction of H. The ALADIN limited-area numerical weather prediction (NWP) model for short-range 48-hour forecasts was used. The bulk Richardson number (Ri_B) method was applied to determine the height of the atmospheric boundary layer at the grid point nearest to Zagreb, Croatia. This location was selected because radio soundings were available there, so the model could be verified. A critical value of the bulk Richardson number, Ri_Bc = 0.3, was used. The modelled and measured values of H for 219 days at 12 UTC were compared, and a correlation coefficient of 0.62 was obtained. This indicates that ALADIN can be used for the calculation of H in the convective boundary layer. For the stable boundary layer (SBL), the model underestimated H systematically. Results showed that Ri_Bc evidently increases with increasing stability. Decoupling from the surface in the very stable boundary layer was detected, a consequence of the easing of the flow which results in Ri_B becoming very large. Verification of the practical usage of the Ri_B method for H calculations from an NWP model was performed. The necessity of including other stability parameters (e.g., surface roughness length) was evidenced. Since the ALADIN model is in operational use in many European countries, this study should help others in pre-processing NWP data for input to dispersion models. (orig.)
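
    The bulk Richardson number method can be sketched as follows: compute Ri_B for each model level relative to the surface and take the mixing height as the lowest level where Ri_B exceeds the critical value of 0.3 used in the study. The profile below is a hypothetical sounding; the exact formulation of Ri_B in ALADIN may differ.

```python
import numpy as np

G = 9.81          # m s^-2
RI_B_CRIT = 0.3   # critical bulk Richardson number used in the study

def mixing_height(z, theta_v, u, v):
    """Height of the lowest level where Ri_B exceeds the critical value.

    z        : heights above ground (m)
    theta_v  : virtual potential temperature profile (K)
    u, v     : horizontal wind components (m s^-1)
    """
    wind_sq = np.maximum(u**2 + v**2, 1e-6)          # avoid division by zero
    ri_b = G * z * (theta_v - theta_v[0]) / (theta_v[0] * wind_sq)
    above = np.where(ri_b > RI_B_CRIT)[0]
    return z[above[0]] if above.size else z[-1]

# Hypothetical sounding: a shallow convective boundary layer capped by an inversion.
z = np.array([10, 50, 100, 200, 400, 600, 800, 1000, 1200, 1500.0])
theta_v = np.array([300.0, 300.0, 300.1, 300.1, 300.2, 300.3, 301.5, 303.0, 304.0, 305.0])
u = np.linspace(2.0, 8.0, z.size)
v = np.zeros_like(z)

print("diagnosed mixing height:", mixing_height(z, theta_v, u, v), "m")
```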

  8. Model-based automated testing of critical PLC programs.

    CERN Document Server

    Fernández Adiego, B; Tournier, J-C; González Suárez, V M; Bliudze, S

    2014-01-01

    Testing of critical PLC (Programmable Logic Controller) programs remains a challenging task for control system engineers as it can rarely be automated. This paper proposes a model-based approach which uses the BIP (Behavior, Interactions and Priorities) framework to perform automated testing of PLC programs developed with the UNICOS (UNified Industrial COntrol System) framework. This paper defines the translation procedure and rules from UNICOS to BIP which can be fully automated in order to hide the complexity of the underlying model from the control engineers. The approach is illustrated and validated through the study of a water treatment process.

  9. A predictive fitness model for influenza

    Science.gov (United States)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
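
    The core prediction step of such a fitness model can be written in one line: each strain's frequency is propagated forward in proportion to the exponential of its inferred fitness and then renormalized. The sketch below assumes this simplified form and uses made-up frequencies and fitness values; inferring the fitness components themselves is the substantial part of the paper and is not shown.

```python
import numpy as np

def predict_next_season(frequencies, fitnesses):
    """Propagate strain frequencies one season forward.

    A minimal version of the fitness-model prediction step:
    x_i(t+1) proportional to x_i(t) * exp(f_i), renormalized over all strains.
    In the paper the fitness f_i combines adaptive epitope changes and
    deleterious non-epitope mutations; here it is simply an input.
    """
    x = np.asarray(frequencies, dtype=float)
    f = np.asarray(fitnesses, dtype=float)
    weights = x * np.exp(f)
    return weights / weights.sum()

# Hypothetical clade frequencies and inferred fitnesses for one season.
x_now = [0.55, 0.30, 0.15]
fitness = [0.1, 0.8, -0.4]
print(np.round(predict_next_season(x_now, fitness), 3))
```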

  10. Predictive Model of Radiative Neutrino Masses

    CERN Document Server

    Babu, K S

    2013-01-01

    We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z_4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: The hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with \\delta_{CP} = \\pi; and the effective mass in neutrinoless double beta decay lies in a narrow range, m_{\\beta \\beta} = (17.6 - 18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tan\\beta, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The non-standard neutral Higgs bosons, if t...

  11. Testing a model for the critical degree of saturation at freezing of porous building materials

    DEFF Research Database (Denmark)

    Hansen, Ernst Jan De Place

    1996-01-01

    Frost resistance of porous materials can be characterized by the critical degree of saturation, SCR. An experimental determination of SCR is very laborious and therefore only seldom used when testing frost resistance. A theoretical model for prediction of SCR based on fracture mechanics and phase ... of elasticity, tensile strength, amount of freezable water, thermal expansion coefficients and parameters characterizing the pore structure and its effect on strength, modulus of elasticity and volumetric expansion. For the present, the model assumes non air-entrained homogeneous materials subjected to freeze ... during freezing. The reliability and usefulness of the model are discussed, e.g. in relation to air-entrained materials and in relation to the description of the pore structure. Keywords: Brick tile, concrete, critical degree of saturation, eigenstrain, fracture mechanics, frost resistance, pore structure

  12. Modelling critical degrees of saturation of porous building materials subjected to freezing

    DEFF Research Database (Denmark)

    Hansen, Ernst Jan De Place

    1996-01-01

    Frost resistance of porous materials can be characterized by the critical degree of saturation, SCR, and the actual degree of saturation, SACT. An experimental determination of SCR is very laborious and therefore only seldom used when testing frost resistance. A theoretical model for prediction ... the pore structure and its effect on strength, modulus of elasticity and volumetric expansion. Also the amount of freezable water and thermal expansion coefficients are involved. For the present, the model assumes non air-entrained homogeneous materials subjected to freeze-thaw without de-icing salts. ... The model has been tested on various concretes without air-entrainment and on brick tiles with different porosities. Results agree qualitatively with values of the critical degree of saturation determined by measuring resonance frequencies and length change of sealed specimens during freezing...

  13. Characterizing climate predictability and model response variability from multiple initial condition and multi-model ensembles

    CERN Document Server

    Kumar, Devashish

    2016-01-01

    Climate models are thought to solve boundary value problems unlike numerical weather prediction, which is an initial value problem. However, climate internal variability (CIV) is thought to be relatively important at near-term (0-30 year) prediction horizons, especially at higher resolutions. The recent availability of significant numbers of multi-model (MME) and multi-initial condition (MICE) ensembles allows for the first time a direct sensitivity analysis of CIV versus model response variability (MRV). Understanding the relative agreement and variability of MME and MICE ensembles for multiple regions, resolutions, and projection horizons is critical for focusing model improvements, diagnostics, and prognosis, as well as impacts, adaptation, and vulnerability studies. Here we find that CIV (MICE agreement) is lower (higher) than MRV (MME agreement) across all spatial resolutions and projection time horizons for both temperature and precipitation. However, CIV dominates MRV over higher latitudes generally an...

  14. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...

  15. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  16. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model. The linear discrete-time stochastic state-space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model ...

  17. Critical Curves and Caustics of Triple-lens Models

    Science.gov (United States)

    Daněk, Kamil; Heyrovský, David

    2015-06-01

    Among the 25 planetary systems detected up to now by gravitational microlensing, there are two cases of a star with two planets, and two cases of a binary star with a planet. Other, yet undetected types of triple lenses include triple stars or stars with a planet with a moon. The analysis and interpretation of such events is hindered by the lack of understanding of essential characteristics of triple lenses, such as their critical curves and caustics. We present here analytical and numerical methods for mapping the critical-curve topology and caustic cusp number in the parameter space of n-point-mass lenses. We apply the methods to the analysis of four symmetric triple-lens models, and obtain altogether 9 different critical-curve topologies and 32 caustic structures. While these results include various generic types, they represent just a subset of all possible triple-lens critical curves and caustics. Using the analyzed models, we demonstrate interesting features of triple lenses that do not occur in two-point-mass lenses. We show an example of a lens that cannot be described by the Chang-Refsdal model in the wide limit. In the close limit we demonstrate unusual structures of primary and secondary caustic loops, and explain the conditions for their occurrence. In the planetary limit we find that the presence of a planet may lead to a whole sequence of additional caustic metamorphoses. We show that a pair of planets may change the structure of the primary caustic even when placed far from their resonant position at the Einstein radius.

  18. Predictions of Radionuclide Dose Rates from Sellafield Discharges using a Compartmental Model

    Energy Technology Data Exchange (ETDEWEB)

    McCubbin, D.; Leonard, K.S.; Gurbutt, P.A.; Round, G.D

    1998-07-01

    A multi-compartmental model (MIRMAID) of the Irish Sea has been used to predict radionuclide dose rates to the public, via seafood consumption pathways. Radionuclides originate from the authorised discharge of low level liquid effluent from the BNF plc nuclear reprocessing plant at Sellafield. The model has been used to predict combined annual doses, the contribution of dose from individual radionuclides and to discriminate dose between present day and historic discharges. An assessment has been carried out to determine the sensitivity of the predictions to changes in various model parameters. The predicted dose to the critical group from seafood consumption in 1995 ranged from 37-96 µSv, of which the majority originated from current discharges. The contribution from ⁹⁹Tc was predicted to have increased from 0.2% in 1993 up to 20% in 1995. The predicted contribution of Pu and Am from historic discharges is underestimated in the model. (author)

  19. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have been rarely compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
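
    The modified standard model described above can be sketched as a small ODE system with an eclipse-phase compartment between infection and virus production. The parameter values and initial conditions below are illustrative placeholders, not the fitted values from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A minimal version of the modified standard model: target cells T,
# infected cells in an eclipse phase E, productively infected cells I,
# and free virus V. All parameter values are illustrative only.
beta, k, delta, p, c, lam, d = 1e-7, 1.0, 0.5, 2000.0, 23.0, 1e4, 0.01

def rhs(t, y):
    T, E, I, V = y
    dT = lam - d * T - beta * T * V
    dE = beta * T * V - k * E          # cells transitioning into production mode
    dI = k * E - delta * I             # actively virus-producing cells
    dV = p * I - c * V
    return [dT, dE, dI, dV]

y0 = [1e6, 0.0, 0.0, 1.0]              # small initial inoculum
sol = solve_ivp(rhs, (0, 21), y0, dense_output=True, rtol=1e-8, atol=1e-10)

days = np.linspace(0, 21, 8)
for day, v in zip(days, sol.sol(days)[3]):
    print(f"day {day:4.1f}: viral load ~ {max(v, 0):.2e}")
```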

  20. Changes in Circulating Procalcitonin Versus C-Reactive Protein in Predicting Evolution of Infectious Disease in Febrile, Critically Ill Patients

    NARCIS (Netherlands)

    S.H. Hoeboer (Sandra); A.B.J. Groeneveld (Johan)

    2013-01-01

    Objective: Although absolute values for C-reactive protein (CRP) and procalcitonin (PCT) are well known to predict sepsis in the critically ill, it remains unclear how changes in CRP and PCT compare in predicting the evolution of infectious disease, invasiveness and severity (e.g. developmen

  1. Two criteria for evaluating risk prediction models.

    Science.gov (United States)

    Pfeiffer, R M; Gail, M H

    2011-09-01

    We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed, PCF (q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow up, PNF (p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF (q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF (p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of these two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
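
    Empirical versions of the two criteria are easy to compute from a validation data set: rank individuals by predicted risk and read off how many cases fall in the top fraction q (PCF), or how far down the ranking one must go to cover a fraction p of the cases (PNF). The sketch below uses simulated risks and outcomes and ignores the influence-function-based inference developed in the paper.

```python
import numpy as np

def pcf(risk, case, q):
    """Proportion of cases followed: fraction of eventual cases found in the
    top-q fraction of the population ranked by predicted risk."""
    risk, case = np.asarray(risk), np.asarray(case)
    order = np.argsort(risk)[::-1]
    n_top = int(np.ceil(q * len(risk)))
    return case[order[:n_top]].sum() / case.sum()

def pnf(risk, case, p):
    """Proportion needed to follow: smallest fraction of the population
    (highest risk first) whose follow-up covers a fraction p of all cases."""
    risk, case = np.asarray(risk), np.asarray(case)
    order = np.argsort(risk)[::-1]
    cum_cases = np.cumsum(case[order]) / case.sum()
    n_needed = np.searchsorted(cum_cases, p) + 1
    return n_needed / len(risk)

# Hypothetical validation data: predicted risks and observed outcomes.
rng = np.random.default_rng(1)
risk = rng.beta(2, 8, size=5000)
case = rng.random(5000) < risk            # outcomes consistent with the risks

print("PCF(0.20) =", round(pcf(risk, case, 0.20), 3))   # cases caught in top 20%
print("PNF(0.80) =", round(pnf(risk, case, 0.80), 3))   # population needed for 80% of cases
```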

  2. Methods for Handling Missing Variables in Risk Prediction Models

    NARCIS (Netherlands)

    Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.

    2016-01-01

    Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient

  3. Predictive value of gastric intramucosal pH for critical patients

    Institute of Scientific and Technical Information of China (English)

    Hong Tao; Bing Wen Jing; Shu Zhen Li; Xiang Yu Zhang

    2000-01-01

    AIM To observe the predictive value of gastric intramucosal pH (pHi) for critical patients. METHODS The gastric intramucosal pH (pHi) of 32 ICU patients was measured with a self-made gastrointestinal tonometer, and the APACHE II score was determined simultaneously. RESULTS The pHi of the nonsurvivors was significantly lower than that of the survivors (P<0.05). The pHi was remarkably higher in the nonsepsis group than in the sepsis group (P<0.01). Only in the multiple organ failure group was pHi found to be statistically lower (P<0.05). CONCLUSION pHi may be the most simple, reliable, sensitive and accurate parameter to indicate the adequacy of tissue oxygenation, and it may be widely used in ICU monitoring in the near future.

  4. Infarct volume predicts critical care needs in stroke patients treated with intravenous thrombolysis

    Energy Technology Data Exchange (ETDEWEB)

    Faigle, Roland; Marsh, Elisabeth B.; Llinas, Rafael H.; Urrutia, Victor C. [Johns Hopkins University School of Medicine, Department of Neurology, Baltimore, MD (United States); Wozniak, Amy W. [Johns Hopkins University, Department of Biostatistics, Bloomberg School of Public Health, Baltimore, MD (United States)

    2014-10-26

    Patients receiving intravenous thrombolysis with recombinant tissue plasminogen activator (IVT) for ischemic stroke are monitored in an intensive care unit (ICU) or a comparable unit capable of ICU interventions due to the high frequency of standardized neurological exams and vital sign checks. The present study evaluates quantitative infarct volume on early post-IVT MRI as a predictor of critical care needs and aims to identify patients who may not require resource intense monitoring. We identified 46 patients who underwent MRI within 6 h of IVT. Infarct volume was measured using semiautomated software. Logistic regression and receiver operating characteristics (ROC) analysis were used to determine factors associated with ICU needs. Infarct volume was an independent predictor of ICU need after adjusting for age, sex, race, systolic blood pressure, NIH Stroke Scale (NIHSS), and coronary artery disease (odds ratio 1.031 per cm³ increase in volume, 95 % confidence interval [CI] 1.004-1.058, p = 0.024). The ROC curve with infarct volume alone achieved an area under the curve (AUC) of 0.766 (95 % CI 0.605-0.927), while the AUC was 0.906 (95 % CI 0.814-0.998) after adjusting for race, systolic blood pressure, and NIHSS. Maximum Youden index calculations identified an optimal infarct volume cut point of 6.8 cm³ (sensitivity 75.0 %, specificity 76.7 %). Infarct volume greater than 3 cm³ predicted need for critical care interventions with 81.3 % sensitivity and 66.7 % specificity. Infarct volume may predict needs for ICU monitoring and interventions in stroke patients treated with IVT. (orig.)
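
    The cut-point selection reported above follows the usual ROC/Youden recipe, which can be reproduced as follows; the simulated volumes and outcomes are placeholders for the study's 46 patients.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: infarct volumes (cm^3) and whether the patient
# later needed an ICU-level intervention.
rng = np.random.default_rng(2)
needed_icu = rng.random(46) < 0.4
volume = np.where(needed_icu,
                  rng.lognormal(2.2, 0.8, 46),     # larger infarcts among ICU cases
                  rng.lognormal(1.0, 0.8, 46))

fpr, tpr, thresholds = roc_curve(needed_icu, volume)
print("AUC:", round(roc_auc_score(needed_icu, volume), 3))

# Maximum Youden index J = sensitivity + specificity - 1 picks the cut point.
youden = tpr - fpr
best = np.argmax(youden)
print(f"optimal cut point ~ {thresholds[best]:.1f} cm^3 "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```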

  5. A self-organized critical model for evolution

    Energy Technology Data Exchange (ETDEWEB)

    Flyvbjerg, H.; Bak, P.; Jensen, M.H.; Sneppen, K.

    1996-01-01

    A simple mathematical model of biological macroevolution is presented. It describes an ecology of adapting, interacting species. Species evolve to maximize their individual fitness in their environment. The environment of any given species is affected by other evolving species; hence it is not constant in time. The ecology evolves to a "self-organized critical" state where periods of stasis alternate with avalanches of causally connected evolutionary changes. This characteristic intermittent behaviour of natural history, known as "punctuated equilibrium," thus finds a theoretical explanation as a self-organized critical phenomenon. In particular, large bursts of apparently simultaneous evolutionary activity require no external cause. They occur as the less frequent result of the very same dynamics that governs the more frequent small-scale evolutionary activity. Our results are compared with data from the fossil record collected by J. Sepkoski, Jr., and others.

  6. Robust criticality of Ising model on rewired directed networks

    CERN Document Server

    Lipowski, Adam; Lipowska, Dorota

    2015-01-01

    We show that preferential rewiring, which is supposed to mimic the behaviour of financial agents, changes a directed-network Ising ferromagnet with a single critical point into a model with robust critical behaviour. For the non-rewired random graph version, due to a constant number of links outgoing from each site, we write a simple mean-field-like equation describing the behaviour of magnetization; we argue that it is exact and support the claim with extensive Monte Carlo simulations. For the rewired version, this equation is obeyed only at low temperatures. At higher temperatures, rewiring leads to strong heterogeneities, which apparently invalidate mean-field arguments and induce large fluctuations and a divergent susceptibility. Such behaviour is traced back to the formation of a relatively small core of agents which influence the entire system.

  7. Estimating the magnitude of prediction uncertainties for the APLE model

    Science.gov (United States)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  8. Predictive modeling of low solubility semiconductor alloys

    Science.gov (United States)

    Rodriguez, Garrett V.; Millunchick, Joanna M.

    2016-09-01

    GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.

  9. Leptogenesis in minimal predictive seesaw models

    Science.gov (United States)

    Björkeroth, Fredrik; de Anda, Francisco J.; de Medeiros Varzielas, Ivo; King, Stephen F.

    2015-10-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0, 1, 1) and (1, n, n - 2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n = 3, while a Z_9 symmetry fixes the relative phase to be a ninth root of unity.

  10. Critical dynamics of cluster algorithms in the dilute Ising model

    Science.gov (United States)

    Hennecke, M.; Heyken, U.

    1993-08-01

    Autocorrelation times for thermodynamic quantities at T_C are calculated from Monte Carlo simulations of the site-diluted simple cubic Ising model, using the Swendsen-Wang and Wolff cluster algorithms. Our results show that for these algorithms the autocorrelation times decrease when reducing the concentration of magnetic sites from 100% down to 40%. This is of crucial importance when estimating static properties of the model, since the variances of these estimators increase with autocorrelation time. The dynamical critical exponents are calculated for both algorithms, observing pronounced finite-size effects in the energy autocorrelation data for the algorithm of Wolff. We conclude that, when applied to the dilute Ising model, cluster algorithms become even more effective than local algorithms, for which increasing autocorrelation times are expected.
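
    Comparing algorithms in this way requires an estimate of the integrated autocorrelation time from the simulation time series. A common windowed estimator is sketched below on a synthetic AR(1) series standing in for real Monte Carlo data; the windowing constant is a conventional heuristic, not a value taken from the paper.

```python
import numpy as np

def integrated_autocorr_time(series, c=6.0):
    """Integrated autocorrelation time with an automatic window
    (sum lags until the window exceeds c * tau, a common heuristic)."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * np.arange(n, 0, -1))
    tau = 1.0
    for m in range(1, n):
        tau = 1.0 + 2.0 * acf[1:m + 1].sum()
        if m >= c * tau:
            break
    return tau

# Hypothetical Monte Carlo time series of the energy, e.g. recorded once per
# cluster update; an AR(1) process stands in for real simulation data.
rng = np.random.default_rng(4)
rho = 0.8                                  # known tau = (1 + rho) / (1 - rho) = 9
e = np.zeros(10000)
for t in range(1, e.size):
    e[t] = rho * e[t - 1] + rng.normal()
print("estimated tau_int:", round(integrated_autocorr_time(e), 2))
```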

  11. Self-organized criticality model for brain plasticity.

    Science.gov (United States)

    de Arcangelis, Lucilla; Perrone-Capano, Carla; Herrmann, Hans J

    2006-01-20

    Networks of living neurons exhibit an avalanche mode of activity, experimentally found in organotypic cultures. Here we present a model that is based on self-organized criticality and takes into account brain plasticity, which is able to reproduce the spectrum of electroencephalograms (EEG). The model consists of an electrical network with threshold firing and activity-dependent synapse strengths. The system exhibits an avalanche activity in a power-law distribution. The analysis of the power spectra of the electrical signal reproduces very robustly the power-law behavior with the exponent 0.8, experimentally measured in EEG spectra. The same value of the exponent is found on small-world lattices and for leaky neurons, indicating that universality holds for a wide class of brain models.

  12. A Fisher’s Criterion-Based Linear Discriminant Analysis for Predicting the Critical Values of Coal and Gas Outbursts Using the Initial Gas Flow in a Borehole

    Directory of Open Access Journals (Sweden)

    Xiaowei Li

    2017-01-01

    Full Text Available The risk of coal and gas outbursts can be predicted using a method that is linear and continuous and based on the initial gas flow in the borehole (IGFB); this method is significantly superior to the traditional point prediction method. Acquiring accurate critical values is the key to ensuring accurate predictions. Based on an ideal rock cross-cut coal uncovering model, the IGFB measurement device was developed. The present study measured the initial gas flow over 3 min in a 1 m long borehole with a diameter of 42 mm in the laboratory. A total of 48 sets of data were obtained. These data were fuzzy and chaotic. Fisher's discrimination method was able to transform these spatial data, which were multidimensional due to the factors influencing the IGFB, into a one-dimensional function and determine its critical value. Then, by processing the data into a normal distribution, the critical values of the outbursts were analyzed using linear discriminant analysis with Fisher's criterion. The weak and strong outbursts had critical values of 36.63 L and 80.85 L, respectively, and the accuracy of the back-discriminant analysis for the weak and strong outbursts was 94.74% and 92.86%, respectively. Eight outburst tests were simulated in the laboratory, the reverse verification accuracy was 100%, and the accuracy of the critical value was verified.
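
    The discriminant step can be illustrated with an off-the-shelf Fisher LDA: project the multidimensional IGFB observations onto one dimension and read thresholds off the projected scores. The synthetic two-feature data and the midpoint thresholds below are illustrative assumptions, not the study's 48 laboratory measurements or its exact critical values.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical stand-in for the laboratory measurements: each row is a
# multidimensional IGFB observation, each label a class
# (0 = no outburst, 1 = weak outburst, 2 = strong outburst).
rng = np.random.default_rng(3)
X = np.vstack([
    rng.normal([20, 5],  [6, 2],  size=(16, 2)),
    rng.normal([45, 12], [8, 3],  size=(16, 2)),
    rng.normal([90, 20], [10, 4], size=(16, 2)),
])
y = np.repeat([0, 1, 2], 16)

lda = LinearDiscriminantAnalysis(n_components=1)
score = lda.fit_transform(X, y).ravel()        # Fisher projection to one dimension

# Critical values can be read off as score midpoints between class means,
# analogous to the 36.63 L and 80.85 L volumes reported in the paper.
means = [score[y == c].mean() for c in (0, 1, 2)]
print("weak-outburst threshold  :", round((means[0] + means[1]) / 2, 2))
print("strong-outburst threshold:", round((means[1] + means[2]) / 2, 2))
print("back-discrimination accuracy:", lda.score(X, y))
```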

  13. The critical power function is dependent on the duration of the predictive exercise tests chosen.

    Science.gov (United States)

    Bishop, D; Jenkins, D G; Howard, A

    1998-02-01

    The linear relationship between work accomplished (W(lim)) and time to exhaustion (t(lim)) can be described by the equation: W(lim) = a + CP x t(lim). Critical power (CP) is the slope of this line and is thought to represent a maximum rate of ATP synthesis without exhaustion, presumably an inherent characteristic of the aerobic energy system. The present investigation determined whether the choice of predictive tests would elicit significant differences in the estimated CP. Ten female physical education students completed, in random order and on consecutive days, five all-out predictive tests at preselected constant-power outputs. Predictive tests were performed on an electrically-braked cycle ergometer and power loadings were individually chosen so as to induce fatigue within approximately 1-10 min. CP was derived by fitting the linear W(lim)-t(lim) regression and calculated three ways: 1) using the first, third and fifth W(lim)-t(lim) coordinates (I135), 2) using coordinates from the three highest power outputs (I123; mean t(lim) = 68-193 s) and 3) using coordinates from the lowest power outputs (I345; mean t(lim) = 193-485 s). Repeated measures ANOVA revealed that CP_I123 (201.0 ± 37.9 W) > CP_I135 (176.1 ± 27.6 W) > CP_I345 (164.0 ± 22.8 W) (P<0.05). When the three sets of data were used to fit the hyperbolic Power-t(lim) regression, statistically significant differences between each CP were also found (P<0.05). The shorter the predictive trials, the greater the slope of the W(lim)-t(lim) regression; possibly because of the greater influence of 'aerobic inertia' on these trials. This may explain why CP has failed to represent a maximal, sustainable work rate. The present findings suggest that if CP is to represent the highest power output that an individual can maintain "for a very long time without fatigue" then CP should be calculated over a range of predictive tests in which the influence of aerobic inertia is minimised.
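
    The CP calculation itself is a straight-line fit of work done against time to exhaustion, with CP as the slope. The sketch below uses made-up trial data but reproduces the qualitative finding that subsets of shorter trials yield higher CP estimates than subsets of longer trials.

```python
import numpy as np

def critical_power(t_lim, w_lim):
    """Slope (CP) and intercept (a) of the linear model W_lim = a + CP * t_lim."""
    cp, a = np.polyfit(t_lim, w_lim, 1)
    return cp, a

# Hypothetical power outputs (W) and times to exhaustion (s) for five trials.
power = np.array([340.0, 310.0, 280.0, 250.0, 220.0])
t_lim = np.array([75.0, 110.0, 180.0, 300.0, 480.0])
w_lim = power * t_lim                      # work accomplished in each trial

for label, idx in [("I123 (shortest trials)", [0, 1, 2]),
                   ("I135 (mixed trials)",    [0, 2, 4]),
                   ("I345 (longest trials)",  [2, 3, 4])]:
    cp, _ = critical_power(t_lim[idx], w_lim[idx])
    print(f"{label}: CP ~ {cp:.0f} W")
```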

  14. Circulating MicroRNA-223 Serum Levels Do Not Predict Sepsis or Survival in Patients with Critical Illness

    Directory of Open Access Journals (Sweden)

    Fabian Benz

    2015-01-01

    Full Text Available Background and Aims. Dysregulation of miR-223 was recently linked to various diseases associated with systemic inflammatory responses such as type 2 diabetes, cancer, and bacterial infections. However, contradictory results are available on potential alterations of miR-223 serum levels during sepsis. We thus aimed to evaluate the diagnostic and prognostic value of miR-223 serum concentrations in patients with critical illness and sepsis. Methods. We used i.v. injection of lipopolysaccharide (LPS) as well as cecal pole ligation and puncture (CLP) for induction of polymicrobial sepsis in mice and measured alterations in serum levels of miR-223. These results from mice were translated into a large and well-characterized cohort of critically ill patients admitted to the medical intensive care unit (ICU). Finally, results from analysis in patients were correlated with clinical data and extensive sets of routine and experimental biomarkers. Results. Although LPS injection induced moderately elevated serum miR-223 levels in mice, no significant alterations in miR-223 serum levels were found in mice after CLP-induced sepsis. In accordance with these results from animal models, serum miR-223 levels did not differ between critically ill patients and healthy controls. However, ICU patients with more severe disease (APACHE-II score) showed moderately reduced circulating miR-223. Strikingly, no differences in miR-223 levels were found in critically ill patients with or without sepsis, and serum levels of miR-223 did not correlate with classical markers of inflammation or bacterial infection. Finally, low miR-223 serum levels were moderately associated with an unfavorable prognosis of patients during the ICU treatment but did not predict long-term mortality. Conclusion. Recent reports on alterations in miR-223 serum levels during sepsis revealed contradictory results, preventing a potential use of this miRNA in clinical routine. We clearly show that miR-223 serum

  15. Critical noise of majority-vote model on complex networks

    CERN Document Server

    Chen, Hanshuang; He, Gang; Zhang, Haifeng; Hou, Zhonghuai

    2016-01-01

    The majority-vote model with noise is one of the simplest nonequilibrium statistical model that has been extensively studied in the context of complex networks. However, the relationship between the critical noise where the order-disorder phase transition takes place and the topology of the underlying networks is still lacking. In the paper, we use the heterogeneous mean-field theory to derive the rate equation for governing the model's dynamics that can analytically determine the critical noise $f_c$ in the limit of infinite network size $N\\rightarrow \\infty$. The result shows that $f_c$ depends on the ratio of ${\\left\\langle k \\right\\rangle }$ to ${\\left\\langle k^{3/2} \\right\\rangle }$, where ${\\left\\langle k \\right\\rangle }$ and ${\\left\\langle k^{3/2} \\right\\rangle }$ are the average degree and the $3/2$ order moment of degree distribution, respectively. Furthermore, we consider the finite size effect where the stochastic fluctuation should be involved. To the end, we derive the Langevin equation and obtai...
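
    The update rule of the majority-vote model with noise is simple enough to simulate directly, which is one way to see how the critical noise shifts with degree heterogeneity. The sketch below runs the model on a scale-free graph; the network size, sweep counts and noise values are arbitrary choices for illustration, and the simulation is far too small for a serious finite-size analysis.

```python
import numpy as np
import networkx as nx

def average_magnetization(graph, noise, sweeps=200, rng=None):
    """Average |magnetization| of the majority-vote model with noise f.

    Update rule: with probability 1 - f a node adopts the sign of the local
    majority of its neighbours, with probability f it adopts the opposite sign.
    """
    rng = rng or np.random.default_rng(0)
    nodes = list(graph.nodes())
    neighbours = [list(graph.neighbors(n)) for n in nodes]
    spins = rng.choice([-1, 1], size=len(nodes))
    samples = []
    for sweep in range(sweeps):
        for _ in range(len(nodes)):
            i = rng.integers(len(nodes))
            s = spins[neighbours[i]].sum()
            majority = int(np.sign(s)) or int(rng.choice([-1, 1]))   # tie -> random
            spins[i] = -majority if rng.random() < noise else majority
        if sweep >= sweeps // 2:                  # discard the transient
            samples.append(abs(spins.mean()))
    return float(np.mean(samples))

# A scale-free graph gives the heterogeneous degree distribution on which the
# paper's mean-field result depends through <k> and <k^(3/2)>.
g = nx.barabasi_albert_graph(500, 4, seed=1)
for f in (0.05, 0.15, 0.25, 0.35):
    print(f"f = {f:.2f}   <|m|> ~ {average_magnetization(g, f):.3f}")
```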

  16. Crossover Equation of State Models Applied to the Critical Behavior of Xenon

    Science.gov (United States)

    Garrabos, Y.; Lecoutre, C.; Marre, S.; Guillaument, R.; Beysens, D.; Hahn, I.

    2015-03-01

    The turbidity measurements of Güttinger and Cannell (Phys Rev A 24:3188-3201, 1981) in the temperature range along the critical isochore of homogeneous xenon are reanalyzed. The singular behaviors of the isothermal compressibility and the correlation length predicted from the master crossover functions are introduced in the turbidity functional form derived by Puglielli and Ford (Phys Rev Lett 25:143-146, 1970). We show that the turbidity data are thus well represented by the Ornstein-Zernike approximant, within 1 % precision. We also introduce a new crossover master model (CMM) of the parametric equation of state for a simple fluid system with no adjustable parameter. The CMM model and the phenomenological crossover parametric model are compared with the turbidity data and the coexisting liquid-gas density difference. The excellent agreement observed for these quantities in a finite temperature range well beyond the Ising-like preasymptotic domain confirms that the Ising-like critical crossover behavior of xenon can be described in conformity with the universal features estimated by the renormalization-group methods. Only 4 critical coordinates of the vapor-liquid critical point are needed in the (pressure, temperature, molecular volume) phase surface of xenon.

  17. A Monte Carlo method for critical systems in infinite volume: the planar Ising model

    CERN Document Server

    Herdeiro, Victor

    2016-01-01

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three- and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.

  18. Monte Carlo method for critical systems in infinite volume: The planar Ising model.

    Science.gov (United States)

    Herdeiro, Victor; Doyon, Benjamin

    2016-10-01

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.

  19. The critical properties of the agent-based model with environmental-economic interactions

    Science.gov (United States)

    Kuscsik, Z.; Horváth, D.

    2008-05-01

    The steady-state and nonequilibrium properties of the model of environmental-economic interactions are studied. The interacting heterogeneous agents are simulated on the platform of the emission dynamics of cellular automaton. The diffusive emissions are produced by the factory agents, and the local pollution is monitored by the randomly walking (mobile) sensors. When the threshold concentration is exceeded, a feedback signal is transmitted from the sensor to the nearest factory that affects its actual production rate. The model predicts the discontinuous phase transition between safe and catastrophic ecology. Right at the critical line, the broad-scale power-law distributions of emission rates have been identified. The power-law fluctuations are triggered by the screening effect of factories and by the time delay between the environment contamination and its detection. The system shows the typical signs of the self-organized critical systems, such as power-law distributions and scaling laws.

  20. Critical mingling and universal correlations in model binary active liquids

    Science.gov (United States)

    Bain, Nicolas; Bartolo, Denis

    2017-06-01

    Ensembles of driven or motile bodies moving along opposite directions are generically reported to self-organize into strongly anisotropic lanes. Here, building on a minimal model of self-propelled bodies targeting opposite directions, we first evidence a critical phase transition between a mingled state and a phase-separated lane state specific to active particles. We then demonstrate that the mingled state displays algebraic structural correlations also found in driven binary mixtures. Finally, constructing a hydrodynamic theory, we single out the physical mechanisms responsible for these universal long-range correlations typical of ensembles of oppositely moving bodies.

  1. Critical Exponents of Ferromagnetic Ising Model on Fractal Lattices

    Science.gov (United States)

    Hsiao, Pai-Yi

    2001-04-01

    We review the values of the critical exponents 1/ν, β/ν, and γ/ν of the ferromagnetic Ising model on fractal lattices of Hausdorff dimension between one and three. They are obtained by Monte Carlo simulation with the help of the Wolff algorithm. The results are accurate enough to show that the hyperscaling law d_f = 2β/ν + γ/ν is satisfied in non-integer dimension. Nevertheless, the discrepancy between the simulation results and the γ-expansion studies suggests that the strong universality should be adapted for the fractal lattices.

  2. Comparing model predictions for ecosystem-based management

    DEFF Research Database (Denmark)

    Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste

    2016-01-01

    Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects...

  3. Remaining Useful Lifetime (RUL) - Probabilistic Predictive Model

    Directory of Open Access Journals (Sweden)

    Ephraim Suhir

    2011-01-01

    Full Text Available Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated (considering customer requirements and the existing specifications) by electrical, optical and mechanical measurements and testing; checked (screened) during manufacturing (fabrication); and, if necessary and appropriate, maintained in the field during the product's operation. A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve have been assessed. The general concepts are illustrated by numerical examples. The model can be employed, along with other PHM forecasting and interfering tools and means, to evaluate and to maintain the high level of reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.

  4. A Predictive Model of Geosynchronous Magnetopause Crossings

    CERN Document Server

    Dmitriev, A; Chao, J -K

    2013-01-01

    We have developed a model predicting whether or not the magnetopause crosses geosynchronous orbit at a given location for given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF) and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 minutes when geosynchronous satellites of the GOES and LANL series are located in the magnetosheath (so-called MSh intervals) in 1994 to 2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz and SYM-H allows us to describe both the effect of magnetopause dawn-dusk asymmetry and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of t...

  5. Predictive modeling for EBPC in EBDW

    Science.gov (United States)

    Zimmermann, Rainer; Schulz, Martin; Hoppe, Wolfgang; Stock, Hans-Jürgen; Demmerle, Wolfgang; Zepka, Alex; Isoyan, Artak; Bomholt, Lars; Manakli, Serdar; Pain, Laurent

    2009-10-01

    We demonstrate a flow for e-beam proximity correction (EBPC) to e-beam direct write (EBDW) wafer manufacturing processes, demonstrating a solution that covers all steps from the generation of a test pattern for (experimental or virtual) measurement data creation, over e-beam model fitting, proximity effect correction (PEC), and verification of the results. We base our approach on a predictive, physical e-beam simulation tool, with the possibility to complement this with experimental data, and the goal of preparing the EBPC methods for the advent of high-volume EBDW tools. As an example, we apply and compare dose correction and geometric correction for low and high electron energies on 1D and 2D test patterns. In particular, we show some results of model-based geometric correction as it is typical for the optical case, but enhanced for the particularities of e-beam technology. The results are used to discuss PEC strategies, with respect to short and long range effects.

  6. Defects in the tri-critical Ising model

    Science.gov (United States)

    Makabe, Isao; Watts, Gérard M. T.

    2017-09-01

    We consider two different conformal field theories with central charge c = 7/10. One is the diagonal invariant minimal model in which all fields have integer spins; the other is the local fermionic theory with superconformal symmetry in which fields can have half-integer spin. We construct new conformal (but not topological or factorised) defects in the minimal model. We do this by first constructing defects in the fermionic model as boundary conditions in a fermionic theory of central charge c = 7/5, using the folding trick as first proposed by Gang and Yamaguchi [1]. We then act on these with interface defects to find the new conformal defects. As part of the construction, we find the topological defects in the fermionic theory and the interfaces between the fermionic theory and the minimal model. We also consider the simpler case of defects in the theory of a single free fermion and interface defects between the Ising model and a single fermion as a prelude to calculations in the tri-critical Ising model.

  7. Integrable modification of the critical Chalker-Coddington network model

    Science.gov (United States)

    Ikhlef, Yacine; Fendley, Paul; Cardy, John

    2011-10-01

    We consider the Chalker-Coddington network model for the integer quantum Hall effect, and examine the possibility of solving it exactly. In the supersymmetric path integral framework, we introduce a truncation procedure, leading to a series of well-defined two-dimensional loop models with two loop flavors. In the phase diagram of the first-order truncated model, we identify four integrable branches related to the dilute Birman-Wenzl-Murakami braid-monoid algebra and parameterized by the loop fugacity n. In the continuum limit, two of these branches (1,2) are described by a pair of decoupled copies of a Coulomb-gas theory, whereas the other two branches (3,4) couple the two loop flavors, and relate to an SU(2)_r × SU(2)_r / SU(2)_{2r} Wess-Zumino-Witten (WZW) coset model for the particular values n = -2cos[π/(r+2)], where r is a positive integer. The truncated Chalker-Coddington model is the n = 0 point of branch 4. By numerical diagonalization, we find that its universality class is neither an analytic continuation of the WZW coset nor the universality class of the original Chalker-Coddington model. It constitutes rather an integrable, critical approximation to the latter.

  8. Net reclassification indices for evaluating risk prediction instruments: a critical review.

    Science.gov (United States)

    Kerr, Kathleen F; Wang, Zheyu; Janes, Holly; McClelland, Robyn L; Psaty, Bruce M; Pepe, Margaret S

    2014-01-01

    Net reclassification indices have recently become popular statistics for measuring the prediction increment of new biomarkers. We review the various types of net reclassification indices and their correct interpretations. We evaluate the advantages and disadvantages of quantifying the prediction increment with these indices. For predefined risk categories, we relate net reclassification indices to existing measures of the prediction increment. We also consider statistical methodology for constructing confidence intervals for net reclassification indices and evaluate the merits of hypothesis testing based on such indices. We recommend that investigators using net reclassification indices should report them separately for events (cases) and nonevents (controls). When there are two risk categories, the components of net reclassification indices are the same as the changes in the true- and false-positive rates. We advocate the use of true- and false-positive rates and suggest it is more useful for investigators to retain the existing, descriptive terms. When there are three or more risk categories, we recommend against net reclassification indices because they do not adequately account for clinically important differences in shifts among risk categories. The category-free net reclassification index is a new descriptive device designed to avoid predefined risk categories. However, it experiences many of the same problems as other measures such as the area under the receiver operating characteristic curve. In addition, the category-free index can mislead investigators by overstating the incremental value of a biomarker, even in independent validation data. When investigators want to test a null hypothesis of no prediction increment, the well-established tests for coefficients in the regression model are superior to the net reclassification index. If investigators want to use net reclassification indices, confidence intervals should be calculated using bootstrap methods
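
    The point that, with two risk categories, the event and non-event components of the net reclassification index equal the changes in the true- and false-positive rates can be made concrete with a short sketch. The threshold and arrays below are placeholders, and the components are reported separately as the record recommends.

    ```python
    import numpy as np

    def two_category_nri(y, risk_old, risk_new, threshold=0.1):
        """Event and non-event NRI components for two predefined risk categories.

        y         : 1 = event (case), 0 = non-event (control)
        risk_old  : predicted risks from the baseline model
        risk_new  : predicted risks from the model with the added biomarker
        threshold : cut-off defining the high-risk category (placeholder value)
        """
        y = np.asarray(y, dtype=bool)
        old_high = np.asarray(risk_old) >= threshold
        new_high = np.asarray(risk_new) >= threshold
        up, down = new_high & ~old_high, ~new_high & old_high

        # With two categories these are exactly the change in the true-positive
        # rate (events) and the change in the false-positive rate (non-events).
        nri_events = up[y].mean() - down[y].mean()
        nri_nonevents = down[~y].mean() - up[~y].mean()
        return nri_events, nri_nonevents
    ```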

  9. Critical behavior of the random-bond clock model

    Science.gov (United States)

    Wu, Raymond P. H.; Lo, Veng-cheong; Huang, Haitao

    2012-09-01

    The critical behavior of the clock model on the two-dimensional square lattice is studied numerically using the Monte Carlo method with the Wolff algorithm. The Kosterlitz-Thouless (KT) transition is observed in the 8-state clock model, where an intermediate phase exists between the low-temperature ordered phase and the high-temperature disordered phase. Bond randomness is introduced into the system by assuming a Gaussian distribution for the coupling coefficients with mean μ = 1 and different values of the variance, from σ² = 0.1 to σ² = 3.0. An abrupt jump in the helicity modulus at the transition, which is the key characteristic of the KT transition, is verified with a stability argument. Our results show that a small amount of disorder (small σ) reduces the critical temperature of the system without altering the nature of the transition. However, a larger amount of disorder changes the transition from KT-type to non-KT-type.
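
    A minimal sketch of the disordered model studied here, assuming single-site Metropolis updates rather than the Wolff-cluster updates used in the record: an 8-state clock model on a periodic square lattice with Gaussian random bonds of mean 1 and variance σ². Lattice size, temperature and sweep count are illustrative only, and the helicity-modulus analysis is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L, q, T, sigma2 = 16, 8, 0.9, 0.1                   # illustrative parameters
    angles = 2.0 * np.pi * np.arange(q) / q

    # Gaussian random bonds J ~ N(1, sigma^2) on horizontal (Jx) and vertical (Jy) links.
    Jx = rng.normal(1.0, np.sqrt(sigma2), size=(L, L))
    Jy = rng.normal(1.0, np.sqrt(sigma2), size=(L, L))
    spins = rng.integers(q, size=(L, L))

    def site_energy(s, i, j, state):
        """Energy -sum J*cos(theta_i - theta_j) of site (i, j) with its four neighbours."""
        th = angles[state]
        e = -Jx[i, j] * np.cos(th - angles[s[i, (j + 1) % L]])
        e -= Jx[i, (j - 1) % L] * np.cos(th - angles[s[i, (j - 1) % L]])
        e -= Jy[i, j] * np.cos(th - angles[s[(i + 1) % L, j]])
        e -= Jy[(i - 1) % L, j] * np.cos(th - angles[s[(i - 1) % L, j]])
        return e

    for _ in range(200 * L * L):                        # Metropolis updates
        i, j = rng.integers(L, size=2)
        new = int(rng.integers(q))
        dE = site_energy(spins, i, j, new) - site_energy(spins, i, j, spins[i, j])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = new
    ```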

  10. Critical rotation of general-relativistic polytropic models revisited

    Science.gov (United States)

    Geroyannis, V.; Karageorgopoulos, V.

    2013-09-01

    We develop a perturbation method for computing the critical rotational parameter as a function of the equatorial radius of a rigidly rotating polytropic model in the "post-Newtonian approximation" (PNA). We treat our models as "initial value problems" (IVP) of ordinary differential equations in the complex plane. The computations are carried out by the code dcrkf54.f95 (Geroyannis and Valvi 2012 [P1]; modified Runge-Kutta-Fehlberg code of fourth and fifth order for solving initial value problems in the complex plane). Such a complex-plane treatment removes the syndromes appearing in this particular family of IVPs (see e.g. P1, Sec. 3) and allows continuation of the numerical integrations beyond the surface of the star. Thus all the required values of the Lane-Emden function(s) in the post-Newtonian approximation are calculated by interpolation (so avoiding any extrapolation). An interesting point is that, in our computations, we take into account the complete correction due to the gravitational term, and this issue is a remarkable difference compared to the classical PNA. We solve the generalized density as a function of the equatorial radius and find the critical rotational parameter. Our computations are extended to certain other physical characteristics (like mass, angular momentum, rotational kinetic energy, etc). We find that our method yields results comparable with those of other reliable methods. REFERENCE: V.S. Geroyannis and F.N. Valvi 2012, International Journal of Modern Physics C, 23, No 5, 1250038:1-15.

  11. Modeling financial markets by self-organized criticality

    Science.gov (United States)

    Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea

    2015-10-01

    We present a financial market model, characterized by self-organized criticality, that is able to generate endogenously a realistic price dynamics and to reproduce well-known stylized facts. We consider a community of heterogeneous traders, composed by chartists and fundamentalists, and focus on the role of informative pressure on market participants, showing how the spreading of information, based on a realistic imitative behavior, drives contagion and causes market fragility. In this model imitation is not intended as a change in the agent's group of origin, but is referred only to the price formation process. We introduce in the community also a variable number of random traders in order to study their possible beneficial role in stabilizing the market, as found in other studies. Finally, we also suggest some counterintuitive policy strategies able to dampen fluctuations by means of a partial reduction of information.

  12. Modeling financial markets by self-organized criticality.

    Science.gov (United States)

    Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea

    2015-10-01

    We present a financial market model, characterized by self-organized criticality, that is able to generate endogenously a realistic price dynamics and to reproduce well-known stylized facts. We consider a community of heterogeneous traders, composed by chartists and fundamentalists, and focus on the role of informative pressure on market participants, showing how the spreading of information, based on a realistic imitative behavior, drives contagion and causes market fragility. In this model imitation is not intended as a change in the agent's group of origin, but is referred only to the price formation process. We introduce in the community also a variable number of random traders in order to study their possible beneficial role in stabilizing the market, as found in other studies. Finally, we also suggest some counterintuitive policy strategies able to dampen fluctuations by means of a partial reduction of information.

  13. Modelling Financial Markets by Self-Organized Criticality

    CERN Document Server

    Biondo, A E; Rapisarda, A

    2015-01-01

    We present a financial market model, characterized by self-organized criticality, that is able to generate endogenously a realistic price dynamics and to reproduce well-known stylized facts. We consider a community of heterogeneous traders, composed by chartists and fundamentalists, and focus on the role of informative pressure on market participants, showing how the spreading of information, based on a realistic imitative behavior, drives contagion and causes market fragility. In this model imitation is not intended as a change in the agent's group of origin, but is referred only to the price formation process. We introduce in the community also a variable number of random traders in order to study their possible beneficial role in stabilizing the market, as found in other studies. Finally we also suggest some counterintuitive policy strategies able to dampen fluctuations by means of a partial reduction of information.

  14. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and the numerical prediction model is often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than by using the complex numerical forecasting model that would occupy large computation resources, be time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  15. RFI modeling and prediction approach for SATOP applications: RFI prediction models

    Science.gov (United States)

    Nguyen, Tien M.; Tran, Hien T.; Wang, Zhonghai; Coons, Amanda; Nguyen, Charles C.; Lane, Steven A.; Pham, Khanh D.; Chen, Genshe; Wang, Gang

    2016-05-01

    This paper describes a technical approach for the development of RFI prediction models using carrier synchronization loop when calculating Bit or Carrier SNR degradation due to interferences for (i) detecting narrow-band and wideband RFI signals, and (ii) estimating and predicting the behavior of the RFI signals. The paper presents analytical and simulation models and provides both analytical and simulation results on the performance of USB (Unified S-Band) waveforms in the presence of narrow-band and wideband RFI signals. The models presented in this paper will allow the future USB command systems to detect the RFI presence, estimate the RFI characteristics and predict the RFI behavior in real-time for accurate assessment of the impacts of RFI on the command Bit Error Rate (BER) performance. The command BER degradation model presented in this paper also allows the ground system operator to estimate the optimum transmitted SNR to maintain a required command BER level in the presence of both friendly and un-friendly RFI sources.

  16. Hierarchical, model-based risk management of critical infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Baiardi, F. [Polo G.Marconi La Spezia, Universita di Pisa, Pisa (Italy); Dipartimento di Informatica, Universita di Pisa, L.go B.Pontecorvo 3 56127, Pisa (Italy)], E-mail: f.baiardi@unipi.it; Telmon, C.; Sgandurra, D. [Dipartimento di Informatica, Universita di Pisa, L.go B.Pontecorvo 3 56127, Pisa (Italy)

    2009-09-15

    Risk management is a process that includes several steps, from vulnerability analysis to the formulation of a risk mitigation plan that selects countermeasures to be adopted. With reference to an information infrastructure, we present a risk management strategy that considers a sequence of hierarchical models, each describing dependencies among infrastructure components. A dependency exists anytime a security-related attribute of a component depends upon the attributes of other components. We discuss how this notion supports the formal definition of risk mitigation plan and the evaluation of the infrastructure robustness. A hierarchical relation exists among models that are analyzed because each model increases the level of details of some components in a previous one. Since components and dependencies are modeled through a hypergraph, to increase the model detail level, some hypergraph nodes are replaced by more and more detailed hypergraphs. We show how critical information for the assessment can be automatically deduced from the hypergraph and define conditions that determine cases where a hierarchical decomposition simplifies the assessment. In these cases, the assessment has to analyze the hypergraph that replaces the component rather than applying again all the analyses to a more detailed, and hence larger, hypergraph. We also show how the proposed framework supports the definition of a risk mitigation plan and discuss some indicators of the overall infrastructure robustness. Lastly, the development of tools to support the assessment is discussed.

  17. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to understand.

  18. Developing Risk Prediction Models for Postoperative Pancreatic Fistula: a Systematic Review of Methodology and Reporting Quality.

    Science.gov (United States)

    Wen, Zhang; Guo, Ya; Xu, Banghao; Xiao, Kaiyin; Peng, Tao; Peng, Minhao

    2016-04-01

    Postoperative pancreatic fistula is still a major complication after pancreatic surgery, despite improvements of surgical technique and perioperative management. We sought to systematically review and critically assess the conduct and reporting of methods used to develop risk prediction models for predicting postoperative pancreatic fistula. We conducted a systematic search of PubMed and EMBASE databases to identify articles published before January 1, 2015, which described the development of models to predict the risk of postoperative pancreatic fistula. We extracted information on developing a prediction model including study design, sample size and number of events, definition of postoperative pancreatic fistula, risk predictor selection, missing data, model-building strategies, and model performance. Seven studies developing seven risk prediction models were included. In three studies (42 %), the number of events per variable was less than 10. The number of candidate risk predictors ranged from 9 to 32. Five studies (71 %) reported using univariate screening, which is not recommended in building a multivariate model, to reduce the number of risk predictors. Six risk prediction models (86 %) were developed by categorizing all continuous risk predictors. The treatment and handling of missing data were not mentioned in any of the studies. We found use of inappropriate methods that could endanger the development of the model, including univariate pre-screening of variables, categorization of continuous risk predictors, and model validation. The use of inappropriate methods affects the reliability and the accuracy of the probability estimates of predicting postoperative pancreatic fistula.

  19. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Prediction of foundation or subgrade settlement is very important during engineering construction. Given that there are many settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called the NGM (1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM (1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM (1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
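
    The NGM (1,1,k,c) model of this record modifies the whitenization differential equation of the classical grey model. For orientation only, the sketch below implements the standard GM(1,1) baseline (accumulated generation, least-squares estimation, inverse accumulation); the record's nonhomogeneous-index extension is not reproduced, and the sample data are placeholders.

    ```python
    import numpy as np

    def gm11_forecast(x0, steps=3):
        """Classical GM(1,1) grey forecast (baseline for the NGM(1,1,k,c) variant)."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                                # accumulated generating operation
        z = 0.5 * (x1[1:] + x1[:-1])                      # background values
        B = np.column_stack([-z, np.ones_like(z)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # development coefficient, grey input

        k = np.arange(len(x0) + steps)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        return np.r_[x1_hat[0], np.diff(x1_hat)]          # fitted values, then forecasts

    print(gm11_forecast([2.87, 3.28, 3.59, 3.81, 3.97], steps=2))  # placeholder settlement data
    ```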

  20. Predictability of the Indian Ocean Dipole in the coupled models

    Science.gov (United States)

    Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao

    2017-03-01

    In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multiple-model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, which is normally defined by an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because there are more false alarms in predictions as lead time increases. The DMI predictability has significant seasonal variation, and the predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with model development and initialization improvement, the prediction of IOD onset is likely to be improved but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with a high skill during the 1960s and the early 1990s, and a low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.
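
    The skill criterion used in the record, an anomaly correlation coefficient above 0.5, can be written down in a few lines. The sketch below assumes predicted and observed DMI series are available as arrays; the climatology argument and the series names are placeholders.

    ```python
    import numpy as np

    def anomaly_correlation(forecast, observed, climatology=None):
        """Anomaly correlation coefficient (ACC) between forecast and observed series."""
        forecast = np.asarray(forecast, dtype=float)
        observed = np.asarray(observed, dtype=float)
        if climatology is None:                 # fall back to the observed mean
            climatology = observed.mean()
        fa, oa = forecast - climatology, observed - climatology
        return np.sum(fa * oa) / np.sqrt(np.sum(fa ** 2) * np.sum(oa ** 2))

    # A lead time counts as "useful" in the record when ACC exceeds 0.5, e.g.:
    # useful = anomaly_correlation(dmi_forecast, dmi_observed) > 0.5
    ```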

  1. Physics-Informed Machine Learning for Predictive Turbulence Modeling: Using Data to Improve RANS Modeled Reynolds Stresses

    CERN Document Server

    Wang, Jian-Xun; Xiao, Heng

    2016-01-01

    Turbulence modeling is a critical component in numerical simulations of industrial flows based on Reynolds-averaged Navier-Stokes (RANS) equations. However, after decades of efforts in the turbulence modeling community, universally applicable RANS models with predictive capabilities are still lacking. Recently, data-driven methods have been proposed as a promising alternative to the traditional approaches of turbulence model development. In this work we propose a data-driven, physics-informed machine learning approach for predicting discrepancies in RANS modeled Reynolds stresses. The discrepancies are formulated as functions of the mean flow features. By using a modern machine learning technique based on random forests, the discrepancy functions are first trained with benchmark flow data and then used to predict Reynolds stresses discrepancies in new flows. The method is used to predict the Reynolds stresses in the flow over periodic hills by using two training flow scenarios of increasing difficulties: (1) ...
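
    A minimal scikit-learn sketch of the training step described here, with random placeholder arrays standing in for the mean-flow features and Reynolds-stress discrepancies of the benchmark flows; the feature count, tree count and data are assumptions, not the authors' configuration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Placeholder data: rows are mesh cells of the training flows, columns are
    # mean-flow features; y holds one component of the Reynolds-stress discrepancy.
    rng = np.random.default_rng(0)
    X_train, y_train = rng.random((5000, 10)), rng.random(5000)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Predict the discrepancy field of a new flow from its mean-flow features,
    # then use it to correct the baseline RANS Reynolds stresses.
    X_new = rng.random((2000, 10))
    discrepancy_new = model.predict(X_new)
    ```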

  2. Prediction on Critical Micelle Concentration of Nonionic Surfactants in Aqueous Solution: Quantitative Structure-Property Relationship Approach

    Institute of Scientific and Technical Information of China (English)

    王正武; 黄东阳; 宫素萍; 李干佐

    2003-01-01

    In order to predict the critical micelle concentration (cmc) of nonionic surfactants in aqueous solution, a quantitative structure-property relationship (QSPR) was found for 77 nonionic surfactants belonging to eight series. The best-regressed model contained four quantum-chemical descriptors, the heat of formation (ΔH), the molecular dipole moment (D), the energy of the lowest unoccupied molecular orbital (E_LUMO) and the energy of the highest occupied molecular orbital (E_HOMO) of the surfactant molecule; two constitutional descriptors, the molecular weight of the surfactant (M) and the number of oxygen and nitrogen atoms (n_ON) of the hydrophilic fragment of the surfactant molecule; and one topological descriptor, the Kier & Hall index of zero order (KH_0) of the hydrophobic fragment of the surfactant. The established general QSPR between lg(cmc) and the descriptors produced a relevant coefficient of multiple determination: R² = 0.986. When cross terms were considered, the corresponding best model contained five descriptors, E_LUMO, D, KH_0, M and a cross term n_ON·KH_0, which also produced the same coefficient as the seven-parameter model.
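
    The QSPR itself is an ordinary multiple linear regression of lg(cmc) on the descriptor columns. The sketch below shows that step with random placeholder descriptor values for the 77 surfactants; the reported R² = 0.986 applies to the authors' real descriptor data, not to this illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Placeholder descriptor matrix: columns stand for dH, D, E_LUMO, E_HOMO, M, n_ON, KH_0.
    X = rng.random((77, 7))
    lg_cmc = rng.random(77)                               # placeholder lg(cmc) values

    A = np.column_stack([np.ones(len(X)), X])             # add an intercept
    coef, *_ = np.linalg.lstsq(A, lg_cmc, rcond=None)

    pred = A @ coef
    ss_res = np.sum((lg_cmc - pred) ** 2)
    ss_tot = np.sum((lg_cmc - lg_cmc.mean()) ** 2)
    print("R^2 =", 1.0 - ss_res / ss_tot)
    ```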

  3. Theoretical Uncertainties due to AGN Subgrid Models in Predictions of Galaxy Cluster Observable Properties

    CERN Document Server

    Yang, H -Y K; Ricker, P M

    2012-01-01

    Cosmological constraints derived from galaxy clusters rely on accurate predictions of cluster observable properties, in which feedback from active galactic nuclei (AGN) is a critical component. In order to model the physical effects due to supermassive black holes (SMBH) on cosmological scales, subgrid modeling is required, and a variety of implementations have been developed in the literature. However, theoretical uncertainties due to model and parameter variations are not yet well understood, limiting the predictive power of simulations including AGN feedback. By performing a detailed parameter sensitivity study in a single cluster using several commonly-adopted AGN accretion and feedback models with FLASH, we quantify the model uncertainties in predictions of cluster integrated properties. We find that quantities that are more sensitive to gas density have larger uncertainties (~20% for Mgas and a factor of ~2 for Lx at R500), whereas Tx, Ysz, and Yx are more robust (~10-20% at R500). To make predictions b...

  4. Leptogenesis in minimal predictive seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Björkeroth, Fredrik [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom); Anda, Francisco J. de [Departamento de Física, CUCEI, Universidad de Guadalajara,Guadalajara (Mexico); Varzielas, Ivo de Medeiros; King, Stephen F. [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom)

    2015-10-15

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the “atmospheric” and “solar” neutrino masses with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and (1,n,n−2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n=3, while a ℤ_9 symmetry fixes the relative phase to be a ninth root of unity.

  5. A critical discussion on the applicability of Compound Topographic Index (CTI) for predicting ephemeral gully erosion

    Science.gov (United States)

    Casalí, Javier; Chahor, Youssef; Giménez, Rafael; Campo-Bescós, Miguel

    2016-04-01

    The so-called Compound Topographic Index (CTI) can be calculated for each grid cell in a DEM and used to identify potential locations for ephemeral gullies (EGs) based on land topography (CTI = A·S·PLANC, where A is upstream drainage area, S is local slope and PLANC is planform curvature, a measure of landscape convergence) (Parker et al., 2007). It can be shown that CTI represents stream power per unit bed area, and it considers the major parameters controlling the pattern and intensity of concentrated surface runoff in the field (Parker et al., 2007). However, other key variables controlling EG erosion, such as soil characteristics, land use and management, are not taken into consideration. The critical CTI value (CTIc) "represents the intensity of concentrated overland flow necessary to initiate erosion and channelised flow under a given set of circumstances" (Parker et al., 2007). The AnnAGNPS (Annualized Agricultural Non-Point Source) pollution model is an important management tool developed by the USDA and uses CTI to locate potential ephemeral gullies. Then, depending on the rainfall characteristics of the period simulated by AnnAGNPS, potential EGs can become "actual" and be simulated by the model accordingly. This paper presents preliminary results and a number of considerations after evaluating the CTI tool in Navarre. The CTIc values found are similar to those cited by other authors, and the EG networks that on average occur in the area have been located reasonably well. After our experience we believe that it is necessary to distinguish between the CTIc corresponding to the location of headcuts whose migration originates the EGs (CTIc1), and the CTIc necessary to represent the location of the gully networks in the watershed (CTIc2), where gully headcuts are located at the upstream end of the gullies. Most scientists only consider one CTIc value, although, from our point of view, the two situations are different. CTIc1 would represent the
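
    The CTI formula quoted above translates directly into a raster calculation. The sketch below, with random placeholder grids and an arbitrary critical value, flags the cells whose CTI exceeds CTIc as potential ephemeral-gully locations; real grids would come from a DEM, and the threshold is site-specific.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Placeholder rasters derived from a DEM (same shape, aligned cells):
    # upstream drainage area A, local slope S, planform curvature PLANC.
    A = rng.random((200, 200)) * 1e4
    S = rng.random((200, 200)) * 0.2
    PLANC = rng.random((200, 200)) * 0.05

    CTI = A * S * PLANC                      # compound topographic index per cell
    CTI_c = 20.0                             # placeholder critical value

    potential_gully_cells = CTI > CTI_c      # mask of potential ephemeral-gully cells
    print(int(potential_gully_cells.sum()), "cells above the critical CTI")
    ```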

  6. Developing Critical and Creative Thinkers: Toward a Conceptual Model of Creative and Critical Thinking Processes

    Science.gov (United States)

    Combs, Liesl Baum; Cennamo, Katherine S.; Newbill, Phyllis Leary

    2009-01-01

    Critical and creative thinking skills are essential for students who plan to work and excel in the 21st-century workforce. This goal of the project reported in this article was to define critical and creative thinking in a way that would be useful for classroom teachers charged with developing such skills in their students. To accomplish their…

  7. Incorporating Retention Time to Refine Models Predicting Thermal Regimes of Stream Networks Across New England

    Science.gov (United States)

    Thermal regimes are a critical factor in models predicting effects of watershed management activities on fish habitat suitability. We have assembled a database of lotic temperature time series across New England (> 7000 station-year combinations) from state and Federal data s...

  8. Risk-Based Predictive Maintenance for Safety-Critical Systems by Using Probabilistic Inference

    Directory of Open Access Journals (Sweden)

    Tianhua Xu

    2013-01-01

    Risk-based maintenance (RBM) aims to improve maintenance planning and decision making by reducing the probability and consequences of equipment failure. A new predictive maintenance strategy that integrates a dynamic evolution model and risk assessment is proposed, which can be used to calculate the optimal maintenance time with minimal cost under safety constraints. The dynamic evolution model provides quantified risks by using probabilistic inference with bucket elimination and gives the prospective degradation trend of a complex system. Based on the degradation trend, an optimal maintenance time can be determined by minimizing the expected maintenance cost per time unit. The effectiveness of the proposed method is validated and demonstrated by a collision accident of high-speed trains with obstacles in the presence of safety and cost constraints.
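
    The final optimization step, choosing the maintenance time that minimizes expected cost per unit time, can be illustrated numerically. In the sketch below a classical age-replacement cost rate with an assumed Weibull failure law stands in for the record's probabilistic-inference degradation model; all parameters are placeholders.

    ```python
    import numpy as np

    shape, scale = 2.5, 1000.0                     # assumed Weibull failure law (hours)
    c_preventive, c_failure = 1.0, 12.0            # relative costs of planned vs. failure repair

    t = np.linspace(1.0, 3000.0, 3000)
    reliability = np.exp(-(t / scale) ** shape)    # R(t)
    dt = t[1] - t[0]
    cycle_length = np.cumsum(reliability) * dt     # expected cycle length: integral of R(u) du

    # Age-replacement cost rate C(t) = [c_p * R(t) + c_f * F(t)] / E[cycle length].
    cost_rate = (c_preventive * reliability + c_failure * (1.0 - reliability)) / cycle_length
    print(f"optimal maintenance interval ~ {t[np.argmin(cost_rate)]:.0f} h")
    ```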

  9. Uncertainty modelling of critical column buckling for reinforced concrete buildings

    Indian Academy of Sciences (India)

    Kasim A Korkmaz; Fuat Demir; Hamide Tekeli

    2011-04-01

    Buckling is a critical issue for structural stability in structural design. In most buckling analyses, the applied loads, structural and material properties are considered certain. However, in reality, these parameters are uncertain. Therefore, a prognostic solution is necessary and uncertainties have to be considered. Fuzzy logic algorithms can be a solution to generate more dependable results. This study investigates the effect of material uncertainties on column design and proposes an uncertainty model for critical column buckling in reinforced concrete buildings. A fuzzy logic algorithm was employed in the study. Lower and upper bounds of the elastic modulus representing material properties were defined to take uncertainties into account. The results show that uncertainties play an important role in stability analyses and should be considered in the design. The proposed approach is applicable to both future numerical and experimental research. According to the study results, the calculated buckling load values stay within the lower and upper bounds, while the load values differ for the same concrete strength when different code formulas are used.
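
    The bounding idea can be illustrated with the Euler critical buckling load, P_cr = π²EI/L_eff², evaluated at the lower and upper bounds of the elastic modulus. The column dimensions and modulus bounds below are placeholders, and the record's fuzzy-logic machinery is not reproduced.

    ```python
    import numpy as np

    def euler_buckling_load(E, I, L_eff):
        """Euler critical load P_cr = pi^2 * E * I / L_eff^2 for a pinned-pinned column."""
        return np.pi ** 2 * E * I / L_eff ** 2

    b = h = 0.3                          # placeholder 300 mm x 300 mm cross-section
    I = b * h ** 3 / 12.0                # second moment of area
    L_eff = 3.0                          # placeholder effective length (m)

    # Lower and upper bounds of the concrete elastic modulus (Pa), standing in
    # for the fuzzy membership bounds used in the record.
    E_low, E_high = 25e9, 33e9
    P_low = euler_buckling_load(E_low, I, L_eff)
    P_high = euler_buckling_load(E_high, I, L_eff)
    print(f"critical buckling load between {P_low / 1e6:.1f} and {P_high / 1e6:.1f} MN")
    ```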

  10. The Critical Point Entanglement and Chaos in the Dicke Model

    Directory of Open Access Journals (Sweden)

    Lina Bao

    2015-07-01

    Ground state properties and level statistics of the Dicke model for a finite number of atoms are investigated based on a progressive diagonalization scheme (PDS). Particle number statistics, the entanglement measure and the Shannon information entropy at the resonance point in cases with a finite number of atoms as functions of the coupling parameter are calculated. It is shown that the entanglement measure defined in terms of the normalized von Neumann entropy of the reduced density matrix of the atoms reaches its maximum value at the critical point of the quantum phase transition where the system is most chaotic. Noticeable change in the Shannon information entropy near or at the critical point of the quantum phase transition is also observed. In addition, the quantum phase transition may be observed not only in the ground state mean photon number and the ground state atomic inversion as shown previously, but also in fluctuations of these two quantities in the ground state, especially in the atomic inversion fluctuation.
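
    The entanglement measure named in the record, a von Neumann entropy of a reduced density matrix, can be computed for any bipartite pure state from its Schmidt coefficients. The sketch below does this for a two-qubit Bell state as a stand-in; the record's actual calculation uses the atomic reduced density matrix of the Dicke-model ground state.

    ```python
    import numpy as np

    def entanglement_entropy(psi, dim_a, dim_b):
        """Von Neumann entropy (in bits) of subsystem A for a bipartite pure state psi."""
        m = np.asarray(psi).reshape(dim_a, dim_b)      # state as a dim_a x dim_b matrix
        s = np.linalg.svd(m, compute_uv=False)         # Schmidt coefficients
        p = s ** 2
        p = p[p > 1e-12]
        return float(-np.sum(p * np.log2(p)))

    # Maximally entangled two-qubit state: entropy = 1 bit.
    bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
    print(entanglement_entropy(bell, 2, 2))
    ```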

  11. Avalanches in self-organized critical neural networks: a minimal model for the neural SOC universality class.

    Science.gov (United States)

    Rybarsch, Matthias; Bornholdt, Stefan

    2014-01-01

    The brain keeps its overall dynamics in a corridor of intermediate activity and it has been a long standing question what possible mechanism could achieve this task. Mechanisms from the field of statistical physics have long been suggesting that this homeostasis of brain activity could occur even without a central regulator, via self-organization on the level of neurons and their interactions, alone. Such physical mechanisms from the class of self-organized criticality exhibit characteristic dynamical signatures, similar to seismic activity related to earthquakes. Measurements of cortex rest activity showed first signs of dynamical signatures potentially pointing to self-organized critical dynamics in the brain. Indeed, recent more accurate measurements allowed for a detailed comparison with scaling theory of non-equilibrium critical phenomena, proving the existence of criticality in cortex dynamics. We here compare this new evaluation of cortex activity data to the predictions of the earliest physics spin model of self-organized critical neural networks. We find that the model matches with the recent experimental data and its interpretation in terms of dynamical signatures for criticality in the brain. The combination of signatures for criticality, power law distributions of avalanche sizes and durations, as well as a specific scaling relationship between anomalous exponents, defines a universality class characteristic of the particular critical phenomenon observed in the neural experiments. Thus the model is a candidate for a minimal model of a self-organized critical adaptive network for the universality class of neural criticality. As a prototype model, it provides the background for models that may include more biological details, yet share the same universality class characteristic of the homeostasis of activity in the brain.

  12. Avalanches in self-organized critical neural networks: a minimal model for the neural SOC universality class.

    Directory of Open Access Journals (Sweden)

    Matthias Rybarsch

    The brain keeps its overall dynamics in a corridor of intermediate activity and it has been a long standing question what possible mechanism could achieve this task. Mechanisms from the field of statistical physics have long been suggesting that this homeostasis of brain activity could occur even without a central regulator, via self-organization on the level of neurons and their interactions, alone. Such physical mechanisms from the class of self-organized criticality exhibit characteristic dynamical signatures, similar to seismic activity related to earthquakes. Measurements of cortex rest activity showed first signs of dynamical signatures potentially pointing to self-organized critical dynamics in the brain. Indeed, recent more accurate measurements allowed for a detailed comparison with scaling theory of non-equilibrium critical phenomena, proving the existence of criticality in cortex dynamics. We here compare this new evaluation of cortex activity data to the predictions of the earliest physics spin model of self-organized critical neural networks. We find that the model matches with the recent experimental data and its interpretation in terms of dynamical signatures for criticality in the brain. The combination of signatures for criticality, power law distributions of avalanche sizes and durations, as well as a specific scaling relationship between anomalous exponents, defines a universality class characteristic of the particular critical phenomenon observed in the neural experiments. Thus the model is a candidate for a minimal model of a self-organized critical adaptive network for the universality class of neural criticality. As a prototype model, it provides the background for models that may include more biological details, yet share the same universality class characteristic of the homeostasis of activity in the brain.
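
    The power-law avalanche statistics referred to in these two records can be illustrated with the simplest critical process: a branching process with branching ratio 1, whose avalanche sizes follow the mean-field P(s) ~ s^(-3/2) law. This is a generic stand-in, not the spin model analysed by the authors.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def avalanche_size(branching=1.0, max_size=10_000):
        """Total size of one avalanche of a branching process (critical at branching = 1)."""
        active, size = 1, 0
        while active and size < max_size:
            size += active
            active = rng.poisson(branching * active)
        return size

    sizes = np.array([avalanche_size() for _ in range(10_000)])
    # At criticality P(s) ~ s^(-3/2); off-critical branching ratios instead give
    # an exponential cut-off in the size distribution.
    print(int(np.median(sizes)), int(sizes.max()))
    ```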

  13. Childhood Maltreatment, Shame-Proneness and Self-Criticism in Social Anxiety Disorder: A Sequential Mediational Model.

    Science.gov (United States)

    Shahar, Ben; Doron, Guy; Szepsenwol, Ohad

    2015-01-01

    Previous research has shown a robust link between emotional abuse and neglect with social anxiety symptoms. However, the mechanisms through which these links operate are less clear. We hypothesized a model in which early experiences of abuse and neglect create aversive shame states, internalized into a stable shame-based cognitive-affective schema. Self-criticism is conceptualized as a safety strategy designed to conceal flaws and prevent further experiences of shame. However, self-criticism maintains negative self-perceptions and insecurity in social situations. To provide preliminary, cross-sectional support for this model, a nonclinical community sample of 219 adults from Israel (110 females, mean age = 38.7) completed measures of childhood trauma, shame-proneness, self-criticism and social anxiety symptoms. A sequential mediational model showed that emotional abuse, but not emotional neglect, predicted shame-proneness, which in turn predicted self-criticism, which in turn predicted social anxiety symptoms. These results provide initial evidence supporting the role of shame and self-criticism in the development and maintenance of social anxiety disorder. The clinical implications of these findings are discussed. Previous research has shown that histories of emotional abuse and emotional neglect predict social anxiety symptoms, but the mechanisms that underlie these associations are not clear. Using psycho-evolutionary and emotion-focused perspectives, the findings of the current study suggest that shame and self-criticism play an important role in social anxiety and may mediate the link between emotional abuse and symptoms. These findings also suggest that therapeutic interventions specifically targeting shame and self-criticism should be incorporated into treatments for social anxiety, especially with socially anxious patients with abuse histories. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Model structural uncertainty quantification and hydrologic parameter and prediction error analysis using airborne electromagnetic data

    DEFF Research Database (Denmark)

    Minsley, B. J.; Christensen, Nikolaj Kruse; Christensen, Steen

    Model structure, or the spatial arrangement of subsurface lithological units, is fundamental to the hydrological behavior of Earth systems. Knowledge of geological model structure is critically important in order to make informed hydrological predictions and management decisions. Model structure...... indicator simulation, we produce many realizations of model structure that are consistent with observed datasets and prior knowledge. Given estimates of model structural uncertainty, we incorporate hydrologic observations to evaluate the errors in hydrologic parameter or prediction errors that occur when...... is never perfectly known, however, and incorrect assumptions can be a significant source of error when making model predictions. We describe a systematic approach for quantifying model structural uncertainty that is based on the integration of sparse borehole observations and large-scale airborne...

  15. Critical Casimir force scaling functions of the two-dimensional Ising model for various boundary conditions

    CERN Document Server

    Hobrecht, Hendrik

    2016-01-01

    We present a systematic method to calculate the scaling functions for the critical Casimir force and the according potential of the two-dimensional Ising model with various boundary conditions. Therefore we start with the dimer representation of the corresponding partition function $Z$ on an $L\\times M$ square lattice, wrapped around a torus with aspect ratio $\\rho=L/M$. By assuming periodic boundary conditions and translational invariance in at least one direction, we systematically reduce the problem to a $2\\times2$ transfer matrix representation. For the torus we first reproduce the results by Kaufman and then give a detailed calculation of the scaling functions. Afterwards we present the calculation for the cylinder with open boundary conditions. All scaling functions are given in form of combinations of infinite products and integrals. Our results reproduce the known scaling functions in the limit of thin films $\\rho\\to 0$. Additionally, for the cylinder at criticality our result confirms the predictions...

  16. A Simple Model for Identifying Critical Structures in Atrial Fibrillation

    CERN Document Server

    Christensen, Kim; Peters, Nicholas S

    2014-01-01

    Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. We design a simple model of activation wavefront propagation on a structure mimicking the branching network architecture of heart muscle and show how AF emerges spontaneously as age-related parameters change. We identify regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia. This analytical result allows us to locate the transition in parameter space and highlights that the transition from regular to fibrillatory behaviour is a finite-size effect present in systems of any size. These clinically testable predictions might inform ablation therapies and arrhythmic risk assessment.

  17. Predictability in models of the atmospheric circulation.

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error are. The

  18. Self-organized criticality in a computer network model

    Science.gov (United States)

    Yuan; Ren; Shan

    2000-02-01

    We study the collective behavior of computer network nodes by using a cellular automaton model. The results show that when the load of the network is constant, the throughputs and buffer contents of nodes are power-law distributed in both space and time. Also the feature of 1/f noise appears in the power spectrum of the change of the number of nodes that bear a fixed part of the system load. It can be seen as yet another example of self-organized criticality. Power-law decay in the distribution of buffer contents implies that heavy network congestion occurs with small probability. The temporal power-law distribution for throughput might be a reasonable explanation for the observed self-similarity in computer network traffic.
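
    The 1/f-noise diagnostic mentioned in the record amounts to estimating a power spectrum and checking its low-frequency slope on a log-log scale. The sketch below does this with an FFT on a random placeholder series; a slope near -1 would indicate 1/f behaviour in a real throughput trace.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    signal = rng.standard_normal(4096)                # placeholder throughput time series
    signal -= signal.mean()

    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0)

    band = (freqs > 0) & (freqs < 0.1)                # low-frequency band
    slope = np.polyfit(np.log(freqs[band]), np.log(spectrum[band]), 1)[0]
    print(f"low-frequency spectral slope ~ {slope:.2f}   (~ -1 would indicate 1/f noise)")
    ```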

  19. Critical properties of a dilute O(n) model on the kagome lattice

    NARCIS (Netherlands)

    Li, B.; Guo, W.; Blöte, H.W.J.

    2008-01-01

    A critical dilute O(n) model on the kagome lattice is investigated analytically and numerically. We employ a number of exact equivalences which, in a few steps, link the critical O(n) spin model on the kagome lattice to the exactly solvable critical q-state Potts model on the honeycomb lattice with

  20. Critical, statistical, and thermodynamical properties of lattice models

    Energy Technology Data Exchange (ETDEWEB)

    Varma, Vipin Kerala

    2013-10-15

    In this thesis we investigate zero temperature and low temperature properties - critical, statistical and thermodynamical - of lattice models in the contexts of bosonic cold atom systems, magnetic materials, and non-interacting particles on various lattice geometries. We study quantum phase transitions in the Bose-Hubbard model with higher body interactions, as relevant for optical lattice experiments of strongly interacting bosons, in one and two dimensions; the universality of the Mott insulator to superfluid transition is found to remain unchanged for even large three body interaction strengths. A systematic renormalization procedure is formulated to fully re-sum these higher (three and four) body interactions into the two body terms. In the strongly repulsive limit, we analyse the zero and low temperature physics of interacting hard-core bosons on the kagome lattice at various fillings. Evidence for a disordered phase in the Ising limit of the model is presented; in the strong coupling limit, the transition between the valence bond solid and the superfluid is argued to be first order at the tip of the solid lobe.

  1. Allostasis: a model of predictive regulation.

    Science.gov (United States)

    Sterling, Peter

    2012-04-12

    The premise of the standard regulatory model, "homeostasis", is flawed: the goal of regulation is not to preserve constancy of the internal milieu. Rather, it is to continually adjust the milieu to promote survival and reproduction. Regulatory mechanisms need to be efficient, but homeostasis (error-correction by feedback) is inherently inefficient. Thus, although feedbacks are certainly ubiquitous, they could not possibly serve as the primary regulatory mechanism. A newer model, "allostasis", proposes that efficient regulation requires anticipating needs and preparing to satisfy them before they arise. The advantages: (i) errors are reduced in magnitude and frequency; (ii) response capacities of different components are matched -- to prevent bottlenecks and reduce safety factors; (iii) resources are shared between systems to minimize reserve capacities; (iv) errors are remembered and used to reduce future errors. This regulatory strategy requires a dedicated organ, the brain. The brain tracks multitudinous variables and integrates their values with prior knowledge to predict needs and set priorities. The brain coordinates effectors to mobilize resources from modest bodily stores and enforces a system of flexible trade-offs: from each organ according to its ability, to each organ according to its need. The brain also helps regulate the internal milieu by governing anticipatory behavior. Thus, an animal conserves energy by moving to a warmer place - before it cools, and it conserves salt and water by moving to a cooler one before it sweats. The behavioral strategy requires continuously updating a set of specific "shopping lists" that document the growing need for each key component (warmth, food, salt, water). These appetites funnel into a common pathway that employs a "stick" to drive the organism toward filling the need, plus a "carrot" to relax the organism when the need is satisfied. The stick corresponds broadly to the sense of anxiety, and the carrot broadly to

  2. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  3. A prediction model for assessing residential radon concentration in Switzerland

    NARCIS (Netherlands)

    Hauri, D.D.; Huss, A.; Zimmermann, F.; Kuehni, C.E.; Roosli, M.

    2012-01-01

    Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the

  4. A Multiscale Modeling System: Developments, Applications, and Critical Issues

    Science.gov (United States)

    Tao, Wei-Kuo; Lau, William; Simpson, Joanne; Chern, Jiun-Dar; Atlas, Robert; Khairoutdinov, Marat; Randall, David; Li, Jui-Lin; Waliser, Duane E.; Jiang, Jonathan; Hou, Arthur; Lin, Xin; Peters-Lidard, Christa

    2009-01-01

    The foremost challenge in parameterizing convective clouds and cloud systems in large-scale models is the many coupled dynamical and physical processes that interact over a wide range of scales, from microphysical scales to the synoptic and planetary scales. This makes the comprehension and representation of convective clouds and cloud systems one of the most complex scientific problems in Earth science. During the past decade, the Global Energy and Water Cycle Experiment (GEWEX) Cloud System Study (GCSS) has pioneered the use of single-column models (SCMs) and cloud-resolving models (CRMs) for the evaluation of the cloud and radiation parameterizations in general circulation models (GCMs; e.g., GEWEX Cloud System Science Team 1993). These activities have uncovered many systematic biases in the radiation, cloud and convection parameterizations of GCMs and have led to the development of new schemes (e.g., Zhang 2002; Pincus et al. 2003; Zhang and Wu 2003; Wu et al. 2003; Liang and Wu 2005; Wu and Liang 2005, and others). Comparisons between SCMs and CRMs using the same large-scale forcing derived from field campaigns have demonstrated that CRMs are superior to SCMs in the prediction of temperature and moisture tendencies (e.g., Das et al. 1999; Randall et al. 2003b; Xie et al. 2005).

  5. A Multiscale Modeling System: Developments, Applications, and Critical Issues

    Science.gov (United States)

    Tao, Wei-Kuo; Lau, William; Simpson, Joanne; Chern, Jiun-Dar; Atlas, Robert; Khairoutdinov, Marat; Randall, David; Li, Jui-Lin; Waliser, Duane E.; Jiang, Jonathan; Hou, Arthur

    2009-01-01

    The foremost challenge in parameterizing convective clouds and cloud systems in large-scale models is the many coupled dynamical and physical processes that interact over a wide range of scales, from microphysical scales to the synoptic and planetary scales. This makes the comprehension and representation of convective clouds and cloud systems one of the most complex scientific problems in Earth science. During the past decade, the Global Energy and Water Cycle Experiment (GEWEX) Cloud System Study (GCSS) has pioneered the use of single-column models (SCMs) and cloud-resolving models (CRMs) for the evaluation of the cloud and radiation parameterizations in general circulation models (GCMs; e.g., GEWEX Cloud System Science Team 1993). These activities have uncovered many systematic biases in the radiation, cloud and convection parameterizations of GCMs and have led to the development of new schemes (e.g., Zhang 2002; Pincus et al. 2003; Zhang and Wu 2003; Wu et al. 2003; Liang and Wu 2005; Wu and Liang 2005, and others). Comparisons between SCMs and CRMs using the same large-scale forcing derived from field campaigns have demonstrated that CRMs are superior to SCMs in the prediction of temperature and moisture tendencies (e.g., Das et al. 1999; Randall et al. 2003b; Xie et al. 2005).

  6. Distributional Analysis for Model Predictive Deferrable Load Control

    OpenAIRE

    Chen, Niangjun; Gan, Lingwen; Low, Steven H.; Wierman, Adam

    2014-01-01

    Deferrable load control is essential for handling the uncertainties associated with the increasing penetration of renewable generation. Model predictive control has emerged as an effective approach for deferrable load control, and has received considerable attention. In particular, previous work has analyzed the average-case performance of model predictive deferrable load control. However, to this point, distributional analysis of model predictive deferrable load control has been elusive. In ...

  7. Predicting Critical Thinking Skills of University Students through Metacognitive Self-Regulation Skills and Chemistry Self-Efficacy

    Science.gov (United States)

    Uzuntiryaki-Kondakci, Esen; Capa-Aydin, Yesim

    2013-01-01

    This study aimed at examining the extent to which metacognitive self-regulation and chemistry self-efficacy predicted critical thinking. Three hundred sixty-five university students participated in the study. Data were collected using appropriate dimensions of Motivated Strategies for Learning Questionnaire and College Chemistry Self-Efficacy…

  8. Illustrating the future prediction of performance based on computer code, physical experiments, and critical performance parameter samples

    Energy Technology Data Exchange (ETDEWEB)

    Hamada, Michael S [Los Alamos National Laboratory; Higdon, David M [Los Alamos National Laboratory

    2009-01-01

    In this paper, we present a generic example to illustrate various points about making future predictions of population performance using a biased performance computer code, physical performance data, and critical performance parameter data sampled from the population at various times. We show how the actual performance data help to correct the biased computer code and the impact of uncertainty especially when the prediction is made far from where the available data are taken. We also demonstrate how a Bayesian approach allows both inferences about the unknown parameters and predictions to be made in a consistent framework.

  9. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  10. On hydrological model complexity, its geometrical interpretations and prediction uncertainty

    NARCIS (Netherlands)

    Arkesteijn, E.C.M.M.; Pande, S.

    2013-01-01

    Knowledge of hydrological model complexity can aid selection of an optimal prediction model out of a set of available models. Optimal model selection is formalized as selection of the least complex model out of a subset of models that have lower empirical risk. This may be considered equivalent to

  11. Model-based mask verification on critical 45nm logic masks

    Science.gov (United States)

    Sundermann, F.; Foussadier, F.; Takigawa, T.; Wiley, J.; Vacca, A.; Depre, L.; Chen, G.; Bai, S.; Wang, J.-S.; Howell, R.; Arnoux, V.; Hayano, K.; Narukawa, S.; Kawashima, S.; Mohri, H.; Hayashi, N.; Miyashita, H.; Trouiller, Y.; Robert, F.; Vautrin, F.; Kerrien, G.; Planchot, J.; Martinelli, C.; Di-Maria, J. L.; Farys, V.; Vandewalle, B.; Perraud, L.; Le Denmat, J. C.; Villaret, A.; Gardin, C.; Yesilada, E.; Saied, M.

    2008-05-01

    In the continuous battle to improve critical dimension (CD) uniformity, especially for 45-nanometer (nm) logic advanced products, one important recent advance is the ability to accurately predict the mask CD uniformity contribution to the overall global wafer CD error budget. In most wafer process simulation models, mask error contribution is embedded in the optical and/or resist models. We have separated the mask effects, however, by creating a short-range mask process model (MPM) for each unique mask process and a long-range CD uniformity mask bias map (MBM) for each individual mask. By establishing a mask bias map, we are able to incorporate the mask CD uniformity signature into our modelling simulations and measure the effects on global wafer CD uniformity and hotspots. We also have examined several ways of proving the efficiency of this approach, including the analysis of OPC hot spot signatures with and without the mask bias map (see Figure 1) and by comparing the precision of the model contour prediction to wafer SEM images. In this paper we will show the different steps of mask bias map generation and use for advanced 45nm logic node layers, along with the current results of this new dynamic application to improve hot spot verification through Brion Technologies' model-based mask verification loop.

  12. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurately assessing business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we assume a probabilistic point-of-view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing it against the Support Vector Machines (SVM) and the Logistic Regression (LR). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...

  13. Predictive modeling of dental pain using neural network.

    Science.gov (United States)

    Kim, Eun Yeob; Lim, Kun Ok; Rhee, Hyun Sill

    2009-01-01

    The mouth, as the gateway for ingesting food, is one of the most basic and important parts of the body. In this study, dental pain was predicted with a neural network model. The resulting predictive model of dental pain factors achieved a fit of 80.0%. For people identified by the neural network model as likely to experience dental pain, preventive measures including proper eating habits, education on oral hygiene, and stress release should precede any dental treatment.

  14. Predictive power of the Braden scale for pressure sore risk in adult critical care patients: a comprehensive review.

    Science.gov (United States)

    Cox, Jill

    2012-01-01

    Critical care is designed for managing the sickest patients within our healthcare system. Multiple factors associated with an increased likelihood of pressure ulcer development have been investigated in the critical care population. Nevertheless, there is a lack of consensus regarding which of these factors poses the greatest risk for pressure ulceration. While the Braden scale for pressure sore risk is the most commonly used tool for measuring pressure ulcer risk in the United States, research focusing on the cumulative Braden scale score and subscale scores is lacking in the critical care population. The author conducted a literature review on pressure ulcer risk assessment in the critical care population, including the predictive value of both the total score and the subscale scores. In this review, the Sensory Perception, Mobility, Moisture, and Friction/Shear subscales were found to be associated with an increased likelihood of pressure ulcer development; in contrast, the Activity and Nutrition subscales were not found to predict pressure ulcer development in this population. In order to more precisely quantify risk in the critically ill population, modification of the Braden scale or development of a critical care-specific risk assessment tool may be indicated.

  15. Predicting critical temperatures of iron(II) spin crossover materials: density functional theory plus U approach.

    Science.gov (United States)

    Zhang, Yachao

    2014-12-07

    A first-principles study of critical temperatures (T(c)) of spin crossover (SCO) materials requires accurate description of the strongly correlated 3d electrons as well as much computational effort. This task is still a challenge for the widely used local density or generalized gradient approximations (LDA/GGA) and hybrid functionals. One remedy, termed the density functional theory plus U (DFT+U) approach, introduces a Hubbard U term to deal with the localized electrons at marginal computational cost, while treating the delocalized electrons with LDA/GGA. Here, we employ the DFT+U approach to investigate the T(c) of a pair of iron(II) SCO molecular crystals (α and β phase), where identical constituent molecules are packed in different ways. We first calculate the adiabatic high spin-low spin energy splitting ΔE(HL) and molecular vibrational frequencies in both spin states, then obtain the temperature-dependent enthalpy and entropy changes (ΔH and ΔS), and finally extract T(c) by exploiting the ΔH/T - T and ΔS - T relationships. The results are in agreement with experiment. Analysis of geometries and electronic structures shows that the local ligand field in the α phase is slightly weakened by the H-bonding involving the ligand atoms and the specific crystal packing style. We find that this effect is largely responsible for the difference in T(c) of the two phases. This study shows the applicability of the DFT+U approach for predicting T(c) of SCO materials, and provides a clear insight into the subtle influence of the crystal packing effects on SCO behavior.
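
    The final step described above amounts to finding the temperature at which the Gibbs free-energy difference ΔG(T) = ΔH(T) − T·ΔS(T) changes sign. The Python sketch below illustrates only that step; the tabulated ΔH and ΔS values, grid, and names are illustrative placeholders, not results from the paper.

```python
# Sketch: extract a spin-crossover temperature T_c from temperature-dependent
# enthalpy and entropy differences, assuming T_c is where dG(T) = dH - T*dS = 0.
# The interpolation tables below are illustrative placeholders, not real data.
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

T_grid = np.array([100.0, 150.0, 200.0, 250.0, 300.0])      # K (placeholder)
dH = np.array([14.0, 15.0, 16.0, 17.0, 18.0]) * 1e3         # J/mol (placeholder)
dS = np.array([55.0, 60.0, 65.0, 70.0, 75.0])               # J/(mol K) (placeholder)

dH_of_T = interp1d(T_grid, dH, kind="cubic")
dS_of_T = interp1d(T_grid, dS, kind="cubic")

def dG(T):
    """Gibbs free-energy difference between high-spin and low-spin states."""
    return dH_of_T(T) - T * dS_of_T(T)

T_c = brentq(dG, T_grid[0], T_grid[-1])   # root of dG(T) = 0
print(f"Estimated T_c = {T_c:.1f} K")
```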

  16. Testing a model for the critical degree of saturation at freezing of porous building materials

    DEFF Research Database (Denmark)

    Hansen, Ernst Jan De Place

    1996-01-01

    Frost resistance of porous materials can be characterized by the critical degree of saturation, SCR. An experimental determination of SCR is very laborious and therefore only seldom used when testing frost resistance. A theoretical model for prediction of SCR based on fracture mechanics and phase geometry of two-phase materials has been developed. The degradation is modelled as being caused by different eigenstrains of the pore phase and the solid phase when freezing, leading to stress concentrations and crack propagation. Calculations are based on porosity, pore size distribution, modulus of elasticity, tensile strength, amount of freezable water, thermal expansion coefficients and parameters characterizing the pore structure and its effect on strength, modulus of elasticity and volumetric expansion. For the present, the model assumes non-air-entrained homogeneous materials subjected to freeze...

  17. Prediction of risk and incidence of dry eye in critical patients1

    Science.gov (United States)

    de Araújo, Diego Dias; Almeida, Natália Gherardi; Silva, Priscila Marinho Aleixo; Ribeiro, Nayara Souza; Werli-Alvarenga, Andreza; Chianca, Tânia Couto Machado

    2016-01-01

    Objectives: to estimate the incidence of dry eye, to identify risk factors and to establish a risk prediction model for its development in adult patients admitted to the intensive care unit of a public hospital. Method: concurrent cohort, conducted between March and June, 2014, with 230 patients admitted to an intensive care unit. Data were analyzed by bivariate descriptive statistics, with multivariate survival analysis and Cox regression. Results: 53% of the 230 patients developed dry eye, with a mean onset time of 3.5 days. Independent variables that significantly and concurrently impacted the time for dry eye to occur were: O2 in room air, blinking more than five times per minute (lower risk factors) and presence of vascular disease (higher risk factor). Conclusion: dry eye is a common finding in patients admitted to adult intensive care units, and care for its prevention should be established. PMID:27192415
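
    As a rough illustration of the kind of analysis described (Cox regression for time to dry-eye onset), the sketch below uses the lifelines library with hypothetical column names and made-up rows; it is not the study's data or code.

```python
# Sketch of a Cox proportional-hazards fit for time-to-dry-eye, using the
# lifelines library. Column names and values are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "days_to_event_or_censor": [2, 5, 3, 7, 4, 6, 1, 8],
    "dry_eye_observed":        [1, 0, 1, 0, 1, 1, 1, 0],
    "room_air_o2":             [1, 1, 0, 0, 0, 1, 1, 0],   # O2 in room air
    "blinks_gt5_per_min":      [0, 1, 0, 0, 1, 0, 1, 1],   # >5 blinks per minute
    "vascular_disease":        [1, 0, 1, 1, 0, 1, 0, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_event_or_censor", event_col="dry_eye_observed")
cph.print_summary()                       # hazard ratios per covariate
risk = cph.predict_partial_hazard(df)     # relative risk score per patient
print(risk.head())
```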

  18. Critical behavior of the random-bond Ashkin-Teller model: A Monte Carlo study

    Science.gov (United States)

    Wiseman, Shai; Domany, Eytan

    1995-04-01

    The critical behavior of a bond-disordered Ashkin-Teller model on a square lattice is investigated by intensive Monte Carlo simulations. A duality transformation is used to locate a critical plane of the disordered model. This critical plane corresponds to the line of critical points of the pure model, along which critical exponents vary continuously. Along this line the scaling exponent corresponding to randomness, φ = α/ν, varies continuously and is positive so that the randomness is relevant, and different critical behavior is expected for the disordered model. We use a cluster algorithm for the Monte Carlo simulations based on the Wolff embedding idea, and perform a finite size scaling study of several critical models, extrapolating between the critical bond-disordered Ising and bond-disordered four-state Potts models. The critical behavior of the disordered model is compared with the critical behavior of an anisotropic Ashkin-Teller model, which is used as a reference pure model. We find no essential change in the order parameters' critical exponents with respect to those of the pure model. The divergence of the specific heat C is changed dramatically. Our results favor a logarithmic type divergence at Tc, C ~ ln L for the random-bond Ashkin-Teller and four-state Potts models and C ~ ln ln L for the random-bond Ising model.
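
    The contrast drawn in the last sentence, C ~ ln L versus C ~ ln ln L, can be checked by fitting both forms to specific-heat values measured at Tc for several lattice sizes. The sketch below does exactly that on synthetic placeholder values of C(L); it is not simulation output from the paper.

```python
# Sketch: distinguish C ~ ln L from C ~ ln ln L by comparing least-squares fits
# of the two forms to specific-heat maxima at several lattice sizes L.
# The C values below are synthetic placeholders, not simulation results.
import numpy as np

L = np.array([16, 32, 64, 128, 256], dtype=float)
C = np.array([1.9, 2.4, 2.9, 3.4, 3.9])          # placeholder specific heats

def fit_and_residual(x, y):
    """Least-squares fit y = a*x + b; return (a, b, sum of squared residuals)."""
    A = np.vstack([x, np.ones_like(x)]).T
    (a, b), res, *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b, float(res[0]) if res.size else 0.0

for label, x in [("ln L", np.log(L)), ("ln ln L", np.log(np.log(L)))]:
    a, b, ssr = fit_and_residual(x, C)
    print(f"C ~ a*{label} + b: a={a:.3f}, b={b:.3f}, SSR={ssr:.4f}")
# The form with the smaller residual describes the divergence better.
```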

  19. Critical Differences of Asymmetric Magnetic Reconnection from Standard Models

    Science.gov (United States)

    Nitta, S.; Wada, T.; Fuchida, T.; Kondoh, K.

    2016-09-01

    We have clarified in detail the structure of asymmetric magnetic reconnection as the result of a spontaneous evolutionary process. The asymmetry is imposed as the ratio k of the magnetic field strengths on the two sides of the initial current sheet (CS) in isothermal equilibrium. The MHD simulation is carried out with the HLLD code for long-term temporal evolution at very high spatial resolution. The resultant structure is drastically different from the symmetric case (e.g., the Petschek model) even for slight asymmetry, k = 2. (1) The velocity distribution in the reconnection jet clearly shows a two-layered structure, i.e., a high-speed sub-layer in which the flow is almost field aligned and an acceleration sub-layer. (2) Higher-beta-side (HBS) plasma is caught in a lower-beta-side plasmoid. This suggests a new plasma-mixing process in reconnection events. (3) A new, large, strong fast shock forms in front of the plasmoid in the HBS. This can be a new particle acceleration site in the reconnection system. These critical properties, which have not been reported in previous works, mark substantial departures from the standard models of symmetric magnetic reconnection.

  20. A critical perspective on 1-D modeling of river processes : gravel load and aggradation in lower Fraser River.

    OpenAIRE

    Ferguson, R.; Church, M.

    2009-01-01

    We investigate how well a width-averaged morphodynamic model can simulate gravel transport and aggradation along a highly irregular 38-km reach of lower Fraser River and discuss critical issues in this type of modeling. Bed load equations with plausible parameter values predict a gravel input consistent with direct measurements and a sediment budget. Simulations using spatially varying channel width, and forced by dominant discharge or a 20-year hydrograph, match the observed downstream finin...

  1. Prediction of peptide bonding affinity: kernel methods for nonlinear modeling

    CERN Document Server

    Bergeron, Charles; Sundling, C Matthew; Krein, Michael; Katt, Bill; Sukumar, Nagamani; Breneman, Curt M; Bennett, Kristin P

    2011-01-01

    This paper presents regression models obtained from a process of blind prediction of peptide binding affinity from provided descriptors for several distinct datasets as part of the 2006 Comparative Evaluation of Prediction Algorithms (COEPRA) contest. This paper finds that kernel partial least squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS, and that the incorporation of transferable atom equivalent features improves predictive capability.

  2. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on an interstate highway with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. The paper then suggests that the direction for further development of performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.
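
    As a minimal illustration of the MC approach mentioned above (not the paper's calibrated model), the sketch below propagates a distribution over discrete pavement-condition states with a hypothetical one-year transition matrix.

```python
# Sketch of Markov-chain pavement deterioration: condition states are discrete
# (e.g., faulting severity bins) and a one-year transition matrix is applied
# repeatedly. The matrix and initial distribution are hypothetical placeholders.
import numpy as np

# States: 0 = good, 1 = fair, 2 = poor, 3 = failed (placeholder definitions)
P = np.array([
    [0.85, 0.12, 0.03, 0.00],
    [0.00, 0.80, 0.15, 0.05],
    [0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0, 0.0])    # all sections start in "good"

for year in range(1, 11):
    state = state @ P                      # one-step (one-year) transition
    if year % 5 == 0:
        print(f"Year {year}: {np.round(state, 3)}")
```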

  3. Using a Model of Social Dynamics to Predict Popularity of News

    CERN Document Server

    Lerman, Kristina

    2010-01-01

    Popularity of content in social media is unequally distributed, with some items receiving a disproportionate share of attention from users. Predicting which newly-submitted items will become popular is critically important for both companies that host social media sites and their users. Accurate and timely prediction would enable the companies to maximize revenue through differential pricing for access to content or ad placement. Prediction would also give consumers an important tool for filtering the ever-growing amount of content. Predicting popularity of content in social media, however, is challenging due to the complex interactions among content quality, how the social media site chooses to highlight content, and influence among users. While these factors make it difficult to predict popularity a priori, we show that stochastic models of user behavior on these sites allow predicting popularity based on early user reactions to new content. By incorporating aspects of the web site design, such mode...

  4. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a "model supplement term" when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapon's response. In the validation analysis it is indicated that the model tends to "exaggerate" the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model

  5. Prediction using patient comparison vs. modeling: a case study for mortality prediction.

    Science.gov (United States)

    Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter

    2016-08-01

    Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs makes the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.
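
    The two approaches contrasted above can be sketched as a parametric model (logistic regression) versus a patient-similarity predictor (k-nearest neighbours), each scored by AUC. The data below are synthetic, and the printed AUCs are unrelated to the MIMIC-II figures quoted in the abstract.

```python
# Sketch: compare a "modeling" approach (logistic regression on extracted
# features) with a "patient similarity" approach (k-nearest neighbours) by AUC.
# Synthetic data only; the numbers are unrelated to the MIMIC-II results.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
similar = KNeighborsClassifier(n_neighbors=25).fit(X_tr, y_tr)

for name, clf in [("logistic regression", model), ("patient similarity (kNN)", similar)]:
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```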

  6. Aerobic performance prediction in canoeing from the application of different mathematical models of critical velocity

    Directory of Open Access Journals (Sweden)

    Fábio Yuzo Nakamura

    2008-10-01

    .03, while the correlation between Vcrit-3parameters and V6000m was not significant. The outcomes of this study suggest that the 2-parameter critical velocity model provides Vcrit values more suitable for the aerobic assessment of canoeists. The Vcrit-3parameters underestimates the velocity that can be sustained for approximately 30 min and has low predictive capacity for aerobic performance. Thus, evidence was obtained for the validity of the original 2-parameter critical velocity model proposed by Monod and Scherrer.
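
    For reference, the 2-parameter critical velocity model mentioned above treats the distance-time relation of exhaustive trials as linear, d = ADC + Vcrit·t, so Vcrit is the slope of a straight-line fit. The sketch below shows that fit on made-up trial data (ADC here denotes the anaerobic distance capacity intercept).

```python
# Sketch of the 2-parameter critical velocity model: distance = ADC + Vcrit * time,
# fitted to exhaustive trials. Times and distances below are made-up illustrations.
import numpy as np

t_lim = np.array([120.0, 240.0, 480.0, 900.0])       # s, time to exhaustion
d_lim = np.array([510.0, 940.0, 1790.0, 3260.0])     # m, distance covered

Vcrit, ADC = np.polyfit(t_lim, d_lim, 1)              # slope, intercept
print(f"Critical velocity = {Vcrit:.2f} m/s, anaerobic distance capacity = {ADC:.0f} m")
```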

  7. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.;

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...... problem. Moreover, to reduce the computation time and improve the controller's performance, a fuzzy predictive filter is introduced. With the purpose of testing the developed EMPC, a simulation controlling the temperature levels of an intelligent office building (PowerFlexHouse), with and without fuzzy...

  8. Predictive modeling and reducing cyclic variability in autoignition engines

    Energy Technology Data Exchange (ETDEWEB)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  9. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
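
    A toy version of the Bayesian step described above (far simpler than the transport-code machinery the project uses): score candidate holdup masses by a Poisson likelihood of a measured count and normalize with Bayes' theorem. The detector-response constant, prior, and measurement are hypothetical.

```python
# Toy Bayesian inversion for holdup mass: a forward model maps mass -> expected
# detector counts; Bayes' theorem gives a posterior over candidate masses.
# The response constant, prior, and measured count are hypothetical placeholders.
import numpy as np
from scipy.stats import poisson

masses = np.linspace(1.0, 200.0, 400)        # candidate holdup masses (g)
counts_per_gram = 3.0                        # hypothetical detector response
measured_counts = 240                        # hypothetical measurement

prior = np.ones_like(masses) / masses.size   # flat prior over the grid
likelihood = poisson.pmf(measured_counts, mu=counts_per_gram * masses)
posterior = likelihood * prior
posterior /= posterior.sum()                 # Bayes' theorem (normalization)

mean = np.sum(masses * posterior)
print(f"Posterior mean mass = {mean:.1f} g")
```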

  10. Improving Critical Thinking Skills of College Students through RMS Model for Learning Basic Concepts in Science

    Science.gov (United States)

    Muhlisin, Ahmad; Susilo, Herawati; Amin, Mohamad; Rohman, Fatchur

    2016-01-01

    The purposes of this study were to: 1) examine the effect of the RMS learning model on critical thinking skills; 2) examine the effect of different academic abilities on critical thinking skills; and 3) examine the effect of the interaction between the RMS learning model and academic ability on critical thinking skills. The research…

  12. Predictive analytics technology review: Similarity-based modeling and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, James; Doan, Don; Gandhi, Devang; Nieman, Bill

    2010-09-15

    Over 11 years ago, SmartSignal introduced Predictive Analytics for eliminating equipment failures, using its patented SBM technology. SmartSignal continues to lead and dominate the market and, in 2010, went one step further and introduced Predictive Diagnostics. Now, SmartSignal is combining Predictive Diagnostics with RCM methodology and industry expertise. FMEA logic reengineers maintenance work management, eliminates unneeded inspections, and focuses efforts on the real issues. This integrated solution significantly lowers maintenance costs, protects against critical asset failures, improves commercial availability, and reduces work orders by 20-40%.

  13. Error-likelihood prediction in the medial frontal cortex: A critical evaluation

    NARCIS (Netherlands)

    Nieuwenhuis, S.; Schweizer, T. S.; Mars, R. B.; Botvinick, M. M.; Hajcak, G.

    2007-01-01

    A recent study has proposed that posterior regions of the medial frontal cortex (pMFC) learn to predict the likelihood of errors occurring in a given task context. A key prediction of the error-likelihood (EL) hypothesis is that the pMFC should exhibit enhanced activity to cues that are predictive of h

  14. Life Prediction of Atmospheric Plasma-Sprayed Thermal Barrier Coatings Using Temperature-Dependent Model Parameters

    Science.gov (United States)

    Zhang, B.; Chen, Kuiying; Baddour, N.; Patnaik, P. C.

    2017-06-01

    The failure analysis and life prediction of atmospheric plasma-sprayed thermal barrier coatings (APS-TBCs) were carried out for a thermal cyclic process. A residual stress model for the top coat of the APS-TBC was proposed and then applied to life prediction. This residual stress model shows an inversion characteristic versus the thickness of the thermally grown oxide. The capability of the life model was demonstrated using temperature-dependent model parameters. Using existing life data, a comparison of fitting approaches for the life model parameters was performed. A larger discrepancy was found for lives predicted using parameters fitted linearly versus temperature than for those using nonlinear fitting parameters. A method for integrating the residual stress was proposed using the critical time of stress inversion. The role of the residual stresses distributed in each individual coating layer was explored, and their influence on the coating's delamination was analyzed.

  15. Intelligent predictive model of ventilating capacity of imperial smelt furnace

    Institute of Scientific and Technical Information of China (English)

    唐朝晖; 胡燕瑜; 桂卫华; 吴敏

    2003-01-01

    In order to determine the ventilating capacity of an imperial smelt furnace (ISF) and increase the output of lead, an intelligent modeling method based on gray theory and artificial neural networks (ANN) is proposed, in which the weight values in the integrated model can be adjusted automatically. An intelligent predictive model of the ventilating capacity of the ISF is established and analyzed with this method. The simulation results and industrial applications demonstrate that the predictive model is close to the real plant: the relative prediction error is 0.72%, which is 50% less than that of the single model, leading to a notable increase in the output of lead.
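
    The weighting idea in this abstract, combining sub-model outputs with automatically adjusted weights, can be sketched as a least-squares combination of two predictors. Everything below, including the numbers, is illustrative and is not the paper's weight-adjustment scheme.

```python
# Sketch: combine two sub-model predictions (e.g., a grey model and an ANN)
# with weights chosen by least squares against observed values. Illustrative
# numbers only; this is not the paper's weight-adjustment scheme.
import numpy as np

observed  = np.array([10.2, 11.0, 9.8, 10.6, 11.3])
pred_grey = np.array([10.0, 10.7, 9.5, 10.9, 11.0])
pred_ann  = np.array([10.5, 11.2, 10.1, 10.4, 11.5])

A = np.vstack([pred_grey, pred_ann]).T
weights, *_ = np.linalg.lstsq(A, observed, rcond=None)   # least-squares weights
combined = A @ weights

rel_err = np.abs(combined - observed) / observed * 100
print("weights:", np.round(weights, 3))
print(f"mean relative error of combined model: {rel_err.mean():.2f}%")
```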

  16. A Prediction Model of the Capillary Pressure J-Function

    Science.gov (United States)

    Xu, W. S.; Luo, P. Y.; Sun, L.; Lin, N.

    2016-01-01

    The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model for it is presented based on a capillary pressure model, and the J-function prediction model is a power function instead of an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and more representative results. PMID:27603701
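
    A minimal sketch of the kind of fit implied above: a power law J(Sw) = a·Sw^b fitted with SciPy's curve_fit. The saturation and J values are synthetic placeholders, not measured data.

```python
# Sketch: fit the capillary-pressure J-function to a power law J(Sw) = a * Sw**b.
# The (Sw, J) pairs below are synthetic placeholders, not measured data.
import numpy as np
from scipy.optimize import curve_fit

Sw = np.array([0.25, 0.35, 0.45, 0.60, 0.75, 0.90])   # wetting-phase saturation
J  = np.array([1.90, 1.20, 0.85, 0.55, 0.40, 0.30])   # placeholder J values

def j_model(s, a, b):
    return a * s**b

(a, b), _ = curve_fit(j_model, Sw, J, p0=(1.0, -1.0))
print(f"J(Sw) = {a:.3f} * Sw^{b:.3f}")
```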

  17. Adaptation of Predictive Models to PDA Hand-Held Devices

    Directory of Open Access Journals (Sweden)

    Lin, Edward J

    2008-01-01

    Prediction models using multiple logistic regression are appearing with increasing frequency in the medical literature. Problems associated with these models include the complexity of computations when applied in their pure form and lack of availability at the bedside. Personal digital assistant (PDA) hand-held devices equipped with spreadsheet software offer the clinician a readily available and easily applied means of using predictive models at the bedside. The purposes of this article are to briefly review regression as a means of creating predictive models and to describe a method of choosing and adapting logistic regression models to emergency department (ED) clinical practice.
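
    The spreadsheet adaptation described above reduces to evaluating the logistic function over a published model's coefficients. The sketch below shows that calculation with hypothetical coefficients and predictor values; it is not a validated ED model.

```python
# Sketch: evaluate a logistic-regression model the way a spreadsheet would:
# probability = 1 / (1 + exp(-(b0 + sum(bi * xi)))). The coefficients and
# predictor values are hypothetical placeholders, not a validated ED model.
import math

intercept = -3.2                                   # hypothetical b0
coefs = {"age_decades": 0.45, "sbp_lt90": 1.10, "o2sat_lt92": 0.80}
patient = {"age_decades": 7.2, "sbp_lt90": 1, "o2sat_lt92": 0}

logit = intercept + sum(coefs[k] * patient[k] for k in coefs)
probability = 1.0 / (1.0 + math.exp(-logit))
print(f"Predicted probability = {probability:.2f}")
```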

  18. A model to predict the power output from wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risø National Lab., Roskilde (Denmark)]

    1997-12-31

    This paper describes a model that can predict the power output from wind farms. To give examples of input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WAsP (Wind Atlas Analysis and Application Program). The actual wind farm production is calculated using the Risø PARK model. Because of the preliminary nature of the results, they will not be given. However, similar results from Europe will be given.

  19. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of new technologies
