WorldWideScience

Sample records for testing model predictions

  1. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections, and the Duflo-Zuker mass formula. If the predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more challenging. The Duflo-Zuker mass formula emerges as a powerful predictive tool.
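The liquid drop model mentioned above can be sketched numerically. The following is a minimal semi-empirical (Bethe-Weizsäcker) binding-energy calculator; the coefficients are common textbook values, not the fitted parameter sets used by the models in this record.

```python
# Semi-empirical (liquid drop) binding energy in MeV for a nucleus
# with Z protons and mass number A. Coefficients are common textbook
# values, not the fitted sets used by the models in this record.
def binding_energy(Z, A):
    a_v, a_s, a_c, a_a, a_p = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    E = (a_v * A
         - a_s * A ** (2.0 / 3.0)
         - a_c * Z * (Z - 1) / A ** (1.0 / 3.0)
         - a_a * (A - 2 * Z) ** 2 / A)
    if Z % 2 == 0 and N % 2 == 0:      # even-even nuclei: pairing bonus
        E += a_p / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:    # odd-odd nuclei: pairing penalty
        E -= a_p / A ** 0.5
    return E

# 56Fe is experimentally bound by about 8.79 MeV per nucleon
print(binding_energy(26, 56) / 56)
```

Shell corrections, as in the second model analyzed, would be added on top of this smooth trend.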

  2. Test of 1-D transport models, and their predictions for ITER

    International Nuclear Information System (INIS)

    Mikkelsen, D.; Bateman, G.; Boucher, D.

    2001-01-01

A number of proposed tokamak thermal transport models are tested by comparing their predictions with measurements from several tokamaks. The necessary data have been provided for a total of 75 discharges from C-Mod, DIII-D, JET, JT-60U, T10, and TFTR. A standard prediction methodology has been developed, and three codes have been benchmarked; these 'standard' codes have been relied on for testing most of the transport models. While a wide range of physical transport processes has been tested, no single model has emerged as clearly superior to all competitors for simulating H-mode discharges. In order to winnow the field, further tests of the effect of sheared flows and of the 'stiffness' of transport are planned. Several of the models have been used to predict ITER performance, with widely varying results. With some transport models ITER's predicted fusion power depends strongly on the 'pedestal' temperature, but ∼1 GW (Q=10) is predicted for most models if the pedestal temperature is at least 4 keV. (author)

  3. Tests of 1-D transport models, and their predictions for ITER

    International Nuclear Information System (INIS)

    Mikkelsen, D.R.; Bateman, G.; Boucher, D.

    1999-01-01

A number of proposed tokamak thermal transport models are tested by comparing their predictions with measurements from several tokamaks. The necessary data have been provided for a total of 75 discharges from C-Mod, DIII-D, JET, JT-60U, T10, and TFTR. A standard prediction methodology has been developed, and three codes have been benchmarked; these 'standard' codes have been relied on for testing most of the transport models. While a wide range of physical transport processes has been tested, no single model has emerged as clearly superior to all competitors for simulating H-mode discharges. In order to winnow the field, further tests of the effect of sheared flows and of the 'stiffness' of transport are planned. Several of the models have been used to predict ITER performance, with widely varying results. With some transport models ITER's predicted fusion power depends strongly on the 'pedestal' temperature, but ∼1 GW (Q=10) is predicted for most models if the pedestal temperature is at least 4 keV. (author)

  4. Towards building a neural network model for predicting pile static load test curves

    Directory of Open Access Journals (Sweden)

    Alzo’ubi A. K.

    2018-01-01

In the United Arab Emirates, Continuous Flight Auger piles are the most widely used type of deep foundation. To test pile behaviour, the Static Load Test is routinely conducted in the field by increasing the dead load while monitoring the displacement. Although the test is reliable, it is expensive to conduct. In the UAE it is usually performed to verify pile capacity and displacement as the load increases and decreases over two cycles. In this paper we use the Artificial Neural Network approach to build a model that can predict a complete Static Load Test. We show that by integrating the pile configuration, soil properties, and ground water table in one artificial neural network model, the Static Load Test can be predicted with confidence. Based on this approach, the model is able to predict the entire pile load test from start to end. The suggested approach is an excellent tool to reduce the cost associated with such expensive tests and to predict a pile's performance ahead of the actual test.
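The record above does not give the network's architecture or inputs; as an illustration only, a minimal one-hidden-layer regression network trained by plain gradient descent on a synthetic load-settlement curve might look like this (all data, dimensions, and hyperparameters here are invented):

```python
import numpy as np

# Minimal one-hidden-layer network fit to a synthetic load-settlement
# curve. The paper's actual inputs (pile configuration, soil
# properties, water table) and architecture are not specified here.
rng = np.random.default_rng(0)
load = np.linspace(0, 1, 50).reshape(-1, 1)             # normalised load
disp = load ** 2 + 0.02 * rng.standard_normal((50, 1))  # synthetic settlement

W1 = rng.standard_normal((1, 8)) * 0.5; b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5; b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.2
for _ in range(5000):                     # plain batch gradient descent
    h, pred = forward(load)
    err = pred - disp
    gW2 = h.T @ err / len(load); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
    gW1 = load.T @ dh / len(load); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((forward(load)[1] - disp) ** 2).mean())
```

A production model would of course use a proper training/validation split and the real field inputs listed in the abstract.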

  5. Testing Predictive Models of Technology Integration in Mexico and the United States

    Science.gov (United States)

    Velazquez, Cesareo Morales

    2008-01-01

    Data from Mexico City, Mexico (N = 978) and from Texas, USA (N = 932) were used to test the predictive validity of the teacher professional development component of the Will, Skill, Tool Model of Technology Integration in a cross-cultural context. Structural equation modeling (SEM) was used to test the model. Analyses of these data yielded…

  6. Testing process predictions of models of risky choice: a quantitative model comparison approach

    Science.gov (United States)

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472

  7. Testing Process Predictions of Models of Risky Choice: A Quantitative Model Comparison Approach

    Directory of Open Access Journals (Sweden)

    Thorsten ePachur

    2013-09-01

This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or nonlinear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter, Gigerenzer, & Hertwig, 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called similarity. In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies.
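The priority heuristic discussed in these two records has a simple algorithmic form. The sketch below implements the published rules for choices between simple gain gambles: compare minimum gains, then the probabilities of the minimum gains, then maximum gains, stopping as soon as a difference reaches the aspiration level (one tenth of the maximum gain for outcomes, 0.1 for probabilities). Extensions to losses and to gambles with more outcomes are omitted.

```python
# Priority heuristic for two simple gain gambles, each given as a list
# of (outcome, probability) pairs, after Brandstätter et al. (2006).
# Reasons are examined in a fixed order; search stops at the first
# reason whose difference reaches the aspiration level.
def priority_heuristic(a, b):
    """Return 0 to choose gamble a, 1 to choose gamble b."""
    min_a, pa = min(a)                 # minimum gain and its probability
    min_b, pb = min(b)
    max_a, max_b = max(a)[0], max(b)[0]
    aspiration = 0.1 * max(max_a, max_b)

    if abs(min_a - min_b) >= aspiration:   # reason 1: minimum gains
        return 0 if min_a > min_b else 1
    if abs(pa - pb) >= 0.1:                # reason 2: p(minimum gain)
        return 0 if pa < pb else 1
    return 0 if max_a > max_b else 1       # reason 3: maximum gains

# classic certainty-effect pair: the heuristic picks the sure 3000
# because its minimum gain is far higher
choice = priority_heuristic([(0, 0.2), (4000, 0.8)], [(3000, 1.0)])
```

Reason order is what generates the heuristic's "direction of search" predictions tested in Experiment 1.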

  8. Should we assess climate model predictions in light of severe tests?

    Science.gov (United States)

    Katzav, Joel

    2011-06-01

According to Austro-British philosopher Karl Popper, a system of theoretical claims is scientific only if it is methodologically falsifiable, i.e., only if systematic attempts to falsify or severely test the system are being carried out [Popper, 2005, pp. 20, 62]. He holds that a test of a theoretical system is severe if and only if it is a test of the applicability of the system to a case in which the system's failure is likely in light of background knowledge, i.e., in light of scientific assumptions other than those of the system being tested [Popper, 2002, p. 150]. Popper counts the 1919 tests of general relativity's then-unlikely predictions of the deflection of light in the Sun's gravitational field as severe. An implication of Popper's above condition for being a scientific theoretical system is the injunction to assess theoretical systems in light of how well they have withstood severe testing. Applying this injunction to assessing the quality of climate model predictions (CMPs), including climate model projections, would involve assigning a quality to each CMP as a function of how well it has withstood severe tests allowed by its implications for past, present, and near-future climate or, alternatively, as a function of how well the models that generated the CMP have withstood severe tests of their suitability for generating the CMP.

  9. PREDICTABILITY OF FINANCIAL CRISES: TESTING K.R.L. MODEL IN THE CASE OF TURKEY

    Directory of Open Access Journals (Sweden)

    Zeynep KARACOR

    2012-06-01

The aim of this study is to test the predictability of the 2007 global economic crisis as it hit Turkey, using Turkish macroeconomic data. The K.R.L. model is used to test predictability. By analyzing various leading early-warning indicators, the model's success in forecasting crises is assessed. The findings do not support the K.R.L. model; possible reasons for this are stated in the article.
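The article does not spell out the K.R.L. model's mechanics; assuming it follows the familiar signals approach to early-warning indicators, the core computation, an indicator's noise-to-signal ratio, can be sketched as follows (the 24-month window and the 2x2 bookkeeping are the conventional choices, not taken from this article):

```python
import numpy as np

# Signals-approach sketch: an indicator issues a signal when it crosses
# its threshold; a signal is "good" if a crisis occurs within the
# following window. The noise-to-signal ratio is [B/(B+D)] / [A/(A+C)]
# with the usual 2x2 counts: A good signals, B false alarms,
# C missed crises, D correct silences.
def noise_to_signal(indicator, crisis, threshold, window=24):
    n = len(indicator)
    signal = indicator > threshold
    # crisis_ahead[t]: does a crisis occur in months (t, t + window]?
    crisis_ahead = np.array(
        [crisis[t + 1:t + 1 + window].any() for t in range(n)])
    A = np.sum(signal & crisis_ahead)
    B = np.sum(signal & ~crisis_ahead)
    C = np.sum(~signal & crisis_ahead)
    D = np.sum(~signal & ~crisis_ahead)
    return (B / (B + D)) / (A / (A + C))
```

Indicators with a ratio well below one are the useful leading indicators; a ratio near or above one means the indicator is mostly noise.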

  10. Using the Integrative Model of Behavioral Prediction to Understand College Students' STI Testing Beliefs, Intentions, and Behaviors.

    Science.gov (United States)

    Wombacher, Kevin; Dai, Minhao; Matig, Jacob J; Harrington, Nancy Grant

    2018-03-22

To identify salient behavioral determinants of STI testing among college students by testing a model based on the integrative model of behavioral prediction (IMBP). Participants were 265 undergraduate students from a large university in the Southeastern US. Formative and survey research were used to test an IMBP-based model that explores the relationships between determinants and STI testing intention and behavior. Results of path analyses supported a model in which attitudinal beliefs predicted intention and intention predicted behavior. Normative beliefs and behavioral control beliefs were not significant in the model; however, select individual normative and control beliefs were significantly correlated with intention and behavior. Attitudinal beliefs are the strongest predictor of STI testing intention and behavior. Future efforts to increase STI testing rates should identify and target salient attitudinal beliefs.

  11. Effects of Test Conditions on APA Rutting and Prediction Modeling for Asphalt Mixtures

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-01-01

APA rutting tests were conducted on six asphalt mixtures under air-dry and immersed conditions. The influences of test conditions, including load, temperature, air voids, and moisture, on APA rutting depth were analyzed using the grey correlation method, and an APA rutting depth prediction model was established. Results show that the modified asphalt mixtures have larger air-dry-to-immersed rutting depth ratios, indicating that they have better anti-rutting properties and water stability than the matrix asphalt mixtures. The grey correlation degrees of temperature, load, air voids, and immersion on APA rutting depth decrease in that order, meaning that temperature is the most significant influencing factor. The proposed indoor APA rutting prediction model has good accuracy: the correlation coefficient between predicted and measured rutting depths is 96.3%.

  12. Testing In College Admissions: An Alternative to the Traditional Predictive Model.

    Science.gov (United States)

    Lunneborg, Clifford E.

    1982-01-01

    A decision-making or utility theory model (which deals effectively with affirmative action goals and allows standardized tests to be placed in the service of those goals) is discussed as an alternative to traditional predictive admissions. (Author/PN)

  13. Models of expected returns on the brazilian market: Empirical tests using predictive methodology

    Directory of Open Access Journals (Sweden)

    Adriano Mussa

    2009-01-01

Predictive methodologies for testing expected-return models are widely used in the international academic literature. However, these methods have not been applied in Brazil in a systematic way; empirical studies using Brazilian stock market data generally carry out only the first step of these methodologies. The purpose of this article was to test and compare the CAPM, the 3-factor model, and the 4-factor model using a predictive methodology with two steps, time-series and cross-sectional regressions, with standard errors obtained by the technique of Fama and MacBeth (1973). The results indicated the superiority of the 4-factor model over the 3-factor model, and of the 3-factor model over the CAPM, but none of the tested models was sufficient to explain Brazilian stock returns. Contrary to some empirical evidence that does not use predictive methodology, the size and momentum effects seem not to exist in the Brazilian capital markets, but there is evidence of the value effect and of the relevance of the market factor in explaining expected returns. These findings raise some questions, mainly because of the novelty of the methodology in the local market and the fact that this subject is still incipient and polemical in the Brazilian academic environment.
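The two-step methodology can be sketched generically. This is the standard Fama-MacBeth (1973) second stage, not the article's exact specification: a cross-sectional OLS regression of returns on factor loadings in each period, followed by time-series averaging of the estimated premia.

```python
import numpy as np

# Fama-MacBeth (1973) second stage: run a cross-sectional OLS each
# period, then average the coefficients over time. The standard error
# of each premium is the time-series std of the estimates / sqrt(T).
def fama_macbeth(returns, betas):
    """returns: (T, N) period-by-asset returns; betas: (N, K) loadings."""
    T, N = returns.shape
    X = np.column_stack([np.ones(N), betas])       # intercept + K factors
    gammas = np.array([np.linalg.lstsq(X, r, rcond=None)[0]
                       for r in returns])          # (T, K + 1)
    premia = gammas.mean(axis=0)
    se = gammas.std(axis=0, ddof=1) / np.sqrt(T)
    return premia, se
```

In the first step (omitted here) the loadings themselves are estimated from time-series regressions of each asset's returns on the factor returns.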

  14. Testing the Predictive Validity of the Hendrich II Fall Risk Model.

    Science.gov (United States)

    Jung, Hyesil; Park, Hyeoun-Ae

    2018-03-01

Cumulative data on patient fall risk have been compiled in electronic medical records systems, making it possible to test the validity of fall-risk assessment tools using data collected between admission and the occurrence of a fall. Hendrich II Fall Risk Model scores assessed at three time points during hospital stays were extracted and used to test predictive validity: (a) upon admission, (b) the maximum fall-risk score between admission and the fall or discharge, and (c) immediately before the fall or discharge. Predictive validity was examined using seven predictive indicators. In addition, logistic regression analysis was used to identify factors that significantly affect the occurrence of a fall. Among the time points, the maximum fall-risk score assessed between admission and the fall or discharge showed the best predictive performance. Confusion or disorientation and a poor ability to rise from a sitting position were significant risk factors for a fall.
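The abstract does not enumerate its seven predictive indicators; a typical set computed from confusion-matrix counts (true/false positives and negatives against observed falls) might be:

```python
# Common predictive-validity indicators from confusion-matrix counts.
# The article does not list its seven indicators, so this particular
# set is illustrative.
def predictive_indicators(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "youden": sens + spec - 1,
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

Comparing these indicators across the three assessment time points is what identifies the maximum in-stay score as the best performer.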

  15. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  16. Probabilistic Modeling of Updating Epistemic Uncertainty In Pile Capacity Prediction With a Single Failure Test Result

    Directory of Open Access Journals (Sweden)

    Indra Djati Sidi

    2017-12-01

The model error N has been introduced to denote the discrepancy between the measured and predicted capacity of a pile foundation. This model error is recognized as epistemic uncertainty in pile capacity prediction. The statistics of N have been evaluated from data gathered at various sites and may be considered only a general error trend in capacity prediction, providing crude estimates of the model error in the absence of more specific data from the site. The result of even a single load test to failure should provide direct evidence of the pile capacity at a given site. Bayes' theorem has been used as a rational basis for combining new data with previous data to revise assessments of uncertainty and reliability. This study is devoted to the development of procedures for updating the model error N, and subsequently the predicted pile capacity, with the results of a single failure test.
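Assuming, as is common in this literature, that ln N is treated as normal with a known observation variance, the Bayesian update with a single load-test result reduces to a conjugate normal update. The prior statistics and observation variance below are illustrative, not taken from the article:

```python
import numpy as np

# Conjugate Bayesian update of the model error N = measured/predicted
# capacity, assuming ln N is normal. Prior statistics are illustrative;
# the observation (within-site) variance is treated as known.
def update_model_error(prior_mean_lnN, prior_var_lnN, obs_lnN, obs_var):
    post_var = 1.0 / (1.0 / prior_var_lnN + 1.0 / obs_var)
    post_mean = post_var * (prior_mean_lnN / prior_var_lnN
                            + obs_lnN / obs_var)
    return post_mean, post_var

# prior: N centred on 1.0 with roughly 30% dispersion; a single load
# test measures a capacity 20% above the prediction
m, v = update_model_error(0.0, 0.3 ** 2, np.log(1.2), 0.2 ** 2)
```

The posterior mean of ln N lands between the prior mean and the observation, and the posterior variance is always smaller than the prior variance, which is exactly the "updating" the abstract describes.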

  17. On the use of uncertainty analyses to test hypotheses regarding deterministic model predictions of environmental processes

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Bittner, E.A.; Essington, E.H.

    1995-01-01

This paper illustrates the use of Monte Carlo parameter uncertainty and sensitivity analyses to test hypotheses regarding predictions of deterministic models of environmental transport, dose, risk and other phenomena. The methodology is illustrated by testing whether ²³⁸Pu is transferred more readily than ²³⁹⁺²⁴⁰Pu from the gastrointestinal (GI) tract of cattle to their tissues (muscle, liver and blood). This illustration is based on a study wherein beef cattle grazed for up to 1064 days on a fenced plutonium (Pu)-contaminated arid site in Area 13 near the Nevada Test Site in the United States. Periodically, cattle were sacrificed and their tissues analyzed for Pu and other radionuclides. Conditional sensitivity analyses of the model predictions were also conducted. These analyses indicated that Pu cattle tissue concentrations had the largest impact of any model parameter on the pdf of predicted Pu fractional transfers. Issues that arise in conducting uncertainty and sensitivity analyses of deterministic models are discussed. (author)
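A Monte Carlo uncertainty and sensitivity analysis of a deterministic model can be sketched with a toy transfer model; the model form and parameter distributions below are illustrative, not the study's:

```python
import numpy as np

# Monte Carlo uncertainty/sensitivity sketch for a toy transfer model:
# tissue concentration = intake * transfer_fraction / clearance.
# Parameter distributions are illustrative only.
rng = np.random.default_rng(1)
n = 10_000
intake = rng.lognormal(mean=0.0, sigma=0.5, size=n)
transfer = rng.lognormal(mean=-6.0, sigma=1.0, size=n)   # widest spread
clearance = rng.lognormal(mean=0.0, sigma=0.2, size=n)
output = intake * transfer / clearance                    # model output pdf

def rank_corr(x, y):
    """Spearman rank correlation via double argsort ranks."""
    rx, ry = x.argsort().argsort(), y.argsort().argsort()
    return np.corrcoef(rx, ry)[0, 1]

sens = {name: rank_corr(p, output)
        for name, p in [("intake", intake), ("transfer", transfer),
                        ("clearance", clearance)]}
```

The output sample approximates the pdf of the prediction, and the rank correlations rank parameters by influence; here the transfer fraction, having the widest distribution, should dominate.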

  18. A model to predict multivessel coronary artery disease from the exercise thallium-201 stress test

    International Nuclear Information System (INIS)

    Pollock, S.G.; Abbott, R.D.; Boucher, C.A.; Watson, D.D.; Kaul, S.

    1991-01-01

The aim of this study was to (1) determine whether nonimaging variables add to the diagnostic information available from exercise thallium-201 images for the detection of multivessel coronary artery disease and (2) develop a model based on the exercise thallium-201 stress test to predict the presence of multivessel disease. The study populations included 383 patients referred to the University of Virginia and 325 patients referred to the Massachusetts General Hospital for evaluation of chest pain. All patients underwent both cardiac catheterization and exercise thallium-201 stress testing between 1978 and 1981. In the University of Virginia cohort, at each level of thallium-201 abnormality (no defects, one defect, more than one defect), ST depression and patient age added significantly in the detection of multivessel disease. Logistic regression analysis using data from these patients identified three independent predictors of multivessel disease: initial thallium-201 defects, ST depression, and age. A model was developed to predict multivessel disease based on these variables. As might be expected, the risk of multivessel disease predicted by the model was similar to that actually observed in the University of Virginia population. More importantly, however, the model was accurate in predicting the occurrence of multivessel disease in the unrelated population studied at the Massachusetts General Hospital. It is, therefore, concluded that (1) nonimaging variables (age and exercise-induced ST depression) add independent information to thallium-201 imaging data in the detection of multivessel disease; and (2) a model has been developed based on the exercise thallium-201 stress test that can accurately predict the probability of multivessel disease in other populations.

  19. Prediction of software operational reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1995-01-01

    A number of software reliability models have been developed to estimate and to predict software reliability. However, there are no established standard models to quantify software reliability. Most models estimate the quality of software in reliability figures such as remaining faults, failure rate, or mean time to next failure at the testing phase, and they consider them ultimate indicators of software reliability. Experience shows that there is a large gap between predicted reliability during development and reliability measured during operation, which means that predicted reliability, or so-called test reliability, is not operational reliability. Customers prefer operational reliability to test reliability. In this study, we propose a method that predicts operational reliability rather than test reliability by introducing the testing environment factor that quantifies the changes in environments

  20. Diagnostic test of predicted height model in Indonesian elderly: a study in an urban area

    Directory of Open Access Journals (Sweden)

    Fatmah Fatmah

    2010-08-01

Aim: In anthropometric assessment, the elderly frequently cannot have their height measured because of mobility problems and skeletal deformities. An alternative is to use a surrogate measure of stature such as arm span, knee height, or sitting height. Equations for predicting height in Indonesian elderly were developed using these three predictors and put into a nutritional status assessment (NSA) card for older people. Before the card, which is the first such tool in Indonesia, is applied in the community, it should be tested. The aim of this study was to conduct a diagnostic test of the predicted height model in the card against actual height. Methods: Model validation on 400 healthy elderly was conducted in Jakarta City with a cross-sectional design. This was the second validation test of the model, after a first study in Depok City representing a semi-urban area. Results: Male elderly had higher mean age, height, weight, arm span, knee height, and sitting height than female elderly. The correlation between knee height and standing height was highest and similar in women (r = 0.80; P < 0.001) and men (r = 0.78; P < 0.001), followed by arm span and sitting height. Knee height had the smallest difference from standing height in men (3.13 cm) and women (2.79 cm). Knee height had the highest sensitivity (92.2%), and sitting height the highest specificity (91.2%). Conclusion: Stature prediction equations based on knee height, arm span, and sitting height are applicable for nutritional status assessment in Indonesian elderly. (Med J Indones 2010;19:199-204) Key words: diagnostic test, elderly, predicted height model
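As an illustration of the diagnostic-test idea (the card's actual prediction coefficients are not reproduced here, so the equation and all data below are invented), one can simulate a surrogate-stature equation and score it against measured height when both feed a BMI-based undernutrition classification:

```python
import numpy as np

# Diagnostic-test sketch: a hypothetical linear knee-height equation
# stands in for the card's stature model; we then compare BMI-based
# classification using predicted vs measured height.
rng = np.random.default_rng(2)
n = 400
knee = rng.normal(50, 3, n)                       # knee height, cm
height = 2.0 * knee + rng.normal(64, 3, n)        # "measured" stature, cm
weight = rng.normal(55, 8, n)                     # body weight, kg

pred_height = 2.0 * knee + 64                     # hypothetical equation

def low_bmi(w, h):                                # undernutrition cut-off
    return w / (h / 100) ** 2 < 18.5

actual = low_bmi(weight, height)
predicted = low_bmi(weight, pred_height)
sensitivity = np.sum(actual & predicted) / np.sum(actual)
specificity = np.sum(~actual & ~predicted) / np.sum(~actual)
```

Sensitivity and specificity computed this way are the quantities the study reports for each surrogate predictor.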

  1. Modelling sexual transmission of HIV: testing the assumptions, validating the predictions

    Science.gov (United States)

    Baggaley, Rebecca F.; Fraser, Christophe

    2010-01-01

    Purpose of review To discuss the role of mathematical models of sexual transmission of HIV: the methods used and their impact. Recent findings We use mathematical modelling of “universal test and treat” as a case study to illustrate wider issues relevant to all modelling of sexual HIV transmission. Summary Mathematical models are used extensively in HIV epidemiology to deduce the logical conclusions arising from one or more sets of assumptions. Simple models lead to broad qualitative understanding, while complex models can encode more realistic assumptions and thus be used for predictive or operational purposes. An overreliance on model analysis where assumptions are untested and input parameters cannot be estimated should be avoided. Simple models providing bold assertions have provided compelling arguments in recent public health policy, but may not adequately reflect the uncertainty inherent in the analysis. PMID:20543600

  2. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.; Mai, Paul Martin; Thingbaijam, Kiran Kumar; Razafindrakoto, H. N. T.; Genton, Marc G.

    2014-01-01

Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  3. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  4. PREDICTIONS OF DISPERSION AND DEPOSITION OF FALLOUT FROM NUCLEAR TESTING USING THE NOAA-HYSPLIT METEOROLOGICAL MODEL

    Science.gov (United States)

    Moroz, Brian E.; Beck, Harold L.; Bouville, André; Simon, Steven L.

    2013-01-01

The NOAA Hybrid Single-Particle Lagrangian Integrated Trajectory Model (HYSPLIT) was evaluated as a research tool to simulate the dispersion and deposition of radioactive fallout from nuclear tests. Model-based estimates of fallout can be valuable for use in the reconstruction of past exposures from nuclear testing, particularly where little historical fallout monitoring data are available. The ability to make reliable predictions about fallout deposition could also have significant importance for nuclear events in the future. We evaluated the accuracy of the HYSPLIT-predicted geographic patterns of deposition by comparing those predictions against known deposition patterns following specific nuclear tests with an emphasis on nuclear weapons tests conducted in the Marshall Islands. We evaluated the ability of the computer code to quantitatively predict the proportion of fallout particles of specific sizes deposited at specific locations as well as their time of transport. In our simulations of fallout from past nuclear tests, historical meteorological data were used from a reanalysis conducted jointly by the National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR). We used a systematic approach in testing the HYSPLIT model by simulating the release of a range of particle sizes from a range of altitudes and evaluating the number and location of particles deposited. Our findings suggest that the quantity and quality of meteorological data are the most important factors for accurate fallout predictions and that, when satisfactory meteorological input data are used, HYSPLIT can produce relatively accurate deposition patterns and fallout arrival times. Furthermore, when no other measurement data are available, HYSPLIT can be used to indicate whether or not fallout might have occurred at a given location and provide, at minimum, crude quantitative estimates of the magnitude of the deposited activity. A variety of

  5. Predictions of dispersion and deposition of fallout from nuclear testing using the NOAA-HYSPLIT meteorological model.

    Science.gov (United States)

    Moroz, Brian E; Beck, Harold L; Bouville, André; Simon, Steven L

    2010-08-01

    The NOAA Hybrid Single-Particle Lagrangian Integrated Trajectory Model (HYSPLIT) was evaluated as a research tool to simulate the dispersion and deposition of radioactive fallout from nuclear tests. Model-based estimates of fallout can be valuable for use in the reconstruction of past exposures from nuclear testing, particularly where little historical fallout monitoring data are available. The ability to make reliable predictions about fallout deposition could also have significant importance for nuclear events in the future. We evaluated the accuracy of the HYSPLIT-predicted geographic patterns of deposition by comparing those predictions against known deposition patterns following specific nuclear tests with an emphasis on nuclear weapons tests conducted in the Marshall Islands. We evaluated the ability of the computer code to quantitatively predict the proportion of fallout particles of specific sizes deposited at specific locations as well as their time of transport. In our simulations of fallout from past nuclear tests, historical meteorological data were used from a reanalysis conducted jointly by the National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR). We used a systematic approach in testing the HYSPLIT model by simulating the release of a range of particle sizes from a range of altitudes and evaluating the number and location of particles deposited. Our findings suggest that the quantity and quality of meteorological data are the most important factors for accurate fallout predictions and that, when satisfactory meteorological input data are used, HYSPLIT can produce relatively accurate deposition patterns and fallout arrival times. Furthermore, when no other measurement data are available, HYSPLIT can be used to indicate whether or not fallout might have occurred at a given location and provide, at minimum, crude quantitative estimates of the magnitude of the deposited activity. A variety of

  6. Life prediction of OLED for constant-stress accelerated degradation tests using luminance decaying model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jianping, E-mail: jpzhanglzu@163.com [College of Energy and Mechanical Engineering, Shanghai University of Electric Power, Shanghai 200090 (China); Li, Wenbin [College of Energy and Mechanical Engineering, Shanghai University of Electric Power, Shanghai 200090 (China); Cheng, Guoliang; Chen, Xiao [Shanghai Tianyi Electric Co., Ltd., Shanghai 201611 (China); Wu, Helen [School of Computing, Engineering and Mathematics, University of Western Sydney, Sydney 2751 (Australia); Herman Shen, M.-H. [Department of Mechanical and Aerospace Engineering, The Ohio State University, OH 43210 (United States)

    2014-10-15

    In order to acquire the life information of organic light emitting diodes (OLEDs), three groups of constant-stress accelerated degradation tests were performed to obtain luminance decaying data from samples, with luminance selected as the indicator of performance degradation and current as the test stress. A Weibull function is applied to describe the relationship between luminance decay and time, the least-squares method (LSM) is employed to calculate the shape and scale parameters, and the life prediction of the OLED is achieved. The numerical results indicate that the accelerated degradation tests and the luminance decaying model reveal the luminance decay law of the OLED. The luminance decay formula fits the test data very well, and the average error of the fitted values relative to the test data is small. Furthermore, the accuracy of the OLED life predicted by the luminance decaying model is high, which enables rapid estimation of OLED life and provides useful guidance for design and manufacturing decisions from a reliability standpoint. - Highlights: • We gain luminance decaying data by accelerated degradation tests on OLEDs. • The luminance decaying model objectively reveals the decay law of OLED luminance. • The least-squares method (LSM) is employed to calculate the Weibull parameters. • The plan designed for the accelerated degradation tests proves to be feasible. • The accuracy of the OLED life prediction and the luminance decay fitting formula is high.
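
The Weibull/least-squares procedure described in this record can be sketched in a few lines. The decay form L(t)/L0 = exp(-(t/η)^β), the parameter values, and the data below are all assumed for illustration; the paper's actual test data and stress levels are not reproduced here.

```python
import numpy as np

# Synthetic relative luminance following an assumed Weibull decay
# L(t)/L0 = exp(-(t/eta)**beta); beta (shape) and eta (scale) are hypothetical.
beta_true, eta_true = 1.3, 5000.0
t = np.linspace(100, 8000, 40)          # test times (h)
L_rel = np.exp(-(t / eta_true) ** beta_true)

# Linearize: ln(-ln(L/L0)) = beta*ln(t) - beta*ln(eta), then ordinary least squares
x = np.log(t)
y = np.log(-np.log(L_rel))
beta_hat, intercept = np.polyfit(x, y, 1)   # slope = beta, intercept = -beta*ln(eta)
eta_hat = np.exp(-intercept / beta_hat)

print(round(beta_hat, 4), round(eta_hat, 1))
```

With noiseless data the linearized fit recovers the parameters essentially exactly; with real luminance measurements the same two lines of algebra give the LSM estimates the abstract refers to.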

  7. Life prediction of OLED for constant-stress accelerated degradation tests using luminance decaying model

    International Nuclear Information System (INIS)

    Zhang, Jianping; Li, Wenbin; Cheng, Guoliang; Chen, Xiao; Wu, Helen; Herman Shen, M.-H.

    2014-01-01

    In order to acquire the life information of organic light emitting diodes (OLEDs), three groups of constant-stress accelerated degradation tests were performed to obtain luminance decaying data from samples, with luminance selected as the indicator of performance degradation and current as the test stress. A Weibull function is applied to describe the relationship between luminance decay and time, the least-squares method (LSM) is employed to calculate the shape and scale parameters, and the life prediction of the OLED is achieved. The numerical results indicate that the accelerated degradation tests and the luminance decaying model reveal the luminance decay law of the OLED. The luminance decay formula fits the test data very well, and the average error of the fitted values relative to the test data is small. Furthermore, the accuracy of the OLED life predicted by the luminance decaying model is high, which enables rapid estimation of OLED life and provides useful guidance for design and manufacturing decisions from a reliability standpoint. - Highlights: • We gain luminance decaying data by accelerated degradation tests on OLEDs. • The luminance decaying model objectively reveals the decay law of OLED luminance. • The least-squares method (LSM) is employed to calculate the Weibull parameters. • The plan designed for the accelerated degradation tests proves to be feasible. • The accuracy of the OLED life prediction and the luminance decay fitting formula is high.

  8. Application of the MIT two-channel model to predict flow recirculation in WARD 61-pin blanket tests

    International Nuclear Information System (INIS)

    Huang, T.T.; Todreas, N.E.

    1983-01-01

    The preliminary application of the MIT two-channel model to the WARD sodium blanket tests is presented in this report. The criterion was employed to predict recirculation for selected completed (transient and steady-state) and proposed (transient-only) tests. The heat loss was correlated from the results of the WARD zero-power tests. The calculational results show that the criterion agrees with the WARD tests except for WARD RUN 718, for which the criterion predicts a different result from the WARD data under the bundle heat-loss condition. However, if the test assembly is adiabatic, the calculations predict an operating point that is marginally close to the mixed-to-recirculation transition regime.

  9. Application of the MIT two-channel model to predict flow recirculation in WARD 61-pin blanket tests

    International Nuclear Information System (INIS)

    Huang, T.T.; Todreas, N.E.

    1983-01-01

    The preliminary application of the MIT two-channel model to the WARD sodium blanket tests is presented in this report. Our criterion was employed to predict recirculation for selected completed (transient and steady-state) and proposed (transient-only) tests. The heat loss was correlated from the results of the WARD zero-power tests. The calculational results show that our criterion agrees with the WARD tests except for WARD RUN 718, for which the criterion predicts a different result from the WARD data under the bundle heat-loss condition. However, if the test assembly is adiabatic, the calculations predict an operating point that is marginally close to the mixed-to-recirculation transition regime.

  10. Testing and intercomparison of model predictions of radionuclide migration from a hypothetical area source

    International Nuclear Information System (INIS)

    O'Brien, R.S.; Yu, C.; Zeevaert, T.; Olyslaegers, G.; Amado, V.; Setlow, L.W.; Waggitt, P.W.

    2008-01-01

    This work was carried out as part of the International Atomic Energy Agency's EMRAS program. One aim of the work was to develop scenarios for testing computer models designed to simulate radionuclide migration in the environment, and to use these scenarios for testing the models and comparing predictions from different models. This paper presents the results of the development and testing of a hypothetical area source of NORM waste/residue using two complex computer models and one screening model. There are significant differences between the complex models in the methods used to model groundwater flow. The hypothetical source was used because of its relative simplicity and because of difficulties encountered in finding comprehensive, well-validated data sets for real sites. The source consisted of a simple repository of uniform thickness, with 1 Bq g⁻¹ of uranium-238 (²³⁸U) (in secular equilibrium with its decay products) distributed uniformly throughout the waste. This approximates real situations such as engineered repositories, waste rock piles, tailings piles and landfills. Specification of the site also included the physical layout, vertical stratigraphic details, soil type for each layer of material, precipitation and runoff details, groundwater flow parameters, and meteorological data. Calculations were carried out with and without a cover layer of clean soil above the waste, for people working and living at different locations relative to the waste. The predictions of the two complex models showed several differences which need more detailed examination. The scenario is available for testing by other modelers. It can also be used as a planning tool for remediation work or for repository design, by changing the scenario parameters and running the models for a range of different inputs. 
Further development will include applying models to real scenarios and integrating environmental impact assessment methods with the safety assessment tools currently

  11. Predicting Examination Performance Using an Expanded Integrated Hierarchical Model of Test Emotions and Achievement Goals

    Science.gov (United States)

    Putwain, Dave; Deveney, Carolyn

    2009-01-01

    The aim of this study was to examine an expanded integrative hierarchical model of test emotions and achievement goal orientations in predicting the examination performance of undergraduate students. Achievement goals were theorised as mediating the relationship between test emotions and performance. 120 undergraduate students completed…

  12. Validation and Refinement of Prediction Models to Estimate Exercise Capacity in Cancer Survivors Using the Steep Ramp Test.

    Science.gov (United States)

    Stuiver, Martijn M; Kampshoff, Caroline S; Persoon, Saskia; Groen, Wim; van Mechelen, Willem; Chinapaw, Mai J M; Brug, Johannes; Nollet, Frans; Kersten, Marie-José; Schep, Goof; Buffart, Laurien M

    2017-11-01

    To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors by external validation and extension of previously published prediction models for peak oxygen consumption (VO2peak) and peak power output (Wpeak). Cross-sectional study. Multicenter. Cancer survivors (N=283) in 2 randomized controlled exercise trials. Not applicable. Prediction model accuracy was assessed by intraclass correlation coefficients (ICCs) and limits of agreement (LOA). Multiple linear regression was used for model extension. Clinical performance was judged by the percentage of accurate endurance exercise prescriptions. ICCs of SRT-predicted VO2peak and Wpeak with these values as obtained by the cardiopulmonary exercise test were .61 and .73, respectively, using the previously published prediction models. 95% LOA were ±705mL/min with a bias of 190mL/min for VO2peak and ±59W with a bias of 5W for Wpeak. Modest improvements were obtained by adding body weight and sex to the regression equation for the prediction of VO2peak (ICC, .73; 95% LOA, ±608mL/min) and by adding age, height, and sex for the prediction of Wpeak (ICC, .81; 95% LOA, ±48W). Accuracy of endurance exercise prescription improved from 57% accurate prescriptions to 68% accurate prescriptions with the new prediction model for Wpeak. Predictions of VO2peak and Wpeak based on the SRT are adequate at the group level, but insufficiently accurate in individual patients. The multivariable prediction model for Wpeak can be used cautiously (eg, supplemented with a Borg score) to aid endurance exercise prescription. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
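
The bias and 95% limits of agreement reported in this record come from a standard Bland-Altman calculation on paired predicted/measured values. A minimal sketch, using synthetic stand-ins for the CPET-measured and SRT-predicted VO2peak values rather than the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical paired values (mL/min): CPET-measured vs. SRT-predicted VO2peak,
# with an assumed systematic bias of ~190 mL/min plus random error.
measured = rng.normal(2200, 400, 100)
predicted = measured + rng.normal(190, 350, 100)

diff = predicted - measured
bias = diff.mean()
loa_half_width = 1.96 * diff.std(ddof=1)   # 95% limits of agreement: bias ± this

print(f"bias = {bias:.0f} mL/min, 95% LOA = ±{loa_half_width:.0f} mL/min")
```

The study's "±705 mL/min with a bias of 190 mL/min" is exactly this pair of numbers computed on the real paired measurements.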

  13. A method for testing whether model predictions fall within a prescribed factor of true values, with an application to pesticide leaching

    Science.gov (United States)

    Parrish, Rudolph S.; Smith, Charles N.

    1990-01-01

    A quantitative method is described for testing whether model predictions fall within a specified factor of true values. The technique is based on classical theory for confidence regions on unknown population parameters and can be related to hypothesis testing in both univariate and multivariate situations. A capability index is defined that can be used as a measure of predictive capability of a model, and its properties are discussed. The testing approach and the capability index should facilitate model validation efforts and permit comparisons among competing models. An example is given for a pesticide leaching model that predicts chemical concentrations in the soil profile.
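
The core criterion, whether a prediction falls within a prescribed factor f of the true value, reduces to a comparison on the log scale. A minimal sketch with invented concentration pairs; the paper's full confidence-region machinery and capability index are not reproduced here:

```python
import math

# Hypothetical paired (predicted, observed) pesticide concentrations
pairs = [(1.2, 1.0), (0.8, 1.1), (3.5, 1.0), (0.9, 0.5), (2.0, 1.8)]
factor = 2.0  # "within a factor of 2" criterion

def within_factor(pred, obs, f):
    # Equivalent to obs/f <= pred <= obs*f, checked on the log scale
    return abs(math.log(pred / obs)) <= math.log(f)

flags = [within_factor(p, o, factor) for p, o in pairs]
fraction_within = sum(flags) / len(flags)
print(flags, fraction_within)
```

The statistical test in the paper then asks whether this agreement holds for the population of prediction errors, not just the observed fraction.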

  14. [Testing a Model to Predict Problem Gambling in Speculative Game Users].

    Science.gov (United States)

    Park, Hyangjin; Kim, Suk Sun

    2018-04-01

    The purpose of the study was to develop and test a model for predicting problem gambling in speculative game users based on Blaszczynski and Nower's pathways model of problem and pathological gambling. The participants were 262 speculative game users recruited from seven speculative gambling places located in Seoul, Gangwon, and Gyeonggi, Korea. They completed a structured self-report questionnaire comprising measures of problem gambling, negative emotions, attentional impulsivity, motor impulsivity, non-planning impulsivity, gambler's fallacy, and gambling self-efficacy. Structural Equation Modeling was used to test the hypothesized model and to examine the direct and indirect effects on problem gambling in speculative game users using SPSS 22.0 and AMOS 20.0 programs. The hypothetical research model provided a reasonable fit to the data. Negative emotions, motor impulsivity, gambler's fallacy, and gambling self-efficacy had direct effects on problem gambling in speculative game users, while indirect effects were reported for negative emotions, motor impulsivity, and gambler's fallacy. These predictors explained 75.2% of problem gambling in speculative game users. The findings suggest that developing intervention programs to reduce negative emotions, motor impulsivity, and gambler's fallacy, and to increase gambling self-efficacy, is needed to prevent problem gambling in speculative game users. © 2018 Korean Society of Nursing Science.

  15. Predicting cyberbullying perpetration in emerging adults: A theoretical test of the Barlett Gentile Cyberbullying Model.

    Science.gov (United States)

    Barlett, Christopher; Chamberlin, Kristina; Witkower, Zachary

    2017-04-01

    The Barlett and Gentile Cyberbullying Model (BGCM) is a learning-based theory that posits the importance of positive cyberbullying attitudes in predicting subsequent cyberbullying perpetration. Furthermore, the tenets of the BGCM state that cyberbullying attitudes are likely to form when the online aggressor believes that the online environment allows individuals of all physical sizes to harm others and that they are perceived as anonymous. Past work has tested parts of the BGCM; no study has used longitudinal methods to examine this model fully. The current study (N = 161) employed a three-wave longitudinal design to test the BGCM. Participants (age range: 18-24) completed measures of the belief that physical strength is irrelevant online and anonymity perceptions at Wave 1, cyberbullying attitudes at Wave 2, and cyberbullying perpetration at Wave 3. Results showed strong support for the BGCM: anonymity perceptions and the belief that physical attributes are irrelevant online at Wave 1 predicted Wave 2 cyberbullying attitudes, which predicted subsequent Wave 3 cyberbullying perpetration. These results support the BGCM and are the first to show empirical support for this model. Aggr. Behav. 43:147-154, 2017. © 2016 Wiley Periodicals, Inc.

  16. A human hemi-cornea model for eye irritation testing: quality control of production, reliability and predictive capacity.

    Science.gov (United States)

    Engelke, M; Zorn-Kruppa, M; Gabel, D; Reisinger, K; Rusche, B; Mewes, K R

    2013-02-01

    We have developed a 3-dimensional human hemi-cornea which comprises an immortalized epithelial cell line and keratocytes embedded in a collagen stroma. In the present study, we have used MTT reduction of the whole tissue to clarify whether the production of this complex 3-D model is transferable to other laboratories and whether these tissues can be constructed reproducibly. Our results demonstrate the reproducible production of the hemi-cornea model according to standard operating procedures using 15 independent batches of reconstructed hemi-cornea models in each of two independent laboratories. Furthermore, the hemi-cornea tissues have been treated with 20 chemicals of different eye-irritating potential under blind conditions to assess the performance and limitations of our test system, comparing three different prediction models. The most suitable prediction model revealed an overall in vitro-in vivo concordance of 80% and 70% in the participating laboratories, respectively, and an inter-laboratory concordance of 80%. Sensitivity of the test was 77% and specificity was between 57% and 86% to discriminate classified from non-classified chemicals. We conclude that additional physiologically relevant endpoints in both epithelium and stroma have to be developed for the reliable prediction of all GHS classes of eye irritation in one stand-alone test system. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research on models for bankruptcy prediction and credit risk management has been conducted by academics and practitioners. In spite of numerous studies on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend toward machine learning models (support vector machines, bagging, boosting, and random forest) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models will be remodelled according to new trends by calculating the influence of the elimination of selected variables on their overall prediction ability.

  18. The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing.

    Directory of Open Access Journals (Sweden)

    Jaclyn K Mann

    2014-08-01

    Full Text Available Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model), which is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple, and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis, and the replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10⁻⁶) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10⁻¹²). Performance of the Potts model (r = -0.73, p = 9.7×10⁻⁹) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion.

  19. Testing and analysis of internal hardwood log defect prediction models

    Science.gov (United States)

    R. Edward Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  20. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  1. Testing the Predictions of the Central Capacity Sharing Model

    Science.gov (United States)

    Tombu, Michael; Jolicoeur, Pierre

    2005-01-01

    The divergent predictions of 2 models of dual-task performance are investigated. The central bottleneck and central capacity sharing models argue that a central stage of information processing is capacity limited, whereas stages before and after are capacity free. The models disagree about the nature of this central capacity limitation. The…

  2. Prediction of skull fracture risk for children 0-9 months old through validated parametric finite element model and cadaver test reconstruction.

    Science.gov (United States)

    Li, Zhigang; Liu, Weiguo; Zhang, Jinhuan; Hu, Jingwen

    2015-09-01

    Skull fracture is one of the most common pediatric traumas. However, injury assessment tools for predicting pediatric skull fracture risk are not well established, mainly due to the lack of cadaver tests. Weber conducted 50 pediatric cadaver drop tests for forensic research on child abuse in the mid-1980s (Experimental studies of skull fractures in infants, Z Rechtsmed. 92: 87-94, 1984; Biomechanical fragility of the infant skull, Z Rechtsmed. 94: 93-101, 1985). To our knowledge, these studies contained the largest sample size among pediatric cadaver tests in the literature. However, the lack of injury measurements limited their direct application in investigating pediatric skull fracture risks. In this study, 50 pediatric cadaver tests from Weber's studies were reconstructed using a parametric pediatric head finite element (FE) model, which was morphed into subjects with the ages, head sizes/shapes, and skull thickness values reported in the tests. The skull fracture risk curves for infants from 0 to 9 months old were developed based on the model-predicted head injury measures through logistic regression analysis. It was found that the model-predicted stress responses in the skull (maximal von Mises stress, maximal shear stress, and maximal first principal stress) were better predictors than global kinematic-based injury measures (peak head acceleration and head injury criterion (HIC)) in predicting pediatric skull fracture. This study demonstrated the feasibility of using age- and size/shape-appropriate head FE models to predict pediatric head injuries. Such models can account for the morphological variations among the subjects, which cannot be considered by a single FE human model.
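
The final step, logistic regression of fracture outcome on a model-predicted stress measure, can be sketched as follows. All data and coefficients are simulated; this only illustrates the shape of a risk-curve fit, not the paper's actual values.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical training data: FE-predicted peak von Mises stress (MPa) and
# binary fracture outcome, generated from an assumed true logistic relationship.
stress = rng.uniform(5, 60, 300)
p_true = 1 / (1 + np.exp(-(-6.0 + 0.2 * stress)))
fracture = rng.binomial(1, p_true)

# Fit logistic regression by Newton-Raphson (IRLS) on the log-likelihood
X = np.column_stack([np.ones_like(stress), stress])
w = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ w))
    grad = X.T @ (fracture - p)                       # score vector
    H = X.T @ (X * (p * (1 - p))[:, None])            # observed information
    w += np.linalg.solve(H, grad)

# Risk curve: fracture probability at a given predicted stress level
risk_at_30 = 1 / (1 + np.exp(-(w[0] + w[1] * 30.0)))
print(w, risk_at_30)
```

The fitted curve recovers coefficients close to the generating values, which is the same logic the study applies to the reconstructed cadaver tests.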

  3. The use of Chernobyl data to test model predictions for interindividual variability of 137Cs concentrations in humans

    International Nuclear Information System (INIS)

    Hoffman, F. Owen; Thiessen, Kathleen M.

    1996-01-01

    Data sets assembled in the aftermath of the Chernobyl accident as a part of the International Atomic Energy Agency's model testing program (VAMP) have provided a rare opportunity for 'blind-testing' predictions made with exposure assessment models. Measurements of Chernobyl-derived ¹³⁷Cs in Central Bohemia (Czech Republic) and southern Finland were used to test model predictions for a number of endpoints, including the distribution of whole-body concentrations of ¹³⁷Cs in adults in these regions at specified time points. This test endpoint required separation of uncertainty due to stochastic variability (aleatoric uncertainty) and uncertainty due to lack of knowledge about fixed but unknown values (epistemic uncertainty). Predictions of the distribution of whole-body ¹³⁷Cs concentrations were made by a minority of the participants in these model-testing exercises. Major reasons for misprediction included bias in the bioavailability of ¹³⁷Cs in soil and misestimation of the total intake of ¹³⁷Cs in the diet. Overestimation of the amount of interindividual variability often resulted from confusion of uncertainty with variability. The spreads of the distributions for parameters describing interindividual variability were frequently increased to compensate for lack of knowledge about the uptake and metabolism of ¹³⁷Cs in the population. Accurate results produced by participants are attributable both to a participant's access to additional site-specific data or choice of appropriate site-specific assumptions and to the effects of compensatory errors

  4. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  5. Risk predictive modelling for diabetes and cardiovascular disease.

    Science.gov (United States)

    Kengne, Andre Pascal; Masconi, Katya; Mbanya, Vivian Nchanchou; Lekoubou, Alain; Echouffo-Tcheugui, Justin Basile; Matsha, Tandi E

    2014-02-01

    Absolute risk models or clinical prediction models have been incorporated in guidelines, and are increasingly advocated as tools to assist risk stratification and guide prevention and treatment decisions relating to common health conditions such as cardiovascular disease (CVD) and diabetes mellitus. We have reviewed the historical development and principles of prediction research, including their statistical underpinning, as well as implications for routine practice, with a focus on predictive modelling for CVD and diabetes. Predictive modelling for CVD risk, which has developed over the last five decades, has been largely influenced by the Framingham Heart Study investigators, while it is only ∼20 years ago that similar efforts were started in the field of diabetes. Identification of predictive factors is an important preliminary step which provides the knowledge base on potential predictors to be tested for inclusion during the statistical derivation of the final model. The derived models must then be tested both on the development sample (internal validation) and on other populations in different settings (external validation). Updating procedures (e.g. recalibration) should be used to improve the performance of models that fail the tests of external validation. Ultimately, the effect of introducing validated models in routine practice on the process and outcomes of care, as well as its cost-effectiveness, should be tested in impact studies before wide dissemination of models beyond the research context. Several prediction models have been developed for CVD or diabetes, but very few have been externally validated or tested in impact studies, and their comparative performance has yet to be fully assessed. A shift of focus from developing new CVD or diabetes prediction models to validating the existing ones will improve their adoption in routine practice.

  6. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  7. Testing the Predictive Power of Coulomb Stress on Aftershock Sequences

    Science.gov (United States)

    Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.

    2009-12-01

    Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.
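
The likelihood-based scoring used in CSEP-style forecast tests typically assumes Poisson-distributed counts per space-time bin, so competing models can be ranked by their joint log-likelihood. A toy comparison of two forecasts (all rates and counts are invented for illustration):

```python
import math

# Observed aftershock counts in six hypothetical spatial bins
observed = [3, 0, 1, 5, 0, 2]
# Forecast rates (expected counts per bin) from two hypothetical models
forecast_stat = [2.5, 0.4, 1.2, 4.0, 0.3, 1.6]     # statistical (e.g. ETAS-like)
forecast_coulomb = [1.0, 0.1, 0.5, 6.0, 1.5, 0.9]  # Coulomb-stress-based

def log_likelihood(rates, counts):
    # Sum of Poisson log-pmfs: -rate + count*ln(rate) - ln(count!)
    return sum(-r + c * math.log(r) - math.lgamma(c + 1)
               for r, c in zip(rates, counts))

ll_stat = log_likelihood(forecast_stat, observed)
ll_coul = log_likelihood(forecast_coulomb, observed)
print(ll_stat, ll_coul)  # higher (less negative) is better
```

In this invented example the statistical forecast scores higher, mirroring the paper's finding that a statistical model performed best on the tested sequences.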

  8. Customer Propensity to File Lawsuits: Development and Test of a Predictive Model for the Electricity Sector

    Directory of Open Access Journals (Sweden)

    Victor Manoel Cunha de Almeida

    2014-11-01

    Full Text Available The study aimed to propose and test a model to predict the propensity of power sector utility customers to file lawsuits. The effects of customer profile, motives of complaints, and the history of administrative actions on the propensity to file lawsuits were investigated. The paradigm of disconfirmation of expectations was used as the theoretical framework for this study. We adopted a substantive approach to the development and testing of the predictive model. The Classification Tree technique was chosen to operationalize the model, and the method specified in this study for building the decision tree was CHAID (Chi-squared Automatic Interaction Detection). Data analysis shows that the propensity to file a lawsuit does not depend solely on the nature of the problem faced by the client, but also on the customer's profile and relationship history with the utility provider. The results of this study offer empirical support to the theoretical paradigm of disconfirmation of expectations, more specifically with regard to Satisfaction Theory, Attribution Theory, and the Theory of Justice and Equity. The main managerial contribution of the study lies in proposing a predictive model that allows utility providers to assign each customer a probability of filing a lawsuit, which enables managers to proactively adopt practices aimed at better serving the public.
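
CHAID's core step, choosing at each node the predictor with the strongest chi-square association with the outcome, can be sketched in a few lines. The contingency tables below are invented for illustration; a full CHAID implementation would also merge predictor categories and apply Bonferroni-adjusted p-values.

```python
def chi_square(table):
    """Pearson chi-square statistic for a 2-D contingency table (list of rows)."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n
            stat += (obs - exp) ** 2 / exp
    return stat

# Rows = categories of a candidate predictor, columns = (no lawsuit, lawsuit)
by_complaint_type = [[80, 20], [30, 40]]   # e.g. billing vs. service complaints
by_region         = [[55, 30], [55, 30]]   # identical rows -> no association

# The node splits on the predictor with the largest chi-square statistic
best = max([("complaint_type", chi_square(by_complaint_type)),
            ("region", chi_square(by_region))], key=lambda kv: kv[1])
print(best)
```

Here the split would be made on complaint type, since the region table shows no association with the lawsuit outcome at all.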

  9. Progress in sensor performance testing, modeling and range prediction using the TOD method: an overview

    Science.gov (United States)

    Bijl, Piet; Hogervorst, Maarten A.; Toet, Alexander

    2017-05-01

    The Triangle Orientation Discrimination (TOD) methodology includes i) a widely applicable, accurate end-to-end EO/IR sensor test, ii) an image-based sensor system model and iii) a Target Acquisition (TA) range model. The method has been extensively validated against TA field performance for a wide variety of well- and under-sampled imagers, systems with advanced image processing techniques such as dynamic super resolution and local adaptive contrast enhancement, and sensors showing smear or noise drift, for both static and dynamic test stimuli and as a function of target contrast. Recently, significant progress has been made in various directions. Dedicated visual and NIR test charts for lab and field testing are available and thermal test benches are on the market. Automated sensor testing using an objective synthetic human observer is within reach. Both an analytical and an image-based TOD model have recently been developed and are being implemented in the European Target Acquisition model ECOMOS and in the EOSTAR TDA. Further, the methodology is being applied for design optimization of high-end security camera systems. Finally, results from a recent perception study suggest that DRI ranges for real targets can be predicted by replacing the relevant distinctive target features by TOD test patterns of the same characteristic size and contrast, enabling a new TA modeling approach. This paper provides an overview.

  10. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    This paper addresses the use of methods from nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbours found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The resulting models were tested on storm surge dynamics under different stormy conditions in the North Sea and compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
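    The record's core idea (delay-embed the observed series into phase space, find the dynamical neighbours of the current state, and average their successors) can be sketched as follows. This is a minimal illustration on a simple periodic test signal, not the authors' optimized multivariate implementation; the embedding parameters and the signal are assumptions.

```python
import math

def embed(series, dim, tau):
    """Delay-embed a scalar series into dim-dimensional phase-space vectors."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

def predict_next(series, dim=3, tau=1, k=3):
    """One-step local prediction: average the successors of the k nearest
    dynamical neighbours of the current phase-space point."""
    vectors = embed(series, dim, tau)
    query = vectors[-1]
    # only vectors whose successor lies inside the series can serve as neighbours
    candidates = list(enumerate(vectors[:-1]))
    candidates.sort(key=lambda item: math.dist(item[1], query))
    return sum(series[i + (dim - 1) * tau + 1] for i, _ in candidates[:k]) / k

# simple periodic test signal, so the prediction can be checked against truth
series = [math.sin(0.3 * t) for t in range(200)]
print(round(predict_next(series), 3))
```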

  11. Predicting bending strength of fire-retardant-treated plywood from screw-withdrawal tests

    Science.gov (United States)

    J. E. Winandy; P. K. Lebow; W. Nelson

    This report describes the development of a test method and predictive model to estimate the residual bending strength of fire-retardant-treated plywood roof sheathing from measurement of screw-withdrawal force. The preferred test methodology is described in detail. Models were developed to predict loss in mean and lower prediction bounds for plywood bending strength as...

  12. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD = 1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R² = 0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.

  13. Validation and Refinement of Prediction Models to Estimate Exercise Capacity in Cancer Survivors Using the Steep Ramp Test

    NARCIS (Netherlands)

    Stuiver, Martijn M.; Kampshoff, Caroline S.; Persoon, Saskia; Groen, Wim; van Mechelen, Willem; Chinapaw, Mai J. M.; Brug, Johannes; Nollet, Frans; Kersten, Marie-José; Schep, Goof; Buffart, Laurien M.

    2017-01-01

    Objective: To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors by external validation and extension of previously published prediction models for peak oxygen consumption (VO2peak) and peak power output (Wpeak).

  14. Prediction of EMP cavitation threshold from other than sodium testing

    International Nuclear Information System (INIS)

    Kambe, M.; Kamei, M.

    2002-01-01

    An experimental study has been performed to predict the cavitation threshold of electromagnetic pumps from measurements on test models using water and alcohol. Cavitation tests were carried out in a water and alcohol test loop on subscale ducts of transparent acrylic resin, with reference to an actual pump (1.1 m³/min). These data were compared to those obtained from the in-sodium tests on the actual pump. The investigation revealed that the value of Thoma's dimensionless parameter σ in the water and alcohol test models is considerably higher than the corresponding σ of the actual pump. To minimize the incipient cavitation safety margin, a more accurate prediction is required. In view of this, the authors proposed the dimensionless parameter σT = σ/We, where We denotes the Weber number. This parameter was confirmed to predict the cavitation threshold of electromagnetic pumps with much greater accuracy than before. It can also be adopted to predict the cavitation threshold of other FBR components. (author)
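    A minimal sketch of the proposed Weber-scaled cavitation parameter, using the standard textbook definitions of the Thoma cavitation number and Weber number. The record does not give the paper's exact conventions or fluid values, so both the formula conventions and the numbers below are illustrative assumptions.

```python
def thoma_sigma(p_static, p_vapour, rho, v):
    """Cavitation number: margin of static pressure over vapour pressure,
    normalised by the dynamic pressure (one common convention)."""
    return (p_static - p_vapour) / (0.5 * rho * v ** 2)

def weber(rho, v, length, surface_tension):
    """Weber number: ratio of inertial to surface-tension forces."""
    return rho * v ** 2 * length / surface_tension

def sigma_t(p_static, p_vapour, rho, v, length, surface_tension):
    """Weber-scaled cavitation parameter sigma_T = sigma / We, as proposed in
    the record above to reconcile model-fluid data with sodium data."""
    return thoma_sigma(p_static, p_vapour, rho, v) / weber(rho, v, length, surface_tension)

# water at roughly 20 degC in a subscale duct -- illustrative values only
print(sigma_t(p_static=101_325.0, p_vapour=2_339.0, rho=998.0, v=5.0,
              length=0.05, surface_tension=0.0728))
```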

  15. Testing a 1-D Analytical Salt Intrusion Model and the Predictive Equation in Malaysian Estuaries

    Science.gov (United States)

    Gisen, Jacqueline Isabella; Savenije, Hubert H. G.

    2013-04-01

    Little is known about salt intrusion behaviour in Malaysian estuaries. Studies on this topic often require large amounts of data, especially if 2-D or 3-D numerical models are used for analysis. In poor data environments, 1-D analytical models are more appropriate. For this reason, a fully analytical 1-D salt intrusion model, based on the theory of Savenije (2005), was tested in three Malaysian estuaries (Bernam, Selangor and Muar), because it is simple and requires minimal data. Site surveys were conducted in these estuaries during the dry season (June-August) at spring tide using the moving-boat technique. Data on cross-sections, water levels and salinity were collected and then analysed with the salt intrusion model. This paper demonstrates a good fit between the simulated and observed salinity distribution for all three estuaries. Additionally, the calibrated Van der Burgh coefficient K, dispersion coefficient D0, and salt intrusion length L for the estuaries displayed reasonable correlations with those calculated from the predictive equations. This indicates that not only the salt intrusion model but also the predictive equations are valid for the Malaysian case studies. Furthermore, the results describe the current state of the estuaries, with which the Malaysian water authority can make decisions on limiting water abstraction or dredging. Keywords: salt intrusion, Malaysian estuaries, discharge, predictive model, dispersion

  16. Shelf-Life Prediction of Extra Virgin Olive Oils Using an Empirical Model Based on Standard Quality Tests

    Directory of Open Access Journals (Sweden)

    Claudia Guillaume

    2016-01-01

    Extra virgin olive oil shelf-life could be defined as the length of time under normal storage conditions within which no off-flavours or defects develop and quality parameters such as peroxide value and specific absorbance remain within the accepted limits for this commercial category. Prediction of shelf-life is a desirable goal in the food industry. Even though shelf-life should be one of the most important quality markers for extra virgin olive oil, it is not recognised as a legal parameter in most regulations and standards around the world. The empirical formula evaluated in the present study is based on common quality tests with known and predictable result changes over time, each influenced by aspects of extra virgin olive oil that meaningfully affect its shelf-life. The basic quality tests considered in the formula are Rancimat® or induction time (IND); 1,2-diacylglycerols (DAGs); pyropheophytin a (PPP); and free fatty acids (FFA). This paper reports research into the actual shelf-life of commercially packaged extra virgin olive oils versus the shelf-life predicted by analysing the expected deterioration curves for the basic quality tests detailed above. Based on the proposed model, shelf-life is predicted by choosing the lowest predicted shelf-life from any of those tests.
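    The proposed rule (predict a shelf-life from each marker's deterioration curve, then take the lowest) can be sketched as follows. The linear deterioration curves and every numeric value below are illustrative assumptions, not the study's fitted curves or regulatory limits.

```python
def months_to_limit(current, limit, monthly_change):
    """Months until a quality marker crosses its accepted limit,
    assuming a linear deterioration curve (a simplification)."""
    return (limit - current) / monthly_change

# Illustrative markers, limits and drift rates -- not the study's values.
markers = {
    "induction_time_h": months_to_limit(current=14.0, limit=6.0, monthly_change=-0.4),
    "DAGs_percent":     months_to_limit(current=90.0, limit=35.0, monthly_change=-2.5),
    "PPP_percent":      months_to_limit(current=5.0, limit=17.0, monthly_change=0.8),
}
# the oil's shelf-life is governed by whichever marker fails first
shelf_life = min(markers.values())
print({k: round(v, 1) for k, v in markers.items()}, round(shelf_life, 1))
```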

  17. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Fingerprint changes associated with hand dermatitis are a significant problem and affect fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis was conducted. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and a model validation group. The predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of one major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts that verification will almost always fail, while the presence of both minor criteria or of one minor criterion predicts a high or low risk of verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected numbers (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting the risk of fingerprint verification failure in patients with hand dermatitis. © 2014 The International Society of Dermatology.
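    The derived decision rule is simple enough to state directly in code. This sketch encodes only the major/minor criteria described above; the 25% dystrophy threshold comes from the record, while the function name and boolean inputs are illustrative.

```python
def verification_risk(dystrophy_area_pct, long_horizontal_lines, long_vertical_lines):
    """Risk of fingerprint verification failure from the derived criteria:
    one major criterion (dystrophy area >= 25%) and two minor criteria."""
    if dystrophy_area_pct >= 25:
        return "almost always fails"
    minors = sum([long_horizontal_lines, long_vertical_lines])
    if minors == 2:
        return "high risk"
    if minors == 1:
        return "low risk"
    return "almost always passes"

print(verification_risk(30, False, False))  # major criterion met
print(verification_risk(10, True, True))    # both minor criteria
print(verification_risk(10, True, False))   # one minor criterion
print(verification_risk(5, False, False))   # no criteria met
```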

  18. Prospective detection of large prediction errors: a hypothesis testing approach

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Real-time motion management is important in radiotherapy. In addition to effective monitoring schemes, prediction is required to compensate for system latency, so that treatment can be synchronized with tumor motion. However, it is difficult to predict tumor motion at all times, and it is critical to determine when large prediction errors may occur. Such information can be used to pause the treatment beam or adjust monitoring/prediction schemes. In this study, we propose a hypothesis testing approach for detecting instants corresponding to potentially large prediction errors in real time. We treat the future tumor location as a random variable, and obtain its empirical probability distribution with the kernel density estimation-based method. Under the null hypothesis, the model probability is assumed to be a concentrated Gaussian centered at the prediction output. Under the alternative hypothesis, the model distribution is assumed to be non-informative uniform, which reflects the situation that the future position cannot be inferred reliably. We derive the likelihood ratio test (LRT) for this hypothesis testing problem and show that with the method of moments for estimating the null hypothesis Gaussian parameters, the LRT reduces to a simple test on the empirical variance of the predictive random variable. This conforms to the intuition to expect a (potentially) large prediction error when the estimate is associated with high uncertainty, and to expect an accurate prediction when the uncertainty level is low. We tested the proposed method on patient-derived respiratory traces. The 'ground-truth' prediction error was evaluated by comparing the prediction values with retrospective observations, and the large prediction regions were subsequently delineated by thresholding the prediction errors. The receiver operating characteristic curve was used to describe the performance of the proposed hypothesis testing method. Clinical implication was represented by miss
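    The key result above (the LRT reduces to a threshold test on the empirical variance of the predictive random variable) can be sketched as follows, with illustrative samples standing in for the kernel-density-based predictive distribution.

```python
import statistics

def flag_large_error(predictive_samples, variance_threshold):
    """Flag a potential large prediction error when the empirical variance of
    the predictive distribution exceeds a threshold -- the reduced LRT."""
    return statistics.pvariance(predictive_samples) > variance_threshold

confident = [1.00, 1.02, 0.99, 1.01, 1.00]   # tight predictive spread: trust it
uncertain = [0.2, 1.8, 0.6, 1.5, 0.4]        # diffuse spread: pause the beam
print(flag_large_error(confident, 0.05), flag_large_error(uncertain, 0.05))
```

This matches the intuition in the abstract: high predictive uncertainty signals a potentially large error, low uncertainty an accurate prediction.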

  19. Prediction of safety critical software operational reliability from test reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1999-01-01

    Predicting the reliability of safety critical software has been a critical issue in the nuclear engineering area. For many years, research has focused on the quantification of software reliability, and many models have been developed to quantify it. Most software reliability models estimate reliability from the failure data collected during testing, assuming that the test environment well represents the operational profile. The user's interest, however, is in the operational reliability rather than the test reliability. Experience shows that the operational reliability is higher than the test reliability. With the assumption that the difference in reliability results from the change of environment from testing to operation, testing environment factors, comprising an aging factor and a coverage factor, are developed in this paper and used to predict the ultimate operational reliability from the failure data of the testing phase. This is done by incorporating test environments applied beyond the operational profile into the testing environment factors. The application results show that the proposed method can estimate the operational reliability accurately. (Author). 14 refs., 1 tab., 1 fig
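    The prediction step can be sketched as a simple scaling of the test-phase failure rate by the testing environment factors. The multiplicative combination and the numeric factor values below are illustrative assumptions for exposition; the paper derives its own definitions of the aging and coverage factors.

```python
def operational_failure_rate(test_failure_rate, aging_factor, coverage_factor):
    """Scale the test-phase failure rate by environment factors to estimate the
    operational failure rate. The multiplicative form is an assumption made for
    illustration, reflecting that operation is gentler than testing."""
    return test_failure_rate * aging_factor * coverage_factor

# hypothetical test phase: 4 failures observed in 10,000 h of testing
lam_test = 4 / 10_000
lam_op = operational_failure_rate(lam_test, aging_factor=0.5, coverage_factor=0.6)
print(lam_test, lam_op)  # operational rate lower, i.e. reliability higher
```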

  20. The organization of irrational beliefs in posttraumatic stress symptomology: testing the predictions of REBT theory using structural equation modelling.

    Science.gov (United States)

    Hyland, Philip; Shevlin, Mark; Adamson, Gary; Boduszek, Daniel

    2014-01-01

    This study directly tests a central prediction of rational emotive behaviour therapy (REBT) that has received little empirical attention, regarding the role of core and intermediate beliefs in the development of posttraumatic stress symptoms. A theoretically consistent REBT model of posttraumatic stress disorder (PTSD) was examined using structural equation modelling techniques among a sample of 313 trauma-exposed military and law enforcement personnel. The REBT model of PTSD provided a good fit to the data (χ² = 599.173, df = 356), with demandingness beliefs predicting posttraumatic stress symptoms indirectly through intermediate beliefs, including depreciation beliefs. Results were consistent with the predictions of REBT theory and provide strong empirical support that the cognitive variables described by REBT theory are critical cognitive constructs in the prediction of PTSD symptomology. © 2013 Wiley Periodicals, Inc.

  1. Statistical tests for equal predictive ability across multiple forecasting methods

    DEFF Research Database (Denmark)

    Borup, Daniel; Thyrsgaard, Martin

    We develop a multivariate generalization of the Giacomini-White tests for equal conditional predictive ability. The tests are applicable to a mixture of nested and non-nested models, incorporate estimation uncertainty explicitly, and allow for misspecification of the forecasting model as well as ...

  2. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD = 1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R² = 0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  3. Profile control simulations and experiments on TCV: a controller test environment and results using a model-based predictive controller

    Science.gov (United States)

    Maljaars, E.; Felici, F.; Blanken, T. C.; Galperti, C.; Sauter, O.; de Baar, M. R.; Carpanese, F.; Goodman, T. P.; Kim, D.; Kim, S. H.; Kong, M.; Mavkov, B.; Merle, A.; Moret, J. M.; Nouailletas, R.; Scheffer, M.; Teplukhina, A. A.; Vu, N. M. T.; The EUROfusion MST1-team; The TCV-team

    2017-12-01

    The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety factor profile (q-profile) and kinetic plasma parameters such as the plasma beta. This demands that reliable profile control routines be established in presently operational tokamaks. We present a model predictive profile controller that controls the q-profile and plasma beta using power requests to two clusters of gyrotrons and the plasma current request. The performance of the controller is analyzed in both simulations and TCV L-mode discharges, where successful tracking of the estimated inverse q-profile as well as plasma beta is demonstrated under uncertain plasma conditions and in the presence of disturbances. The controller exploits knowledge of the time-varying actuator limits in the actuator input calculation itself, such that fast transitions between targets are achieved without overshoot. A software environment is employed to prepare and test this and three other profile controllers in parallel in simulations and experiments on TCV. This set of tools includes the rapid plasma transport simulator RAPTOR and various algorithms to reconstruct the plasma equilibrium and plasma profiles by merging the available measurements with model-based predictions. In this work the estimated q-profile is based solely on RAPTOR model predictions due to the absence of internal current density measurements in TCV. These results encourage the further exploitation of model predictive profile control in experiments on TCV and other (future) tokamaks.

  4. Pretest Predictions for Ventilation Tests

    International Nuclear Information System (INIS)

    Y. Sun; H. Yang; H.N. Kalia

    2007-01-01

    The objective of this calculation is to predict the temperatures of the ventilating air, waste package surface, concrete pipe walls, and insulation that will develop during the ventilation tests under various test conditions. The results will be used as input to the following three areas: (1) decisions regarding testing set-up and performance; (2) assessing how best to scale the measured test phenomena; (3) validating the numerical approach for modeling continuous ventilation. The scope of the calculation is to identify the physical mechanisms and parameters related to thermal response in the ventilation tests, and to develop and describe numerical methods that can be used to calculate the effects of continuous ventilation. Sensitivity studies assessing the impact of variations in linear power densities (linear heat loads) and ventilation air flow rates are included. The calculation is limited to thermal effects only

  5. Results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.

    1998-05-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high-pressure pneumatic test of the SCV model and its results, which are compared with pretest predictions performed by the sponsoring organizations and by others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified, and plans for posttest analysis are discussed

  6. Test cell data-based predictive modelling to determine HVAC energy consumption for three façade solutions in Madrid

    Directory of Open Access Journals (Sweden)

    J. Guerrero-Rubio

    2018-01-01

    This study aims to narrow the gap between predicted and actual energy performance in buildings. Predictive models were established that relate the electricity consumed by HVAC systems to maintain certain indoor environmental conditions in variable weather to the type of façade. The models were developed using data gathered from test cells with adiabatic envelopes on all surfaces but the façade to be tested. Three façade types were studied. The first, the standard solution, consisted of a double-wythe brick wall with an intermediate air space, the configuration most commonly deployed in multi-family dwellings built in Spain between 1940 and 1980 (prior to the enactment of the first building codes limiting overall energy demand in buildings). The other two were retrofits frequently found in such buildings: ventilated façades and ETICS (external thermal insulation composite systems). Two predictive models were designed for each type of façade, one for summer and one for winter. The linear regression equations and the main statistical parameters are reported.
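    The per-façade, per-season models are linear regressions, which can be sketched with a one-predictor ordinary least squares fit. The predictor choice (indoor-outdoor temperature gap) and all data values below are illustrative assumptions, not the test-cell measurements.

```python
def ols(x, y):
    """Ordinary least squares fit of y = a + b*x for a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# hypothetical winter test-cell data for one façade: indoor-outdoor
# temperature gap (K) vs daily HVAC electricity consumption (kWh)
gap = [2, 4, 6, 8, 10, 12]
kwh = [1.1, 1.9, 3.2, 3.9, 5.1, 6.0]
a, b = ols(gap, kwh)
print(round(a, 2), round(b, 2))  # intercept and kWh per K of temperature gap
```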

  7. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  8. Testing and Life Prediction for Composite Rotor Hub Flexbeams

    Science.gov (United States)

    Murri, Gretchen B.

    2004-01-01

    A summary of several studies of delamination in tapered composite laminates with internal ply-drops is presented. Initial studies used 2D FE models to calculate interlaminar stresses at the ply-ending locations in linear tapered laminates under tension loading. Strain energy release rates for delamination in these laminates indicated that delamination would likely start at the juncture of the tapered and thin regions and grow unstably in both directions. Glass/epoxy and graphite/epoxy linear tapered laminates tested under axial tension delaminated as predicted. Nonlinear tapered specimens were cut from a full-size helicopter rotor hub and tested under combined constant axial tension and cyclic transverse bending loading to simulate the loading experienced by a rotor hub flexbeam in flight. For all the tested specimens, delamination began at the tip of the outermost dropped ply group and grew first toward the tapered region. A 2D FE model was created that duplicated the test flexbeam layup, geometry, and loading. Surface strains calculated by the model agreed very closely with the measured surface strains in the specimens. The delamination patterns observed in the tests were simulated in the model by releasing pairs of multi-point constraints (MPCs) along those interfaces. Strain energy release rates associated with the delamination growth were calculated for several configurations using two different FE analysis codes, whose calculations agreed very closely. The strain energy release rate results were used with material characterization data to predict fatigue delamination onset lives for nonlinear tapered flexbeams with two different ply-dropping schemes. The predicted curves agreed well with the test data for each case studied.

  9. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) to sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of the sweat loss predicted by two existing thermoregulation models: the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, totalling 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%, respectively. Of the MMA predictions, 70% fell within the range of the mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, whereas differences between observations and SCENARIO or HSDA predictions were significant for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will evaluate MMA using additional physiological data to expand the scope of populations and conditions.
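    The MMA itself is simply the mean of the two models' predictions, scored here with RMSD. The observation and prediction values below are illustrative, not the study's datasets.

```python
import math

def mma(pred_a, pred_b):
    """Multi-model average: the mean of two models' sweat-loss predictions."""
    return [(a + b) / 2 for a, b in zip(pred_a, pred_b)]

def rmsd(pred, obs):
    """Root mean square deviation between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

observed = [520, 610, 745, 830]   # g/h, illustrative values
scenario = [560, 660, 800, 900]   # rational model: over-predicts here
hsda     = [480, 570, 700, 780]   # empirical model: under-predicts here
print(round(rmsd(scenario, observed), 1),
      round(rmsd(hsda, observed), 1),
      round(rmsd(mma(scenario, hsda), observed), 1))
```

In this constructed example the two models err in opposite directions, so averaging cancels much of the error, which is the mechanism by which a multi-model average can improve on either single model.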

  10. Use of Kazakh nuclear explosions for testing dilatancy diffusion model of earthquake prediction

    International Nuclear Information System (INIS)

    Srivastava, H.N.

    1979-01-01

    P wave travel time anomalies from Kazakh explosions during the years 1965-1972 were studied with reference to the Jeffreys-Bullen (1952) and Herrin (1968) travel time tables and discussed using the F ratio test at seven stations in Himachal Pradesh. For these events, the temporal and spatial variations of travel time residuals were examined from the point of view of the long-term changes in velocity known to precede earthquakes, and of local geology. The results show a preference for the Herrin travel time tables at these epicentral distances from the Kazakh explosions. The F ratio test indicated that the variation between the sample means of different stations in the network is greater than can be attributed to sampling error. Although the spatial variation of the mean residuals (1965-1972) could generally be explained by local geology, the temporal variations of such residuals from the Kazakh explosions offer limited application in testing the dilatancy model of earthquake prediction. (auth.)
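    The F ratio test applied to station residuals is a one-way analysis of variance: it compares the variation between station means with the variation within each station. Sketched below with hypothetical residuals, not the Himachal Pradesh data.

```python
def f_ratio(groups):
    """One-way ANOVA F statistic: between-station variance of travel-time
    residuals over within-station variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# travel-time residuals (s) at three hypothetical stations
stations = [
    [0.10, 0.20, 0.15, 0.12],
    [0.50, 0.55, 0.48, 0.52],
    [0.30, 0.28, 0.35, 0.31],
]
# a large F means station means differ more than sampling error explains
print(round(f_ratio(stations), 1))
```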

  11. Model-Based Prediction of Pulsed Eddy Current Testing Signals from Stratified Conductive Structures

    International Nuclear Information System (INIS)

    Zhang, Jian Hai; Song, Sung Jin; Kim, Woong Ji; Kim, Hak Joon; Chung, Jong Duk

    2011-01-01

    The excitation and propagation of the electromagnetic field of a cylindrical coil above an arbitrary number of conductive plates in pulsed eddy current testing (PECT) are complex problems due to the complicated physical properties involved. In this paper, an analytical model of PECT is established by Fourier series based on the truncated region eigenfunction expansion (TREE) method for a single air-cored coil above stratified conductive structures (SCS) to investigate their integrity. From the presented expression of the PECT field, the coil impedance due to the SCS is calculated analytically using the generalized reflection coefficient in series form. Multilayered structures manufactured from non-ferromagnetic (STS301L) and ferromagnetic (SS400) materials are then investigated with the developed PECT model. The good predictive accuracy of the analytical model not only contributes to the development of an efficient solver but can also be applied to optimize the experimental setup conditions in PECT

  12. The effects of model and data complexity on predictions from species distributions models

    DEFF Research Database (Denmark)

    García-Callejas, David; Bastos, Miguel

    2016-01-01

    How complex does a model need to be to provide useful predictions? This is a matter of continuous debate across the environmental sciences. In the species distribution modelling literature, studies have demonstrated that more complex models tend to provide better fits. However, studies have also shown that predictive performance does not always increase with complexity. Testing of species distribution models is challenging because independent data for testing are often lacking, but a more general problem is that model complexity has never been formally described in such studies. Here, we systematically...

  13. Predictive performance for population models using stochastic differential equations applied on data from an oral glucose tolerance test

    DEFF Research Database (Denmark)

    Møller, Jonas Bech; Overgaard, R.V.; Madsen, Henrik

    2010-01-01

    Several articles have investigated stochastic differential equations (SDEs) in PK/PD models, but few have quantitatively investigated the benefits to predictive performance of models based on real data. Estimation of first-phase insulin secretion, which reflects beta-cell function, using models of ... obtained from the glucose tolerance tests. Since the estimation time of the extended models was not heavily increased compared to that of the basic models, the applied method is concluded to have high relevance not only in theory but also in practice.

  14. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
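
The AUC comparison described in this record can be computed without any modelling library via the rank (Mann-Whitney) statistic: the probability that a random positive case outranks a random negative one. The risk scores and fall outcomes below are invented for illustration:

```python
def auc(scores, labels):
    """Area under the ROC curve as the rank-sum (Mann-Whitney) statistic:
    ties between a positive and a negative count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical risk scores from a parsimonious model (e.g. age, fall
# history, self-reported balance problems) vs. observed 1-year outcomes
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.2]
fell   = [1,   1,   0,   1,   0,   1,    0,   0]
model_auc = auc(scores, fell)
```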

  15. A Combined Hydrological and Hydraulic Model for Flood Prediction in Vietnam Applied to the Huong River Basin as a Test Case Study

    Directory of Open Access Journals (Sweden)

    Dang Thanh Mai

    2017-11-01

    A combined hydrological and hydraulic model is presented for flood prediction in Vietnam. This model is applied to the Huong river basin as a test case study. Observed flood flows and water surface levels of the 2002–2005 flood seasons are used for model calibration, and those of the 2006–2007 flood seasons are used for validation of the model. The physically based distributed hydrologic model WetSpa is used for predicting the generation and propagation of flood flows in the mountainous upper sub-basins, and proves to predict flood flows accurately. The Hydrologic Engineering Center River Analysis System (HEC-RAS) hydraulic model is applied to simulate flood flows and inundation levels in the downstream floodplain, and likewise proves to predict water levels accurately. The predicted water profiles are used for mapping of inundations in the floodplain. The model may be useful in developing flood forecasting and early warning systems to mitigate losses due to flooding in Vietnam.

  16. Statistical model based gender prediction for targeted NGS clinical panels

    Directory of Open Access Journals (Sweden)

    Palani Kannan Kandavel

    2017-12-01

    The reference test dataset is used to test the model. The sensitivity of gender prediction has been increased compared with the current “genotype composition in ChrX” based approach. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score towards the respective gender indicates higher quality of the sequenced data.

  17. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks model (first and second versions), the Stewart model (first and second versions) and the Hall–Butcher model. Three sets of ...

  18. A new crack growth model for life prediction under random loading

    International Nuclear Information System (INIS)

    Lee, Ouk Sub; Chen, Zhi Wei

    1999-01-01

    The load interaction effect in variable amplitude fatigue tests is a very important issue for correctly predicting fatigue life. Some prediction methods for retardation are reviewed and their problems discussed. The so-called 'under-load' effect is also important for a prediction model to work properly under a random load spectrum. A new model, simple in form, that combines overload plastic zone and residual stress considerations with Elber's closure concept is proposed to fully account for the load-interaction effects, including both overload and underload effects. Application of this new model to complex load sequences is explored here. Simulations of tests show the improvement of the new model over other models. The best prediction (most closely resembling the test curve) is given by the newly proposed Chen-Lee model.
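
Wheeler-type retardation, one of the classical approaches to the overload effect reviewed in such work (not the proposed Chen-Lee model itself), can be sketched cycle by cycle: growth is scaled down while the current plastic zone lies inside the overload plastic zone. The Paris constants, yield stress, and load sequence below are hypothetical:

```python
import math

# Paris-law constants (hypothetical; a in metres, dK in MPa*sqrt(m))
C_PARIS, M_PARIS = 1e-11, 3.0
GAMMA = 1.5                      # Wheeler shaping exponent (assumed)
ALPHA = 1.0 / (2.0 * math.pi)    # plane-stress plastic-zone coefficient

def plastic_zone(dK, yield_stress):
    return ALPHA * (dK / yield_stress) ** 2

def grow(a0, dK_seq, yield_stress=400.0):
    """Cycle-by-cycle Paris-law growth with a Wheeler retardation factor
    phi = (rp / (boundary - a))**GAMMA inside an overload plastic zone."""
    a, boundary, history = a0, a0, []
    for dK in dK_seq:
        rp = plastic_zone(dK, yield_stress)
        if a + rp >= boundary:            # outside any prior overload zone
            phi, boundary = 1.0, a + rp
        else:                             # retarded growth
            phi = (rp / (boundary - a)) ** GAMMA
        a += phi * C_PARIS * dK ** M_PARIS
        history.append(a)
    return history

# constant-amplitude loading with a single overload at cycle 100
loads = [20.0] * 200
loads[100] = 40.0
h = grow(0.005, loads)
```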

  19. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    We propose a weather prediction model in this article based on an artificial neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting models that occupy large computation resources, are time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  20. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually...... and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.......A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision...

  1. Prediction models and control algorithms for predictive applications of setback temperature in cooling systems

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Yoon, Younju; Jeon, Young-Hoon; Kim, Sooyoung

    2017-01-01

    Highlights: • An initial ANN model was developed for predicting the time to reach the setback temperature. • The initial model was optimized to produce accurate output. • The optimized model proved its prediction accuracy. • ANN-based algorithms were developed and their performance tested. • The ANN-based algorithms presented superior thermal comfort or energy efficiency. - Abstract: In this study, a temperature control algorithm was developed to apply a setback temperature predictively for the cooling system of a residential building during periods occupied by residents. An artificial neural network (ANN) model was developed to determine the time required to raise the current indoor temperature to the setback temperature. This study involved three phases: development of the initial ANN-based prediction model, optimization and testing of the initial model, and development and testing of three control algorithms. The development and performance testing of the model and algorithms were conducted using TRNSYS and MATLAB. Through the development and optimization process, the final ANN model employed the indoor temperature and the difference between the current and target setback temperatures as its two input neurons. The optimal number of hidden layers, number of neurons, learning rate, and momentum were determined to be 4, 9, 0.6, and 0.9, respectively. Tangent-sigmoid and pure-linear transfer functions were used in the hidden and output neurons, respectively. The ANN model used 100 training data sets with a sliding-window method for data management, and the Levenberg-Marquardt method was employed for model training. The optimized model had a root mean square error of 0.9097 when compared with the simulated results. Employing the ANN model, the ANN-based algorithms maintained indoor temperatures better within target ranges. Compared to the conventional algorithm, the ANN-based algorithms reduced the duration of time in which the indoor temperature
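
The final architecture this abstract describes (two inputs, four hidden layers of nine tangent-sigmoid neurons, one pure-linear output) can be sketched as a plain forward pass. Random weights stand in for the Levenberg-Marquardt training, which is not reproduced here:

```python
import math
import random

random.seed(0)  # reproducible stand-in weights

def layer(n_in, n_out):
    # random weights (plus one bias per neuron) stand in for the
    # Levenberg-Marquardt training performed in the paper
    return [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
            for _ in range(n_out)]

def dense(x, W, act):
    # row layout: n_in weights followed by one bias term
    return [act(row[-1] + sum(w * xi for w, xi in zip(row, x))) for row in W]

# architecture from the abstract: 2 inputs, 4 hidden layers of 9
# tangent-sigmoid neurons, 1 pure-linear output (time to setback temp)
sizes = [2, 9, 9, 9, 9, 1]
net = [layer(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)]

def predict(indoor_temp, temp_diff):
    x = [indoor_temp, temp_diff]
    for W in net[:-1]:
        x = dense(x, W, math.tanh)      # tangent-sigmoid hidden layers
    return dense(x, net[-1], lambda s: s)[0]  # pure-linear output

t = predict(26.0, 2.0)
```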

  2. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with lower-than-desired prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, multivariate nonlinear regression (MNLR), artificial neural network (ANN), and Markov chain (MC), are then tested and compared using a set of actual pavement survey data taken on an interstate highway with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the way forward for developing performance prediction models is to combine the advantages and avoid the disadvantages of the different models to obtain better accuracy.
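
A Markov chain pavement model of the kind compared here propagates a condition-state distribution through a transition matrix year by year. The four-state matrix below is hypothetical, of the sort estimated in practice from repeated visual surveys:

```python
def step(dist, P):
    # one-year update: new_j = sum_i dist_i * P[i][j]
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# hypothetical 4-state faulting condition (0 = good ... 3 = severe)
# transition matrix; each row sums to 1
P = [
    [0.85, 0.12, 0.03, 0.00],
    [0.00, 0.80, 0.15, 0.05],
    [0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 1.00],   # severe is absorbing until repair
]
dist = [1.0, 0.0, 0.0, 0.0]     # a newly built section starts in state 0
for year in range(10):
    dist = step(dist, P)
share_severe = dist[3]          # predicted share in the worst state at year 10
```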

  3. Springback Prediction on Slit-Ring Test

    International Nuclear Information System (INIS)

    Chen Xiaoming; Shi, Ming F.; Ren Feng; Xia, Z. Cedric

    2005-01-01

    Advanced high strength steels (AHSS) are increasingly being used in the automotive industry to reduce vehicle weight while improving vehicle crash performance. One of the concerns in manufacturing is springback control after stamping. Although computer simulation technologies have been successfully applied to predict stamping formability, they still face major challenges in springback prediction, particularly for AHSS. Springback analysis is very complicated and involves large-deformation problems in the forming stage and a mechanical multiplying effect during the elastic recovery after releasing a part from the die. The predictions are therefore very sensitive to the simulation parameters used, and it is critical in springback simulation to choose an appropriate material model, element formulation and contact algorithm. In this study, a springback benchmark test, the slit-ring cup, is used in springback simulation with commercially available finite element analysis (FEA) software, LS-DYNA. The sensitivity of seven simulation variables on springback predictions was investigated, and a set of parameters with stable simulation results was identified. Final simulations using the selected set of parameters were conducted on six different materials, including two AHSS steels, two conventional high strength steels, one mild steel and an aluminum alloy. The simulation results are compared with experimental measurements for all six materials and favorable agreement is achieved. Simulation errors compared against test results fall within 10%.

  4. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  5. Prediction of material damage in orthotropic metals for virtual structural testing

    OpenAIRE

    Ravindran, S.

    2010-01-01

    Models based on the Continuum Damage Mechanics principle are increasingly used for predicting the initiation and growth of damage in materials. The growing reliance on 3-D finite element (FE) virtual structural testing demands implementation and validation of robust material models that can predict the material behaviour accurately. The use of these models within numerical analyses requires suitable material data. EU aerospace companies along with Cranfield University and other similar resear...

  6. Routine blood tests to predict liver fibrosis in chronic hepatitis C.

    Science.gov (United States)

    Hsieh, Yung-Yu; Tung, Shui-Yi; Lee, Kamfai; Wu, Cheng-Shyong; Wei, Kuo-Liang; Shen, Chien-Heng; Chang, Te-Sheng; Lin, Yi-Hsiung

    2012-02-28

    To verify the usefulness of FibroQ for predicting fibrosis in patients with chronic hepatitis C, compared with other noninvasive tests. This retrospective cohort study included 237 consecutive patients with chronic hepatitis C who had undergone percutaneous liver biopsy before treatment. FibroQ, aspartate aminotransferase (AST)/alanine aminotransferase ratio (AAR), AST to platelet ratio index, cirrhosis discriminant score, age-platelet index (API), Pohl score, FIB-4 index, and Lok's model were calculated and compared. FibroQ, FIB-4, AAR, API and Lok's model results increased significantly as fibrosis advanced (analysis of variance test: P < 0.001). FibroQ had better accuracy for predicting significant fibrosis score in chronic hepatitis C compared with other noninvasive tests. FibroQ is a simple and useful test for predicting significant fibrosis in patients with chronic hepatitis C.
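
Two of the indices compared in this record have widely published closed forms: FIB-4 = (age × AST) / (platelets × √ALT) and AAR = AST/ALT (FibroQ's own formula is omitted here). A sketch with a hypothetical patient:

```python
import math

def fib4(age, ast, alt, platelets):
    # FIB-4 index: age in years, AST and ALT in U/L, platelets in 10^9/L
    return (age * ast) / (platelets * math.sqrt(alt))

def aar(ast, alt):
    # AST/ALT ratio
    return ast / alt

# a hypothetical patient (illustrative values only)
score = fib4(age=50, ast=40, alt=40, platelets=150)
ratio = aar(ast=40, alt=40)
```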

  7. Linking removal targets to the ecological effects of invaders: a predictive model and field test.

    Science.gov (United States)

    Green, Stephanie J; Dulvy, Nicholas K; Brooks, Annabelle M L; Akins, John L; Cooper, Andrew B; Miller, Skylar; Côté, Isabelle M

    Species invasions have a range of negative effects on recipient ecosystems, and many occur at a scale and magnitude that preclude complete eradication. When complete extirpation is unlikely with available management resources, an effective strategy may be to suppress invasive populations below levels predicted to cause undesirable ecological change. We illustrated this approach by developing and testing targets for the control of invasive Indo-Pacific lionfish (Pterois volitans and P. miles) on Western Atlantic coral reefs. We first developed a size-structured simulation model of predation by lionfish on native fish communities, which we used to predict threshold densities of lionfish beyond which native fish biomass should decline. We then tested our predictions by experimentally manipulating lionfish densities above or below reef-specific thresholds, and monitoring the consequences for native fish populations on 24 Bahamian patch reefs over 18 months. We found that reducing lionfish below predicted threshold densities effectively protected native fish community biomass from predation-induced declines. Reductions in density of 25–92%, depending on the reef, were required to suppress lionfish below levels predicted to overconsume prey. On reefs where lionfish were kept below threshold densities, native prey fish biomass increased by 50–70%. Gains in small fishes (<15 cm total length), including ecologically important grazers and economically important fisheries species, had increased by 10–65% by the end of the experiment. Crucially, similar gains in prey fish biomass were realized on reefs subjected to partial and full removal of lionfish, but partial removals took 30% less time to implement. By contrast, the biomass of small native fishes declined by >50% on all reefs with lionfish densities exceeding reef-specific thresholds. Large inter-reef variation in the biomass of prey fishes at the outset of the study, which influences the threshold density of lionfish

  8. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used in two ways: to design the so-called fundamental model of the plant and to capture the uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control, an instantaneous linearization is applied, which makes it possible to pose the optimization problem as constrained quadratic programming. Stability of the proposed control system is also investigated by showing that the cost function is monotonically decreasing with respect to time. The derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes.
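
The instantaneous-linearization idea can be illustrated in scalar form: linearize the plant at the current operating point, then optimize the input over a receding horizon. A coarse grid search stands in for the paper's constrained quadratic program, and the plant below is a toy first-order system, not the pneumatic servomechanism:

```python
import math

def linearize(f, x, u, eps=1e-6):
    """Instantaneous linearization of x+ = f(x, u) around the current
    operating point: returns a, b, c with f(x', u') ~ a*x' + b*u' + c."""
    a = (f(x + eps, u) - f(x - eps, u)) / (2 * eps)
    b = (f(x, u + eps) - f(x, u - eps)) / (2 * eps)
    c = f(x, u) - a * x - b * u
    return a, b, c

def mpc_step(f, x, u_prev, ref, horizon=10, lam=0.01):
    """Choose a constant input over the horizon minimizing a quadratic
    tracking cost; a coarse grid search stands in for the QP solver."""
    a, b, c = linearize(f, x, u_prev)
    best_u, best_J = u_prev, float("inf")
    for k in range(-200, 201):          # candidate inputs in [-2, 2]
        u = k * 0.01
        J, xp = 0.0, x
        for _ in range(horizon):
            xp = a * xp + b * u + c     # linearized prediction
            J += (xp - ref) ** 2 + lam * u ** 2
        if J < best_J:
            best_J, best_u = J, u
    return best_u

# toy nonlinear plant (hypothetical): stable dynamics, saturating actuator
def plant(x, u):
    return 0.9 * x + 0.5 * math.tanh(u)

x, u = 0.0, 0.0
for _ in range(30):
    u = mpc_step(plant, x, u, ref=1.0)
    x = plant(x, u)
```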

  9. Profile control simulations and experiments on TCV : A controller test environment and results using a model-based predictive controller

    NARCIS (Netherlands)

    Maljaars, E.; Felici, F.; Blanken, T.C.; Galperti, C.; Sauter, O.; de Baar, M.R.; Carpanese, F.; Goodman, T.P.; Kim, D.; Kim, S.H.; Kong, M.G.; Mavkov, B.; Merle, A.; Moret, J.M.; Nouailletas, R.; Scheffer, M.; Teplukhina, A.A.; Vu, N.M.T.

    2017-01-01

    The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety

  10. Profile control simulations and experiments on TCV: a controller test environment and results using a model-based predictive controller

    NARCIS (Netherlands)

    Maljaars, B.; Felici, F.; Blanken, T. C.; Galperti, C.; Sauter, O.; de Baar, M. R.; Carpanese, F.; Goodman, T. P.; Kim, D.; Kim, S. H.; Kong, M.; Mavkov, B.; Merle, A.; Moret, J.; Nouailletas, R.; Scheffer, M.; Teplukhina, A.; Vu, T.

    2017-01-01

    The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety

  11. AGR-5/6/7 Irradiation Test Predictions using PARFUME

    Energy Technology Data Exchange (ETDEWEB)

    Skerjanc, William F. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-09-14

    PARFUME (PARticle FUel ModEl), a fuel performance modeling code used for high temperature gas-cooled reactors (HTGRs), was used to model the Advanced Gas Reactor (AGR)-5/6/7 irradiation test using predicted physics and thermal hydraulics data. The AGR-5/6/7 test consists of the combined fifth, sixth, and seventh planned irradiations of the AGR Fuel Development and Qualification Program. The AGR-5/6/7 test train is a multi-capsule, instrumented experiment designed for irradiation in the 133.4-mm diameter north east flux trap (NEFT) position of the Advanced Test Reactor (ATR). Each capsule contains compacts filled with uranium oxycarbide (UCO) unaltered fuel particles. This report documents the calculations performed to predict the failure probability of tristructural isotropic (TRISO)-coated fuel particles during the AGR-5/6/7 experiment, as well as the calculated source term from the driver fuel. The calculations model the AGR-5/6/7 irradiation scheduled to occur from October 2017 to April 2021 over a total of 13 ATR cycles, including nine normal cycles and four Power Axial Locator Mechanism (PALM) cycles, for a total of 500–550 effective full power days (EFPD). Under the predicted irradiation conditions and material properties of the AGR-5/6/7 test, zero fuel particle failures were predicted in Capsules 1, 2, and 4. Fuel particle failures were predicted in Capsule 3 due to internal particle pressure; these failures were predicted in the highest-temperature compacts. Capsule 5 fuel particle failures were due to inner pyrolytic carbon (IPyC) cracking causing localized stress concentrations in the SiC layer; this capsule had the highest predicted particle failures due to its lower irradiation temperature. In addition, shrinkage of the buffer and IPyC layers during irradiation resulted in the formation of a buffer-IPyC gap. The two capsules at the ends of the test train, Capsules 1 and 5, experienced the smallest buffer-IPyC gap.
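
A pressure-induced particle failure of the kind predicted for Capsule 3 is often estimated by comparing thin-shell hoop stress in the SiC layer against a Weibull strength law. The geometry and strength parameters below are illustrative stand-ins, not PARFUME's inputs or its full stress solution:

```python
import math

def hoop_stress(p, r, t):
    # thin-shell hoop stress in a spherical layer under internal pressure
    return p * r / (2.0 * t)

def failure_probability(stress, sigma0, m):
    # Weibull weakest-link form commonly used for brittle SiC strength
    return 1.0 - math.exp(-((stress / sigma0) ** m))

# illustrative TRISO numbers: end-of-life gas pressure, SiC mean radius
# and thickness, Weibull characteristic strength and modulus
p_gas = 50e6                    # Pa
r_sic, t_sic = 430e-6, 35e-6    # m
sigma = hoop_stress(p_gas, r_sic, t_sic)
pf = failure_probability(sigma, sigma0=400e6, m=6.0)
```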

  12. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters has so far mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This has resulted in considerable effort for prototype design, construction and testing, and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH.

  13. Development of laboratory acceleration test method for service life prediction of concrete structures

    International Nuclear Information System (INIS)

    Cho, M. S.; Song, Y. C.; Bang, K. S.; Lee, J. S.; Kim, D. K.

    1999-01-01

    Service life prediction of nuclear power plants depends on the operating history of structures, field inspection and testing, and the development of laboratory acceleration tests, their analysis methods and predictive models. In this study, a laboratory acceleration test method for service life prediction of concrete structures and the application of experimental test results are introduced. The study is concerned with the environmental conditions of concrete structures and aims to develop acceleration test methods for the durability factors of concrete structures, e.g., carbonation, sulfate attack, freeze-thaw cycles and shrinkage-expansion.

  14. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    Science.gov (United States)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
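
The two mixing rules differ in which quantity is additive: Dalton sums component pressures at the mixture volume, while Amagat sums component volumes at the mixture pressure. For ideal gases they coincide; with a truncated virial equation of state and rough second virial coefficients for He and SF6 (order-of-magnitude values only, and certainly not CTH's equation of state), the two predictions separate:

```python
import math

R_GAS = 8.314        # J/(mol K)
T, V = 300.0, 0.01   # K, m^3 (10 L)
moles = {"He": 1.0, "SF6": 1.0}
# rough second virial coefficients near 300 K, m^3/mol (illustrative)
B = {"He": 12e-6, "SF6": -275e-6}

def pressure(n, b, temp, vol):
    # truncated virial EOS p*v = R*T*(1 + B/v), molar volume v = vol/n
    v = vol / n
    return (R_GAS * temp / v) * (1 + b / v)

def molar_volume(p, b, temp):
    # gas-branch root of p*v^2 - R*T*v - R*T*B = 0
    rt = R_GAS * temp
    return (rt + math.sqrt(rt * rt + 4 * p * rt * b)) / (2 * p)

# Dalton: every component fills the whole volume; partial pressures add
p_dalton = sum(pressure(moles[g], B[g], T, V) for g in moles)

# Amagat: every component sits at the mixture pressure; volumes add.
# Solve sum_i n_i * v_i(p) = V for p by bisection.
lo, hi = 0.5 * p_dalton, 1.5 * p_dalton
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if sum(moles[g] * molar_volume(mid, B[g], T) for g in moles) > V:
        lo = mid        # too much volume -> raise the pressure
    else:
        hi = mid
p_amagat = 0.5 * (lo + hi)
```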

  15. Modeling for prediction of restrained shrinkage effect in concrete repair

    International Nuclear Information System (INIS)

    Yuan Yingshu; Li Guo; Cai Yue

    2003-01-01

    A general model of autogenous shrinkage caused by chemical reaction (chemical shrinkage) is developed by means of Arrhenius' law and a degree of chemical reaction. Models of tensile creep and relaxation modulus are built on a viscoelastic three-element model. Tests of free shrinkage and tensile creep were carried out to determine coefficients in the models. Two-dimensional FEM analysis based on the models and other constitutive relations can predict the development of tensile strength and cracking. Three groups of patch-repaired beams were designed for analysis and testing. The predictions from the analysis agree with the test results. The cracking mechanism after repair is discussed.
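
A three-element viscoelastic model of the kind mentioned here has the standard-linear-solid relaxation form E(t) = E1 + E2·exp(-t/τ) with τ = η/E2. A sketch with hypothetical parameter values (not the paper's fitted coefficients):

```python
import math

def relaxation_modulus(t, E1, E2, eta):
    # standard linear solid: spring E1 in parallel with a Maxwell arm
    # (spring E2 in series with dashpot eta); relaxation time tau = eta/E2
    tau = eta / E2
    return E1 + E2 * math.exp(-t / tau)

# hypothetical early-age repair-concrete parameters
E1, E2 = 10e9, 20e9            # Pa
eta = 20e9 * 7200.0            # Pa*s, giving tau = 2 h
E0 = relaxation_modulus(0.0, E1, E2, eta)             # instantaneous: E1 + E2
Einf = relaxation_modulus(100 * 3600.0, E1, E2, eta)  # long-time limit ~ E1
```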

  16. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
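
The mental models described in this record amount to a first-order transition matrix, which can be estimated from an experience-sampling sequence by counting adjacent pairs. The sequence below is invented for illustration:

```python
from collections import Counter, defaultdict

def transition_model(sequence):
    # estimate P(next emotion | current emotion) from adjacent pairs
    counts = defaultdict(Counter)
    for cur, nxt in zip(sequence, sequence[1:]):
        counts[cur][nxt] += 1
    return {cur: {e: c / sum(nxts.values()) for e, c in nxts.items()}
            for cur, nxts in counts.items()}

# an invented experience-sampling sequence of emotional states
seq = ["calm", "happy", "happy", "calm", "sad", "calm",
       "happy", "sad", "sad", "calm"]
model = transition_model(seq)
```

Perceivers' rated likelihoods could then be compared against such empirically estimated rows, which is essentially the accuracy test the studies perform.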

  17. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  18. Testing a cognitive model to predict posttraumatic stress disorder following childbirth.

    Science.gov (United States)

    King, Lydia; McKenzie-McHarg, Kirstie; Horsch, Antje

    2017-01-14

    One third of women describe their childbirth as traumatic, and between 0.8 and 6.9% go on to develop posttraumatic stress disorder (PTSD). The cognitive model of PTSD has been shown to be applicable to a range of trauma samples. However, childbirth is qualitatively different from other trauma types, and special consideration needs to be taken when applying the model to this population. Previous studies have investigated some cognitive variables in isolation, but no study has so far looked at all the key processes described in the cognitive model. This study therefore aimed to investigate whether theoretically derived variables of the cognitive model explain unique variance in postnatal PTSD symptoms when key demographic, obstetric and clinical risk factors are controlled for. One hundred and fifty-seven women who were between 1 and 12 months post-partum (M = 6.5 months) completed validated questionnaires assessing PTSD and depressive symptoms, childbirth experience, postnatal social support, trauma memory, peritraumatic processing, negative appraisals, dysfunctional cognitive and behavioural strategies, and obstetric as well as demographic risk factors in an online survey. A PTSD screening questionnaire suggested that 5.7% of the sample might fulfil diagnostic criteria for PTSD. Overall, risk factors alone predicted 43% of the variance in PTSD symptoms and cognitive behavioural factors alone predicted 72.7%. A final model including both risk factors and cognitive behavioural factors explained 73.7% of the variance in PTSD symptoms, 37.1% of which was unique variance predicted by cognitive factors. All variables derived from Ehlers and Clark's cognitive model significantly explained variance in PTSD symptoms following childbirth, even when clinical, demographic and obstetric factors were controlled for. Our findings suggest that the CBT model is applicable and useful as a way of understanding and informing the treatment of PTSD following childbirth.

  19. Perfectionism, weight and shape concerns, and low self-esteem: Testing a model to predict bulimic symptoms.

    Science.gov (United States)

    La Mela, Carmelo; Maglietta, Marzio; Caini, Saverio; Casu, Giuliano P; Lucarelli, Stefano; Mori, Sara; Ruggiero, Giovanni Maria

    2015-12-01

    Previous studies have tested multivariate models of the development of bulimia pathology, documenting that a confluence of perfectionism, body dissatisfaction, and low self-esteem is predictive of disordered eating. However, attempts to replicate these results have yielded controversial findings. The objective of the present study was to test an interactive model of perfectionism, weight and shape concerns, and self-esteem in a sample of patients with eating disorders (ED). One hundred and sixty-seven ED patients received the Structured Clinical Interview for DSM-IV Axis I (SCID-I), and they completed the Eating Disorder Examination Questionnaire (EDE-Q), the Rosenberg Self-Esteem Scale (RSES), and the Multidimensional Perfectionism Scale (MPS-F). Several mediation analysis models were fit to test whether causal effects of concern over weight and shape on the frequency of bulimic episodes were mediated by perfectionism and moderated by low levels of self-esteem. Contrary to our hypotheses, we found no evidence that the causal relationship investigated was mediated by any of the dimensions of perfectionism. As a secondary finding, the perceived criticism and parental expectations dimensions of perfectionism were significantly correlated with the presence of bulimic symptoms. The validity of the interactive model remains controversial, and may be limited by an inadequate conceptualization of the perfectionism construct.
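
    A simple mediation analysis of the kind fit here decomposes the total effect of X (weight/shape concern) on Y (bulimic episodes) into a direct effect and an indirect effect through the mediator M (perfectionism), with c = c' + a*b holding exactly for OLS. A minimal sketch on synthetic data, with the mediator generated independently of X to mirror the paper's null mediation finding:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 167  # matches the study's sample size; all data below are synthetic

# Hypothetical stand-ins: X = weight/shape concern, M = a perfectionism
# dimension, Y = frequency of bulimic episodes. M is independent of X here.
x = rng.normal(size=n)
m = rng.normal(size=n)
y = 0.7 * x + 0.1 * m + rng.normal(size=n)

def slopes(X, y):
    """OLS slope coefficients (intercept excluded from the return value)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

c_total = slopes(x, y)[0]                         # total effect c: X -> Y
a = slopes(x, m)[0]                               # path a: X -> M
b, c_direct = slopes(np.column_stack([m, x]), y)  # path b and direct effect c'
indirect = a * b                                  # mediated effect a*b
```

    With an independent mediator, the estimated indirect effect a*b is close to zero, which is the pattern the study reports.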

  20. Theoretical prediction of pKa in methanol: testing SM8 and SMD models for carboxylic acids, phenols, and amines.

    Science.gov (United States)

    Miguel, Elizabeth L M; Silva, Poliana L; Pliego, Josefredo R

    2014-05-29

    Methanol is a widely used solvent for chemical reactions and has solvation properties similar to those of water. However, the performance of continuum solvation models in this solvent has not been tested yet. In this report, we have investigated the performance of the SM8 and SMD models for pKa prediction of 26 carboxylic acids, 24 phenols, and 23 amines in methanol. The gas phase contribution was included at the X3LYP/TZVPP+diff//X3LYP/DZV+P(d) level. Using the proton exchange reaction with acetic acid, phenol, and ammonia as reference species leads to RMS error in the range of 1.4 to 3.6 pKa units. This finding suggests that the performance of the continuum models for methanol is similar to that found for aqueous solvent. Application of simple empirical correction through a linear equation leads to accurate pKa prediction, with uncertainty less than 0.8 units with the SM8 method. Testing with the less expensive PBE1PBE/6-311+G** method results in a slight improvement in the results.

  1. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  2. Multivariate Models for Prediction of Human Skin Sensitization ...

    Science.gov (United States)

    One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays - the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens™ assay - six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine
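
    One of the two learners used here, logistic regression on combined assay readouts, can be sketched in a few lines. The feature values, labels, and learned weights below are synthetic stand-ins for the assay data (DPRA, h-CLAT, read-across, KeratinoSens), using the abstract's 72-substance training / 24-substance test split; this is plain gradient descent, not the paper's fitted models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-ins for four assay readouts per substance.
n = 96  # 72 training + 24 test substances, as in the abstract
X = rng.normal(size=(n, 4))
true_w = np.array([1.5, 1.0, 2.0, 0.5])
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)  # sensitizer?

X_train, y_train = X[:72], y[:72]
X_test, y_test = X[72:], y[72:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the logistic loss (no regularization, fixed step).
w = np.zeros(4)
b = 0.0
for _ in range(2000):
    p = sigmoid(X_train @ w + b)
    w -= 0.5 * (X_train.T @ (p - y_train)) / len(y_train)
    b -= 0.5 * np.mean(p - y_train)

accuracy = np.mean((sigmoid(X_test @ w + b) > 0.5) == (y_test == 1))
```

    Accuracy on the held-out 24 substances is the figure of merit the abstract reports (92% for the best variable groups).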

  3. Predicting FLDs Using a Multiscale Modeling Scheme

    Science.gov (United States)

    Wu, Z.; Loy, C.; Wang, E.; Hegadekatte, V.

    2017-09-01

    The measurement of a single forming limit diagram (FLD) requires significant resources and is time consuming. We have developed a multiscale modeling scheme to predict FLDs using a combination of limited laboratory testing, crystal plasticity (VPSC) modeling, and dual sequential-stage finite element (ABAQUS/Explicit) modeling with the Marciniak-Kuczynski (M-K) criterion to determine the limit strain. We have established a means to work around existing limitations in ABAQUS/Explicit by using an anisotropic yield locus (e.g., BBC2008) in combination with the M-K criterion. We further apply a VPSC model to reduce the number of laboratory tests required to characterize the anisotropic yield locus. In the present work, we show that the predicted FLD is in excellent agreement with the measured FLD for AA5182 in the O temper. Instead of 13 different tests as for a traditional FLD determination within Novelis, our technique uses just four measurements: tensile properties in three orientations; plane strain tension; biaxial bulge; and the sheet crystallographic texture. The turnaround time is consequently far less than for the traditional laboratory measurement of the FLD.

  4. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
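
    The core computation described here, fitting an autoregressive model to an SEMG window and summarizing it by the mean magnitude of its poles, can be sketched as follows. The signal below is a synthetic stable AR process, not recorded erector spinae data, and the fit is plain least squares rather than the study's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for one SEMG window: a stable second-order AR process.
n = 500
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + rng.normal()

order = 5  # the study fit a 5th-order AR model

# Least-squares AR fit: x[t] ~ a1*x[t-1] + ... + a5*x[t-5]
Y = x[order:]
X = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])
a, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Poles are the roots of the characteristic polynomial z^5 - a1*z^4 - ... - a5.
poles = np.roots(np.concatenate(([1.0], -a)))
mean_pole_magnitude = float(np.mean(np.abs(poles)))
```

    In the study, this scalar summary (tracked repetition by repetition) correlated with the maximum number of repetitions a subject could perform.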

  5. A Dutch test with the NewProd-model

    NARCIS (Netherlands)

    Bronnenberg, J.J.A.M.; van Engelen, M.L.

    1988-01-01

    The paper contains a report of a test of Cooper's NewProd model for predicting success and failure of product development projects. Based on Canadian data, the model has been shown to make predictions which are 84% correct. Having reservations on the reliability and validity of the model on

  6. Thermal hydraulic test for reactor safety system - Critical heat flux experiment and development of prediction models

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Soon Heung; Baek, Won Pil; Yang, Soo Hyung; No, Chang Hyun [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    2000-04-01

    Research was conducted to acquire CHF data through experiments and to develop prediction models. The final objectives of the research were as follows: 1) production of tube CHF data for low and middle pressure and mass flux, and flow boiling visualization; 2) modification and suggestion of tube CHF prediction models; 3) development of a fuel bundle CHF prediction methodology based on tube CHF prediction models. The major results of the research are as follows: 1) Production of CHF data for low and middle pressure and mass flux: - acquisition of CHF data (764) for low and middle pressure and flow conditions - analysis of CHF trends based on the CHF data - assessment of existing CHF prediction methods with the CHF data. 2) Modification and suggestion of tube CHF prediction models: - development of a unified CHF model applicable over a wide parametric range - development of a threshold length correlation - improvement of the CHF look-up table using the threshold length correlation. 3) Development of a fuel bundle CHF prediction methodology based on tube CHF prediction models: - development of a bundle CHF prediction methodology using correction factors. 11 refs., 134 figs., 25 tabs. (Author)

  7. Impact of Relationships between Test and Reference Animals and between Reference Animals on Reliability of Genomic Prediction

    DEFF Research Database (Denmark)

    Wu, Xiaoping; Lund, Mogens Sandø; Sun, Dongxiao

    This study investigated the reliability of genomic prediction in various scenarios with regard to the relationship between test and reference animals, and between animals within the reference population. Different reference populations were generated from EuroGenomics data and 1288 Nordic Holstein bulls...... as a common test population. A GBLUP model and a Bayesian mixture model were applied to predict genomic breeding values for bulls in the test data. Results showed that a closer relationship between test and reference animals led to a higher reliability, while a closer relationship between reference animal...... resulted in a lower reliability. Therefore, the design of the reference population is important for improving the reliability of genomic prediction. With regard to the model, the Bayesian mixture model in general led to a slightly higher reliability of genomic prediction than the GBLUP model...

  8. Plant water potential improves prediction of empirical stomatal models.

    Directory of Open Access Journals (Sweden)

    William R L Anderegg

    Full Text Available Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.

  9. In situ gaseous tracer diffusion experiments and predictive modeling at the Greater Confinement Disposal Test

    International Nuclear Information System (INIS)

    Olson, M.C.

    1985-07-01

    The Greater Confinement Disposal Test (GCDT) at the Nevada Test Site is a research project investigating the feasibility of augered shaft disposal of low-level radioactive waste considered unsuitable for shallow land burial. The GCDT contains environmentally mobile and high-specific-activity sources. Research is focused on providing a set of analytically derived hydrogeologic parameters and an empirical database for application in a multiphase, two-dimensional, transient, predictive performance model. Potential contaminant transport processes at the GCDT are identified and their level of significance is detailed. Nonisothermal gaseous diffusion through alluvial sediments is considered the primary waste migration process. Volatile organic tracers are released in the subsurface and their migration is monitored in situ to determine media effective diffusion coefficients, tortuosity, and sorption-corrected porosity terms. The theoretical basis for volatile tracer experiments is presented. Treatment of thermal and liquid flow components is discussed, as is the basis for eliminating several negligible transport processes. Interpretive techniques include correlation, power spectra, and least squares analysis, a graphical analytical solution, and inverse numerical modeling. Model design and application to the GCDT are discussed. GCDT structural, analytical, and computer facilities are detailed. The status of the current research program is reviewed, and temperature and soil moisture profiles are presented along with results of operational tests on the analytical system. 72 refs., 39 figs., 2 tabs

  10. Prediction to natural circulation in semiscale SBLOCA test, S-NC-8B

    International Nuclear Information System (INIS)

    Bang, Young Seok; Seul, Kwang Won; Lee, Sukho; Kim, Hho Jung

    1995-01-01

    Natural circulation and the associated thermal-hydraulic behavior are predicted by the RELAP5/MOD3.1 code against test S-NC-8B, which simulated a 0.1% equivalent SBLOCA in a PWR. The Semiscale Mod-2A facility and the test-specific initial/boundary conditions are modeled. The calculation result is compared with the experimental data in terms of natural circulation characteristics, and the code's predictive capability for natural circulation is evaluated. As a result, the flow rate during the single- and two-phase natural circulation modes is well predicted, and is slightly overpredicted with oscillation in the transition and reflux regimes. Additional sensitivity calculations are performed with different discharge coefficients and break modeling to investigate the break flow effect

  11. Hybrid Prediction Model of the Temperature Field of a Motorized Spindle

    Directory of Open Access Journals (Sweden)

    Lixiu Zhang

    2017-10-01

    Full Text Available The thermal characteristics of a motorized spindle are the main determinants of its performance, and influence the machining accuracy of computer numerical control machine tools. It is important to accurately predict the thermal field of a motorized spindle during its operation to improve its thermal characteristics. This paper proposes a model to predict the temperature field of a high-speed and high-precision motorized spindle under different working conditions using a finite element model and test data. The finite element model considers the influence of the parameters of the cooling system and the lubrication system, and that of environmental conditions, on the coefficient of heat transfer, based on test data for the surface temperature of the motorized spindle. A genetic algorithm is used to optimize the coefficient of heat transfer of the spindle, and its temperature field is predicted using a three-dimensional model that employs this optimal coefficient. A prediction model of the temperature field of the 170MD30 motorized spindle is created, and simulation data for the temperature field are compared with the test data. The results show that when the speed of the spindle is 10,000 rpm, the relative mean prediction error is 1.5%, and when its speed is 15,000 rpm, the prediction error is 3.6%. Therefore, the proposed prediction model can predict the temperature field of the motorized spindle with high accuracy.
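
    The genetic-algorithm step, tuning a heat-transfer coefficient so that a thermal model reproduces measured surface temperatures, can be sketched with a toy lumped model. The steady-state relation T = T_coolant + Q/(hA), the parameter values, and the GA settings below are all illustrative assumptions standing in for the paper's finite element model and spindle test data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical lumped thermal model: steady surface temperature vs. the
# convective heat-transfer coefficient h (W/m^2.K).
Q, A, T_coolant = 800.0, 0.05, 25.0   # heat load, area, coolant temperature

def surface_temp(h):
    return T_coolant + Q / (h * A)    # T = T_c + Q/(hA)

h_true = 320.0
measured = surface_temp(h_true)       # stand-in for a spindle test measurement

def fitness(h):
    return -(surface_temp(h) - measured) ** 2   # negative squared error

# Minimal genetic algorithm: truncation selection, blend crossover,
# Gaussian mutation.
pop = rng.uniform(50.0, 1000.0, size=40)
for _ in range(60):
    scores = np.array([fitness(h) for h in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the 10 best
    children = []
    for _ in range(30):
        pa, pb = rng.choice(parents, 2)
        child = 0.5 * (pa + pb) + rng.normal(scale=5.0)  # crossover + mutation
        children.append(np.clip(child, 10.0, 2000.0))
    pop = np.concatenate([parents, children])

h_best = pop[np.argmax([fitness(h) for h in pop])]
```

    The recovered coefficient then feeds the forward temperature-field prediction, as in the paper's three-dimensional model.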

  12. Blind Test of Physics-Based Prediction of Protein Structures

    Science.gov (United States)

    Shell, M. Scott; Ozkan, S. Banu; Voelz, Vincent; Wu, Guohong Albert; Dill, Ken A.

    2009-01-01

    We report here a multiprotein blind test of a computer method to predict native protein structures based solely on an all-atom physics-based force field. We use the AMBER 96 potential function with an implicit (GB/SA) model of solvation, combined with replica-exchange molecular-dynamics simulations. Coarse conformational sampling is performed using the zipping and assembly method (ZAM), an approach that is designed to mimic the putative physical routes of protein folding. ZAM was applied to the folding of six proteins, from 76 to 112 monomers in length, in CASP7, a community-wide blind test of protein structure prediction. Because these predictions have about the same level of accuracy as typical bioinformatics methods, and do not utilize information from databases of known native structures, this work opens up the possibility of predicting the structures of membrane proteins, synthetic peptides, or other foldable polymers, for which there is little prior knowledge of native structures. This approach may also be useful for predicting physical protein folding routes, non-native conformations, and other physical properties from amino acid sequences. PMID:19186130

  13. Prediction model for oxide thickness on aluminum alloy cladding during irradiation

    International Nuclear Information System (INIS)

    Kim, Yeon Soo; Hofman, G.L.; Hanan, N.A.; Snelgrove, J.L.

    2003-01-01

    An empirical model predicting the oxide film thickness on aluminum alloy cladding during irradiation has been developed as a function of irradiation time, temperature, heat flux, pH, and coolant flow rate. The existing models in the literature are neither consistent among themselves nor do they fit the measured data very well. They also lack versatility for various reactor situations, such as a pH other than 5, high coolant flow rates, and fuel life longer than ∼1200 hrs. In particular, they were not intended for use in irradiation situations. The newly developed model is applicable to these in-reactor situations as well as to ex-reactor tests, and has a more accurate prediction capability. The new model demonstrated predictions consistent with the measured data of the UMUS and SIMONE fuel tests performed in the HFR, Petten, with test results from the ORR and IRIS tests from OSIRIS, and with the data from out-of-pile tests available in the literature. (author)

  14. Prediction of software operational reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1995-01-01

    For many years, research has focused on the quantification of software reliability, and many models have been developed to quantify it. Most software reliability models estimate reliability from the failure data collected during testing, assuming that the test environment well represents the operational profile. Experience shows, however, that operational reliability is higher than test reliability, and the user's interest is in the operational reliability rather than the test reliability. With the assumption that the difference in reliability results from the change of environment, testing environment factors, comprising an aging factor and a coverage factor, are defined in this study to predict the ultimate operational reliability from the failure data. This is done by incorporating test environments applied beyond the operational profile into the testing environment factors. The application results are close to the actual data

  15. Considerations of the Use of 3-D Geophysical Models to Predict Test Ban Monitoring Observables

    Science.gov (United States)

    2007-09-01

    predict first P arrival times. Since this is a 3-D model, the travel times are predicted with a 3-D finite-difference code solving the eikonal equations...for the eikonal wave equation should provide more accurate predictions of travel-time from 3D models. These techniques and others are being

  16. [Application of ARIMA model on prediction of malaria incidence].

    Science.gov (United States)

    Jing, Xia; Hua-Xun, Zhang; Wen, Lin; Su-Jian, Pei; Ling-Cong, Sun; Xiao-Rong, Dong; Mu-Min, Cao; Dong-Ni, Wu; Shunxiang, Cai

    2016-01-29

    The aim was to predict the incidence of local malaria in Hubei Province by applying an autoregressive integrated moving average (ARIMA) model. SPSS 13.0 software was used to construct the ARIMA model from the monthly local malaria incidence in Hubei Province from 2004 to 2009. The local malaria incidence data of 2010 were used for model validation and evaluation. The ARIMA (1, 1, 1)(1, 1, 0)12 model was identified as the best fit, with an AIC of 76.085 and an SBC of 84.395. All the actual incidence data fell within the 95% CI of the model's predicted values. The prediction performance of the model was acceptable. The ARIMA model could effectively fit and predict the incidence of local malaria in Hubei Province.
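
    The preprocessing implied by a seasonal ARIMA(1,1,1)(1,1,0)12 specification, one regular difference and one seasonal difference at period 12, followed by fitting the autoregressive part, can be sketched on synthetic monthly data. The series below is simulated (the Hubei incidence data are not reproduced in the abstract), and the MA term is omitted for brevity, so this is a simplified illustration rather than the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic monthly incidence with a trend and annual seasonality.
months = np.arange(72)  # six years of monthly data, as in 2004-2009
series = (5 + 0.05 * months + 2 * np.sin(2 * np.pi * months / 12)
          + rng.normal(scale=0.3, size=72))

# d=1 and D=1 with s=12: differencing removes trend and seasonality.
d1 = np.diff(series)        # regular difference
d12 = d1[12:] - d1[:-12]    # seasonal difference, period 12

# Fit the non-seasonal AR(1) part by least squares (MA term omitted).
phi = np.linalg.lstsq(d12[:-1, None], d12[1:], rcond=None)[0][0]

# One-step-ahead forecast on the differenced scale, then undo both differences.
next_diff = phi * d12[-1] + d1[-12]   # invert the seasonal difference
forecast = series[-1] + next_diff     # invert the regular difference
```

    The fitted AR coefficient must lie inside the unit interval in magnitude for a stationary differenced series, and the forecast is recovered by reversing the two differencing steps.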

  17. Direct-to-consumer advertising of predictive genetic tests: a health belief model based examination of consumer response.

    Science.gov (United States)

    Rollins, Brent L; Ramakrishnan, Shravanan; Perri, Matthew

    2014-01-01

    Direct-to-consumer (DTC) advertising of predictive genetic tests (PGTs) has added a new dimension to health advertising. This study used an online survey based on the health belief model framework to examine and more fully understand consumers' responses and behavioral intentions in response to a PGT DTC advertisement. Overall, consumers reported moderate intentions to talk with their doctor and seek more information about PGTs after advertisement exposure, though consumers did not seem ready to take the advertised test or engage in active information search. Those who perceived greater threat from the disease, however, had significantly greater behavioral intentions and information search behavior.

  18. Testing the Standard Model

    CERN Document Server

    Riles, K

    1998-01-01

    The Large Electron-Positron collider (LEP) near Geneva, more than any other instrument, has rigorously tested the predictions of the Standard Model of elementary particles. LEP measurements have probed the theory from many different directions and, so far, the Standard Model has prevailed. The rigour of these tests has allowed LEP physicists to determine unequivocally the number of fundamental 'generations' of elementary particles. These tests also allowed physicists to ascertain the mass of the top quark in advance of its discovery. Recent increases in the accelerator's energy allow new measurements to be undertaken, measurements that may uncover directly or indirectly the long-sought Higgs particle, believed to impart mass to all other particles.

  19. Prediction of Glucose Tolerance without an Oral Glucose Tolerance Test

    Directory of Open Access Journals (Sweden)

    Rohit Babbar

    2018-03-01

    Full Text Available Introduction: Impaired glucose tolerance (IGT) is diagnosed by a standardized oral glucose tolerance test (OGTT). However, the OGTT is laborious, and when it is not performed, glucose tolerance cannot be determined from fasting samples retrospectively. We tested whether glucose tolerance status is reasonably predictable from a combination of demographic, anthropometric, and laboratory data assessed at one time point in a fasting state. Methods: Given a set of 22 variables selected for clinical feasibility, such as sex, age, height, weight, waist circumference, blood pressure, fasting glucose, HbA1c, hemoglobin, mean corpuscular volume, serum potassium, fasting levels of insulin, C-peptide, triglyceride, non-esterified fatty acids (NEFA), proinsulin, prolactin, cholesterol, low-density lipoprotein, HDL, uric acid, liver transaminases, and ferritin, we used supervised machine learning to estimate glucose tolerance status in 2,337 participants of the TUEF study who were recruited before 2012. We tested the performance of 10 different machine learning classifiers on data from 929 participants in the test set who were recruited after 2012. In addition, reproducibility of IGT was analyzed in 78 participants who had 2 repeated OGTTs within 1 year. Results: The most accurate prediction of IGT was reached with the recursive partitioning method (accuracy = 0.78). For all classifiers, mean accuracy was 0.73 ± 0.04. The most important model variable was fasting glucose in all models. Using mean variable importance across all models, fasting glucose was followed by NEFA, triglycerides, HbA1c, and C-peptide. The accuracy of predicting IGT from a previous OGTT was 0.77. Conclusion: Machine learning methods yield moderate accuracy in predicting glucose tolerance from a wide set of clinical and laboratory variables. A substitution of the OGTT does not currently seem to be feasible. An important constraint could be the limited reproducibility of glucose tolerance status during a
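
    Since the best-performing method was recursive partitioning and fasting glucose dominated variable importance, the flavor of the approach can be conveyed by a depth-1 tree (a single threshold split) on fasting glucose. All numbers below are synthetic assumptions, not the TUEF data, and a real recursive-partitioning model would split on many variables.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in cohort: fasting glucose in mg/dL with noisy IGT labels.
n = 1000
glucose = rng.normal(95, 12, size=n)
igt = (glucose + rng.normal(scale=10, size=n) > 105).astype(int)

train, test = slice(0, 700), slice(700, None)

# Depth-1 "recursive partitioning": pick the threshold maximizing training accuracy.
thresholds = np.linspace(70, 130, 241)
accs = [np.mean((glucose[train] > t) == igt[train]) for t in thresholds]
best_t = thresholds[int(np.argmax(accs))]

test_accuracy = np.mean((glucose[test] > best_t) == igt[test])
```

    Held-out accuracy of such a stump sits in the same moderate range the study reports for its full classifier set, which illustrates why fasting glucose alone carries most of the predictive signal.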

  20. TH-A-9A-01: Active Optical Flow Model: Predicting Voxel-Level Dose Prediction in Spine SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Liu, J; Wu, Q.J.; Yin, F; Kirkpatrick, J; Cabrera, A [Duke University Medical Center, Durham, NC (United States); Ge, Y [University of North Carolina at Charlotte, Charlotte, NC (United States)

    2014-06-15

    Purpose: To predict voxel-level dose distribution and enable effective evaluation of cord dose sparing in spine SBRT. Methods: We present an active optical flow model (AOFM) to statistically describe cord dose variations and train a predictive model to represent correlations between AOFM and PTV contours. Thirty clinically accepted spine SBRT plans are evenly divided into training and testing datasets. The development of predictive model consists of 1) collecting a sequence of dose maps including PTV and OAR (spinal cord) as well as a set of associated PTV contours adjacent to OAR from the training dataset, 2) classifying data into five groups based on PTV's locations relative to OAR, two “Top”s, “Left”, “Right”, and “Bottom”, 3) randomly selecting a dose map as the reference in each group and applying rigid registration and optical flow deformation to match all other maps to the reference, 4) building AOFM by importing optical flow vectors and dose values into the principal component analysis (PCA), 5) applying another PCA to features of PTV and OAR contours to generate an active shape model (ASM), and 6) computing a linear regression model of correlations between AOFM and ASM.When predicting dose distribution of a new case in the testing dataset, the PTV is first assigned to a group based on its contour characteristics. Contour features are then transformed into ASM's principal coordinates of the selected group. Finally, voxel-level dose distribution is determined by mapping from the ASM space to the AOFM space using the predictive model. Results: The DVHs predicted by the AOFM-based model and those in clinical plans are comparable in training and testing datasets. At 2% volume the dose difference between predicted and clinical plans is 4.2±4.4% and 3.3±3.5% in the training and testing datasets, respectively. Conclusion: The AOFM is effective in predicting voxel-level dose distribution for spine SBRT. Partially supported by NIH

  1. TH-A-9A-01: Active Optical Flow Model: Predicting Voxel-Level Dose Prediction in Spine SBRT

    International Nuclear Information System (INIS)

    Liu, J; Wu, Q.J.; Yin, F; Kirkpatrick, J; Cabrera, A; Ge, Y

    2014-01-01

    Purpose: To predict voxel-level dose distribution and enable effective evaluation of cord dose sparing in spine SBRT. Methods: We present an active optical flow model (AOFM) to statistically describe cord dose variations and train a predictive model to represent correlations between AOFM and PTV contours. Thirty clinically accepted spine SBRT plans are evenly divided into training and testing datasets. The development of predictive model consists of 1) collecting a sequence of dose maps including PTV and OAR (spinal cord) as well as a set of associated PTV contours adjacent to OAR from the training dataset, 2) classifying data into five groups based on PTV's locations relative to OAR, two “Top”s, “Left”, “Right”, and “Bottom”, 3) randomly selecting a dose map as the reference in each group and applying rigid registration and optical flow deformation to match all other maps to the reference, 4) building AOFM by importing optical flow vectors and dose values into the principal component analysis (PCA), 5) applying another PCA to features of PTV and OAR contours to generate an active shape model (ASM), and 6) computing a linear regression model of correlations between AOFM and ASM.When predicting dose distribution of a new case in the testing dataset, the PTV is first assigned to a group based on its contour characteristics. Contour features are then transformed into ASM's principal coordinates of the selected group. Finally, voxel-level dose distribution is determined by mapping from the ASM space to the AOFM space using the predictive model. Results: The DVHs predicted by the AOFM-based model and those in clinical plans are comparable in training and testing datasets. At 2% volume the dose difference between predicted and clinical plans is 4.2±4.4% and 3.3±3.5% in the training and testing datasets, respectively. Conclusion: The AOFM is effective in predicting voxel-level dose distribution for spine SBRT. Partially supported by NIH

  2. Reliable Prediction of Insulin Resistance by a School-Based Fitness Test in Middle-School Children

    Directory of Open Access Journals (Sweden)

    Allen, David B.

    2009-09-01

    Full Text Available Objectives. (1) Determine the predictive value of a school-based test of cardiovascular fitness (CVF) for insulin resistance (IR); (2) compare a "school-based" prediction of IR to a "laboratory-based" prediction, using various measures of fitness and body composition. Methods. Middle school children performed the Progressive Aerobic Cardiovascular Endurance Run (PACER), a school-based CVF test, and underwent evaluation of maximal oxygen consumption by treadmill testing (VO2max), body composition (percent body fat and BMI z score), and IR (derived homeostasis model assessment index). Results. PACER showed a strong correlation with VO2max/kg (r = 0.83) and with the homeostasis model assessment index. Multivariate regression analysis revealed that a school-based model (using PACER and BMI z score) predicted IR similarly to a laboratory-based model (using VO2max/kg of lean body mass and percent body fat). Conclusions. The PACER is a valid school-based test of CVF, is predictive of IR, and has a similar relationship to IR when compared to complex laboratory-based testing. Simple school-based measures of childhood fitness (PACER) and fatness (BMI z score) could be used to identify childhood risk for IR and to evaluate interventions.

  3. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity of five conditional heteroskedasticity models, considering eight different statistical probability distributions, using the Model Confidence Set procedure. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have great homogeneity in making predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
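    One member of the ARCH family typically included in such comparisons is GARCH(1,1), whose conditional-variance recursion and multi-step forecast are short enough to sketch. This is a generic illustration, not a reproduction of the paper's five models or eight distributions:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1],
    initialized at the unconditional variance omega / (1 - alpha - beta)."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def garch11_forecast(sigma2_T, r_T, omega, alpha, beta, horizon):
    """h-step-ahead variance forecasts; they decay geometrically toward
    the unconditional variance at rate (alpha + beta)."""
    out = []
    s = omega + alpha * r_T ** 2 + beta * sigma2_T
    for _ in range(horizon):
        out.append(s)
        s = omega + (alpha + beta) * s
    return out
```

    Parameter estimation (maximum likelihood under each candidate distribution) is what the compared models differ on and is omitted here.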

  4. Improving Predictive Modeling in Pediatric Drug Development: Pharmacokinetics, Pharmacodynamics, and Mechanistic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Slikker, William; Young, John F.; Corley, Rick A.; Dorman, David C.; Conolly, Rory B.; Knudsen, Thomas; Erstad, Brian L.; Luecke, Richard H.; Faustman, Elaine M.; Timchalk, Chuck; Mattison, Donald R.

    2005-07-26

    A workshop was conducted on November 18–19, 2004, to address the issue of improving predictive models for drug delivery to developing humans. Although considerable progress has been made for adult humans, large gaps remain for predicting pharmacokinetic/pharmacodynamic (PK/PD) outcome in children because most adult models have not been tested during development. The goals of the meeting included a description of when, during development, infants/children become adultlike in handling drugs. The issue of incorporating the most recent advances into the predictive models was also addressed: both the use of imaging approaches and genomic information were considered. Disease state, as exemplified by obesity, was addressed as a modifier of drug pharmacokinetics and pharmacodynamics during development. Issues addressed in this workshop should be considered in the development of new predictive and mechanistic models of drug kinetics and dynamics in the developing human.

  5. Analysis and prediction of rainfall trends over Bangladesh using Mann-Kendall, Spearman's rho tests and ARIMA model

    Science.gov (United States)

    Rahman, Mohammad Atiqur; Yunsheng, Lou; Sultana, Nahid

    2017-08-01

    In this study, 60-year monthly rainfall data of Bangladesh were analysed to detect trends. The modified Mann-Kendall test, Spearman's rho test and Sen's slope estimator were applied to find the long-term annual, dry-season and monthly trends. Sequential Mann-Kendall analysis was applied to detect potential trend turning points. Spatial variations of the trends were examined using inverse distance weighting (IDW) interpolation. An autoregressive integrated moving average (ARIMA) model was used for the country mean rainfall and for the data of the two stations that showed the highest and the lowest trends in the Mann-Kendall and Spearman's rho tests. Results showed that there is no significant trend in the annual rainfall pattern except increasing trends for the Cox's Bazar, Khulna and Satkhira areas and a decreasing trend for the Srimangal area. For the dry season, only the Bogra area showed a significant decreasing trend. Long-term monthly trends demonstrated a mixed pattern; both negative and positive changes were found from February to September. In the month-wise trend analysis, the Comilla area showed a significant decreasing trend for 3 consecutive months, while the Rangpur and Khulna stations confirmed significant rising trends for three different months. Rangpur station data gave the maximum increasing trend, in April, whereas the maximum decreasing trend was found in August for Comilla station. The ARIMA models predict +3.26, +8.6 and -2.30 mm rainfall per year for the country, Cox's Bazar and Srimangal areas, respectively. Overall, the test results and predictions were in good agreement with one another.
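    The Mann-Kendall test at the heart of this analysis is easy to state in code. A minimal sketch using the standard normal approximation; a production implementation (and the modified test used in the paper) would add tie and autocorrelation corrections:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test. Returns (S, Z): S > 0 indicates a rising
    trend, S < 0 a falling one; |Z| > 1.96 is significant at the 5% level
    under the normal approximation (no tie correction in this sketch)."""
    n = len(x)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += (x[j] > x[i]) - (x[j] < x[i])   # sign of pairwise difference
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance of S, no ties
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)            # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

    Applied to a monthly rainfall series, a significant positive Z would correspond to the rising trends reported for stations such as Rangpur.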

  6. Correlation of full-scale drag predictions with flight measurements on the C-141A aircraft. Phase 2: Wind tunnel test, analysis, and prediction techniques. Volume 1: Drag predictions, wind tunnel data analysis and correlation

    Science.gov (United States)

    Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.

    1974-01-01

    The degree of cruise drag correlation on the C-141A aircraft is determined between predictions based on wind tunnel test data and flight test results. An analysis of wind tunnel tests on a 0.0275-scale model at Reynolds numbers up to 3.05 x 10^6 per mean aerodynamic chord (MAC) is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that the predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind tunnel extrapolation method based on flat-plate skin friction and component shape factors. An alternative method of extrapolation, based on computed profile drag from a subsonic viscous theory, results in a prediction four percent lower than flight test data.
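    The extrapolation method described — flat-plate skin friction scaled by component shape factors and wetted area — can be illustrated with the Prandtl-Schlichting turbulent skin-friction correlation. The form factor and wetted-area ratio below are placeholders, not C-141A values:

```python
import math

def flat_plate_cf(re):
    """Turbulent flat-plate skin-friction coefficient
    (Prandtl-Schlichting correlation, Cf = 0.455 / (log10 Re)^2.58)."""
    return 0.455 / (math.log10(re) ** 2.58)

def component_profile_drag(re, form_factor, swet_over_sref):
    """Profile-drag build-up for one component: skin friction times a
    component shape (form) factor times the wetted-area ratio, the kind
    of extrapolation method described in the record above."""
    return flat_plate_cf(re) * form_factor * swet_over_sref
```

    Because Cf falls with Reynolds number, wind-tunnel drag measured at 3 x 10^6 must be extrapolated downward to flight Reynolds numbers, which is where the correlation with flight data is tested.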

  7. Artificial Neural Network Model for Predicting Compressive Strength of Concrete

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the output results have absolute errors of less than 10%. A parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.

  8. Evaluation of burst pressure prediction models for line pipes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xian-Kui, E-mail: zhux@battelle.org [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States); Leis, Brian N. [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States)

    2012-01-15

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487-492). It is found that these models can be categorized into either a Tresca family or a von Mises family of solutions, except for the Margetson and Zhu-Leis models. The viability of the predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions, including the Barlow, maximum shear stress, Turner, and ASME boiler code formulas, provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. - Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.
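    The three plasticity-theory solutions share a common closed form, P = (4t/D) k^(n+1) σ_uts, differing only in the constant k implied by the yield criterion. A sketch of that family as commonly quoted in the pipeline-burst literature, together with the Barlow strength formula; treat the constants as assumptions, not a transcription of this paper:

```python
def burst_pressure(sigma_uts, t, D, n, criterion="zl"):
    """Burst pressure of a defect-free, end-capped, thin-walled pipe:
    P = (4t/D) * k**(n+1) * sigma_uts, with n the strain-hardening
    exponent and k set by the yield criterion (Tresca k = 1/2,
    von Mises k = 1/sqrt(3), Zhu-Leis k = the average of the two)."""
    k = {"tresca": 0.5,
         "mises": 1.0 / 3 ** 0.5,
         "zl": (0.5 + 1.0 / 3 ** 0.5) / 2.0}[criterion]
    return (4.0 * t / D) * k ** (n + 1.0) * sigma_uts

def barlow_pressure(sigma, t, D):
    """Barlow formula P = 2*sigma*t/D, the simplest strength solution."""
    return 2.0 * sigma * t / D
```

    Because 1/2 < (1/2 + 1/sqrt 3)/2 < 1/sqrt 3, the Tresca solution is always the lowest and von Mises the highest of the three, with Zhu-Leis in between — consistent with the paper's description of the ZL criterion as intermediate.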

  9. Evaluation of burst pressure prediction models for line pipes

    International Nuclear Information System (INIS)

    Zhu, Xian-Kui; Leis, Brian N.

    2012-01-01

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487–492). It is found that these models can be categorized into either a Tresca family or a von Mises family of solutions, except for the Margetson and Zhu-Leis models. The viability of the predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions, including the Barlow, maximum shear stress, Turner, and ASME boiler code formulas, provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. - Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.

  10. Trait-based representation of biological nitrification: Model development, testing, and predicted community composition

    Directory of Open Access Journals (Sweden)

    Nick eBouskill

    2012-10-01

    Full Text Available Trait-based microbial models show clear promise as tools to represent the diversity and activity of microorganisms across ecosystem gradients. These models parameterize specific traits that determine the relative fitness of an ‘organism’ in a given environment, and represent the complexity of biological systems across temporal and spatial scales. In this study we introduce a microbial community trait-based modeling framework (MicroTrait) focused on nitrification (MicroTrait-N) that represents ammonia-oxidizing bacteria (AOB), ammonia-oxidizing archaea (AOA) and nitrite-oxidizing bacteria (NOB) using traits related to enzyme kinetics and physiological properties. We used this model to predict nitrifier diversity, ammonia (NH3) oxidation rates and nitrous oxide (N2O) production across pH, temperature and substrate gradients. Predicted nitrifier diversity was predominantly determined by temperature and substrate availability; the latter was strongly influenced by pH. The model predicted that transient N2O production rates are maximized by a decoupling of the AOB and NOB communities, resulting in an accumulation of nitrite and its detoxification to N2O by AOB. However, cumulative N2O production (over six-month simulations) is maximized in a system where the relationship between AOB and NOB is maintained. When the reactions uncouple, the AOB become unstable and biomass declines rapidly, resulting in decreased NH3 oxidation and N2O production. We evaluated this model against site-level chemical datasets from the interior of Alaska and accurately simulated NH3 oxidation rates and the relative ratio of AOA:AOB biomass. The predicted community structure and activity indicate that (a) parameterization of a small number of traits may be sufficient to broadly characterize nitrifying community structure and (b) changing decadal trends in climate and edaphic conditions could impact nitrification rates in ways that are not captured by extant biogeochemical models.

  11. Evaluation of accelerated test parameters for CMOS IC total dose hardness prediction

    International Nuclear Information System (INIS)

    Sogoyan, A.V.; Nikiforov, A.Y.; Chumakov, A.I.

    1999-01-01

    An approach to evaluating accelerated test parameters is presented in order to predict CMOS IC total dose behavior in variable dose-rate environments. The technique is based on an analytical model of the total dose degradation of MOSFET parameters. A simple way to estimate the model parameters is proposed, using radiation test results for the IC's input-output MOSFETs. (authors)

  12. Decadal predictions of Southern Ocean sea ice : testing different initialization methods with an Earth-system Model of Intermediate Complexity

    Science.gov (United States)

    Zunz, Violette; Goosse, Hugues; Dubinkina, Svetlana

    2013-04-01

    The sea ice extent in the Southern Ocean has increased since 1979, but the causes of this expansion have not been firmly identified. In particular, the contributions of internal variability and external forcing to this positive trend have not been fully established. In this region, the lack of observations and the overestimation of the internal variability of the sea ice by contemporary General Circulation Models (GCMs) make it difficult to understand the behaviour of the sea ice. Nevertheless, if its evolution is governed by the internal variability of the system, and if this internal variability is in some way predictable, a suitable initialization method should lead to simulation results that better fit reality. Current GCM decadal predictions are generally initialized through nudging towards some observed fields. This relatively simple method does not seem to be appropriate for the initialization of sea ice in the Southern Ocean. The present study aims at identifying an initialization method that could improve the quality of predictions of Southern Ocean sea ice at decadal timescales. We use LOVECLIM, an Earth-system Model of Intermediate Complexity that allows us to perform, within a reasonable computational time, the large number of simulations required to test different initialization procedures systematically. These involve three data assimilation methods: a nudging, a particle filter and an efficient particle filter. In a first step, simulations are performed in an idealized framework, i.e. data from a reference simulation of LOVECLIM are used instead of observations, hereinafter called pseudo-observations. In this configuration, the internal variability of the model obviously agrees with that of the pseudo-observations. This allows us to get rid of the issues related to the overestimation of the internal variability by models compared to the observed one. This way, we can work out a suitable methodology to assess the efficiency of the

  13. Multivariate Models for Prediction of Human Skin Sensitization Hazard

    Science.gov (United States)

    Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole

    2016-01-01

    One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324
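    As an illustration of the logistic-regression half of the machine-learning comparison, here is a plain logistic classifier trained by gradient descent on toy assay-like features. This is a generic sketch, not the study's data, features, or tuning; the function names are mine:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Logistic regression fit by batch gradient descent. In the study's
    setting the columns of X could be assay readouts (e.g. DPRA, h-CLAT)
    and y the binary sensitizer label."""
    X = np.c_[np.ones(len(X)), np.asarray(X, dtype=float)]  # bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)        # gradient of log-loss
    return w

def predict_hazard(X, w, threshold=0.5):
    """Binary hazard call: probability >= threshold -> 1 (sensitizer)."""
    X = np.c_[np.ones(len(X)), np.asarray(X, dtype=float)]
    return (1.0 / (1.0 + np.exp(-X @ w)) >= threshold).astype(int)
```

    The SVM models in the study would replace the log-loss above with a hinge loss; accuracy on a held-out external set, as reported in the record, is the comparison metric.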

  14. A GIS model predicting potential distributions of a lineage: a test case on hermit spiders (Nephilidae: Nephilengys).

    Science.gov (United States)

    Năpăruş, Magdalena; Kuntner, Matjaž

    2012-01-01

    Although numerous studies model species distributions, these models are almost exclusively on single species, while studies of evolutionary lineages are preferred as they by definition study closely related species with shared history and ecology. Hermit spiders, genus Nephilengys, represent an ecologically important but relatively species-poor lineage with a globally allopatric distribution. Here, we model Nephilengys global habitat suitability based on known localities and four ecological parameters. We geo-referenced 751 localities for the four most studied Nephilengys species: N. cruentata (Africa, New World), N. livida (Madagascar), N. malabarensis (S-SE Asia), and N. papuana (Australasia). For each locality we overlaid four ecological parameters: elevation, annual mean temperature, annual mean precipitation, and land cover. We used linear backward regression within ArcGIS to select two best fit parameters per species model, and ModelBuilder to map areas of high, moderate and low habitat suitability for each species within its directional distribution. For Nephilengys cruentata suitable habitats are mid elevation tropics within Africa (natural range), a large part of Brazil and the Guianas (area of synanthropic spread), and even North Africa, Mediterranean, and Arabia. Nephilengys livida is confined to its known range with suitable habitats being mid-elevation natural and cultivated lands. Nephilengys malabarensis, however, ranges across the Equator throughout Asia where the model predicts many areas of high ecological suitability in the wet tropics. Its directional distribution suggests the species may potentially spread eastwards to New Guinea where the suitable areas of N. malabarensis largely surpass those of the native N. papuana, a species that prefers dry forests of Australian (sub)tropics. Our model is a customizable GIS tool intended to predict current and future potential distributions of globally distributed terrestrial lineages. Its predictive

  15. A GIS model predicting potential distributions of a lineage: a test case on hermit spiders (Nephilidae: Nephilengys).

    Directory of Open Access Journals (Sweden)

    Magdalena Năpăruş

    Full Text Available BACKGROUND: Although numerous studies model species distributions, these models are almost exclusively on single species, while studies of evolutionary lineages are preferred as they by definition study closely related species with shared history and ecology. Hermit spiders, genus Nephilengys, represent an ecologically important but relatively species-poor lineage with a globally allopatric distribution. Here, we model Nephilengys global habitat suitability based on known localities and four ecological parameters. METHODOLOGY/PRINCIPAL FINDINGS: We geo-referenced 751 localities for the four most studied Nephilengys species: N. cruentata (Africa, New World), N. livida (Madagascar), N. malabarensis (S-SE Asia), and N. papuana (Australasia). For each locality we overlaid four ecological parameters: elevation, annual mean temperature, annual mean precipitation, and land cover. We used linear backward regression within ArcGIS to select two best fit parameters per species model, and ModelBuilder to map areas of high, moderate and low habitat suitability for each species within its directional distribution. For Nephilengys cruentata suitable habitats are mid elevation tropics within Africa (natural range), a large part of Brazil and the Guianas (area of synanthropic spread), and even North Africa, Mediterranean, and Arabia. Nephilengys livida is confined to its known range with suitable habitats being mid-elevation natural and cultivated lands. Nephilengys malabarensis, however, ranges across the Equator throughout Asia where the model predicts many areas of high ecological suitability in the wet tropics. Its directional distribution suggests the species may potentially spread eastwards to New Guinea where the suitable areas of N. malabarensis largely surpass those of the native N. papuana, a species that prefers dry forests of the Australian (sub)tropics. CONCLUSIONS: Our model is a customizable GIS tool intended to predict current and future potential

  16. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
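    Pareto optimality over a model collection reduces to finding the non-dominated set. A minimal sketch with two "higher is better" objectives; the choice of objectives (e.g. predictive accuracy and applicability-domain coverage for the query compound) is my assumption, not the paper's exact formulation:

```python
def dominates(a, b):
    """a dominates b if a is at least as good on every objective and
    strictly better on at least one (higher is better here)."""
    return (all(x >= y for x, y in zip(a, b)) and
            any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the subset of points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

    A model-identification scheme can then restrict its choice for a new chemical compound to the models on this front, rather than scoring the whole collection on a single weighted criterion.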

  17. A two-parameter model to predict fracture in the transition

    International Nuclear Information System (INIS)

    DeAquino, C.T.; Landes, J.D.; McCabe, D.E.

    1995-01-01

    A model is proposed that uses a numerical characterization of the crack-tip stress field, modified by the J-Q constraint theory, and a weak-link assumption to predict fracture behavior in the transition region for reactor vessel steels. This model predicts the toughness scatter band for a component from a toughness scatter band measured on a test specimen geometry. The model has previously been applied to two-dimensional through cracks. Many applications to actual component structures involve three-dimensional surface flaws. These cases require a more difficult level of analysis and need additional information. In this paper, both the current model for two-dimensional cracks and an approach needed to extend the model to the prediction of transition fracture behavior for three-dimensional surface flaws are discussed. Examples are presented to show how the model can be applied and, in some cases, to compare with other test results. (author). 13 refs., 7 figs

  18. Test-Anchored Vibration Response Predictions for an Acoustically Energized Curved Orthogrid Panel with Mounted Components

    Science.gov (United States)

    Frady, Gregory P.; Duvall, Lowery D.; Fulcher, Clay W. G.; Laverde, Bruce T.; Hunt, Ronald A.

    2011-01-01

    A rich body of vibroacoustic test data was recently generated at Marshall Space Flight Center for component-loaded curved orthogrid panels typical of launch vehicle skin structures. The test data were used to anchor computational predictions of a variety of spatially distributed responses including acceleration, strain and component interface force. Transfer functions relating the responses to the input pressure field were generated from finite-element-based modal solutions and test-derived damping estimates. A diffuse acoustic field model was applied to correlate the measured input sound pressures across the energized panel. This application quantifies the ability to quickly and accurately predict a variety of responses to acoustically energized skin panels with mounted components. Favorable comparisons between the measured and predicted responses were established. The validated models were used to examine vibration response sensitivities to relevant modeling parameters such as pressure patch density, mesh density, weight of the mounted component and model form. Convergence metrics include spectral densities and cumulative root-mean-square (RMS) functions for acceleration, velocity, displacement, strain and interface force. Minimum frequencies for response convergence were established as well as recommendations for modeling techniques, particularly in the early stages of a component design when accurate structural vibration requirements are needed relatively quickly. The results were compared with long-established guidelines for modeling accuracy of component-loaded panels. A theoretical basis for the Response/Pressure Transfer Function (RPTF) approach provides insight into trends observed in the response predictions and confirmed in the test data. The software developed for the RPTF method allows easy replacement of the diffuse acoustic field with other pressure fields such as a turbulent boundary layer (TBL) model suitable for vehicle ascent.
Structural responses

  19. Huntington's disease : Psychological aspects of predictive testing

    NARCIS (Netherlands)

    Timman, Reinier

    2005-01-01

    Predictive testing for Huntington's disease appears to have long lasting psychological effects. The predictive test for Huntington's disease (HD), a hereditary disease of the nervous system, was introduced in the Netherlands in the late eighties. As adverse consequences of the test were

  20. Functional Testing Protocols for Commercial Building Efficiency Baseline Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Jump, David; Price, Phillip N.; Granderson, Jessica; Sohn, Michael

    2013-09-06

    This document describes procedures for testing and validating the accuracy of proprietary baseline energy modeling software in predicting energy use over a period of interest, such as a month or a year. The procedures are designed according to the methodology used for public-domain baselining software in another LBNL report that was (like the present report) prepared for Pacific Gas and Electric Company: "Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing with Open Source Models and Implications for Proprietary Software Testing Protocols" (referred to here as the "Model Analysis Report"). The test procedure focuses on the quality of the software's predictions rather than on the specific algorithms used to predict energy use. In this way the software vendor is not required to divulge or share proprietary information about how their software works, while enabling stakeholders to assess its performance.
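    Prediction-quality testing of baseline models typically relies on normalized error metrics. A sketch of two common ones, CV(RMSE) and NMBE (widely used for building energy baselines, e.g. in ASHRAE Guideline 14; the exact metrics and sign conventions of this protocol should be taken from the report itself):

```python
def cvrmse(measured, predicted):
    """Coefficient of variation of the RMSE, in percent of mean usage:
    small values mean the model tracks the measured load closely."""
    n = len(measured)
    mean = sum(measured) / n
    rmse = (sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n) ** 0.5
    return 100.0 * rmse / mean

def nmbe(measured, predicted):
    """Normalized mean bias error, in percent; values near zero mean the
    model neither systematically over- nor under-predicts."""
    n = len(measured)
    mean = sum(measured) / n
    return 100.0 * sum(m - p for m, p in zip(measured, predicted)) / (n * mean)
```

    Evaluating such metrics on held-out post-training months is exactly the kind of prediction-focused test that lets a vendor's algorithm stay opaque while its accuracy is still assessed.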

  1. Embryo quality predictive models based on cumulus cells gene expression

    Directory of Open Access Journals (Sweden)

    Devjak R

    2016-06-01

    Full Text Available Since the introduction of in vitro fertilization (IVF) in the clinical practice of infertility treatment, indicators of high-quality embryos have been investigated. Cumulus cells (CC) have a specific gene expression profile according to the developmental potential of the oocyte they surround, and therefore specific gene expression could be used as a biomarker. The aim of our study was to combine more than one biomarker to observe improvement in the prediction of embryo development. In this study, 58 CC samples from 17 IVF patients were analyzed. This study was approved by the Republic of Slovenia National Medical Ethics Committee. Gene expression analysis [quantitative real-time polymerase chain reaction (qPCR)] for five genes, analyzed according to embryo quality level, was performed. Two prediction models were tested for embryo quality prediction: a binary logistic model and a decision tree model. As the main outcome, gene expression levels for the five genes were taken and the area under the curve (AUC) for the two prediction models was calculated. Among the tested genes, AMHR2 and LIF showed a significant expression difference between high-quality and low-quality embryos. These two genes were used for the construction of the two prediction models: the binary logistic model yielded an AUC of 0.72 ± 0.08 and the decision tree model yielded an AUC of 0.73 ± 0.03. The two different prediction models yielded similar predictive power to differentiate between high- and low-quality embryos. In terms of eventual clinical decision making, the decision tree model resulted in easy-to-interpret rules that are highly applicable in clinical practice.

  2. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    Directory of Open Access Journals (Sweden)

    Daan Nieboer

    Full Text Available External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from the development to the external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge the transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in a validation set. We concentrated on two scenarios: (1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and (2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development sets. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development sets were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation populations. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
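    The c-statistic that both the permutation test and the benchmark values build on is simply the probability of concordance between predicted scores and observed outcomes. A minimal implementation (for binary outcomes it equals the ROC AUC):

```python
def c_statistic(y, scores):
    """Concordance (c-) statistic: the probability that a randomly chosen
    event (y = 1) receives a higher score than a randomly chosen
    non-event (y = 0), counting ties as one half."""
    pos = [s for yi, s in zip(y, scores) if yi == 1]
    neg = [s for yi, s in zip(y, scores) if yi == 0]
    pairs = len(pos) * len(neg)
    conc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return conc / pairs
```

    A permutation test of the kind discussed above would repeatedly recompute this statistic after shuffling which rows belong to the development versus validation sets, comparing the observed drop in discrimination against that null distribution.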

  3. Reliable Prediction of Insulin Resistance by a School-Based Fitness Test in Middle-School Children

    Directory of Open Access Journals (Sweden)

    Todd Varness

    2009-01-01

    Full Text Available Objectives. (1) Determine the predictive value of a school-based test of cardiovascular fitness (CVF) for insulin resistance (IR); (2) compare a “school-based” prediction of IR to a “laboratory-based” prediction, using various measures of fitness and body composition. Methods. Middle school children (n = 82) performed the Progressive Aerobic Cardiovascular Endurance Run (PACER), a school-based CVF test, and underwent evaluation of maximal oxygen consumption by treadmill testing (VO2 max), body composition (percent body fat and BMI z score), and IR (derived homeostasis model assessment index [HOMA-IR]). Results. PACER showed a strong correlation with VO2 max/kg (rs = 0.83, P<.001) and with HOMA-IR (rs = −0.60, P<.001). Multivariate regression analysis revealed that a school-based model (using PACER and BMI z score) predicted IR similarly to a laboratory-based model (using VO2 max/kg of lean body mass and percent body fat). Conclusions. The PACER is a valid school-based test of CVF, is predictive of IR, and has a similar relationship to IR when compared to complex laboratory-based testing. Simple school-based measures of childhood fitness (PACER) and fatness (BMI z score) could be used to identify childhood risk for IR and to evaluate interventions.
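For reference, the HOMA-IR index is commonly computed from fasting glucose and insulin with the standard approximation below; the sketch uses hypothetical fasting values, not data from the study:

```python
def homa_ir(glucose_mmol_per_l, insulin_uu_per_ml):
    """Homeostasis model assessment of insulin resistance,
    standard approximation: (glucose [mmol/L] * insulin [uU/mL]) / 22.5."""
    return glucose_mmol_per_l * insulin_uu_per_ml / 22.5

# hypothetical fasting values: glucose 5.0 mmol/L, insulin 9.0 uU/mL
print(homa_ir(5.0, 9.0))  # 2.0
```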

  4. A model for predicting lung cancer response to therapy

    International Nuclear Information System (INIS)

    Seibert, Rebecca M.; Ramsey, Chester R.; Hines, J. Wesley; Kupelian, Patrick A.; Langen, Katja M.; Meeks, Sanford L.; Scaperoth, Daniel D.

    2007-01-01

    Purpose: Volumetric computed tomography (CT) images acquired by image-guided radiation therapy (IGRT) systems can be used to measure tumor response over the course of treatment. Predictive adaptive therapy is a novel treatment technique that uses volumetric IGRT data to actively predict the future tumor response to therapy during the first few weeks of IGRT treatment. The goal of this study was to develop and test a model for predicting lung tumor response during IGRT treatment using serial megavoltage CT (MVCT). Methods and Materials: Tumor responses were measured for 20 lung cancer lesions in 17 patients that were imaged and treated with helical tomotherapy with doses ranging from 2.0 to 2.5 Gy per fraction. Five patients were treated with concurrent chemotherapy, and 1 patient was treated with neoadjuvant chemotherapy. Tumor response to treatment was retrospectively measured by contouring 480 serial MVCT images acquired before treatment. A nonparametric, memory-based locally weighted regression (LWR) model was developed for predicting tumor response using the retrospective tumor response data. This model predicts future tumor volumes and the associated confidence intervals based on limited observations during the first 2 weeks of treatment. The predictive accuracy of the model was tested using a leave-one-out cross-validation technique with the measured tumor responses. Results: The predictive algorithm was used to compare predicted versus measured tumor volume response for all 20 lesions. The average error for the predictions of the final tumor volume was 12%, with the true volumes always bounded by the 95% confidence interval. The greatest model uncertainty occurred near the middle of the course of treatment, where the tumor response relationships were more complex, the model had less information, and the predictors were more varied. The optimal days for measuring the tumor response on the MVCT images were on elapsed Days 1, 2, 5, 9, 11, 12, 17, and 18 during
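Memory-based locally weighted regression, the technique named in the abstract, answers each query with a linear fit whose training points are kernel-weighted by distance to the query. A generic 1-D sketch on a synthetic shrinking-volume curve (not the study's tumor-response implementation):

```python
import numpy as np

def lwr_predict(x_train, y_train, x_query, tau=1.0):
    """Locally weighted linear regression: Gaussian kernel weights of
    bandwidth tau, then a weighted least-squares line through the data."""
    w = np.exp(-((x_train - x_query) ** 2) / (2 * tau ** 2))
    X = np.column_stack([np.ones_like(x_train), x_train])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y_train))
    return beta[0] + beta[1] * x_query

# synthetic "tumor volume" curve: V(t) = 100 * exp(-0.1 t)
t = np.linspace(0, 10, 21)
v = 100 * np.exp(-0.1 * t)
print(round(float(lwr_predict(t, v, 5.0, tau=2.0)), 1))
```

Being memory-based, the method carries no global parametric form; each prediction is recomputed from the stored response curves, which is what lets it also report local confidence intervals.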

  5. Seismic response prediction for cabinets of nuclear power plants by using impact hammer test

    Energy Technology Data Exchange (ETDEWEB)

    Koo, Ki Young [Department of Civil and Structural Engineering, University of Sheffield, Sheffield (United Kingdom); Gook Cho, Sung [JACE KOREA, Gyeonggi-do (Korea, Republic of); Cui, Jintao [Department of Civil Engineering, Kunsan National University, Jeonbuk (Korea, Republic of); Kim, Dookie, E-mail: kim2kie@kunsan.ac.k [Department of Civil Engineering, Kunsan National University, Jeonbuk (Korea, Republic of)

    2010-10-15

    An effective method to predict the seismic response of electrical cabinets of nuclear power plants is developed. This method consists of three steps: (1) identification of the earthquake-equivalent force based on the idealized lumped-mass system of the cabinet, (2) identification of the state-space equation (SSE) model of the system using input-output measurements from impact hammer tests, and (3) seismic response prediction by calculating the output of the identified SSE model under the identified earthquake-equivalent force. A three-dimensional plate model of cabinet structures is presented for the numerical verification of the proposed method. Experimental validation of the proposed method is carried out on a three-story frame which represents the structure of a cabinet. The SSE model of the frame is accurately identified by impact hammer tests, with fitness values over 85% relative to the actual frame characteristics. Shaking table tests are performed using El Centro, Kobe, and Northridge earthquakes as input motions and the acceleration responses are measured. The responses of the model under the three earthquakes are predicted and then compared with the measured responses. The predicted and measured responses agree well with each other, with fitness values of 65-75%. The proposed method is advantageous over other methods that are based on finite element (FE) model updating, since it is free from FE modeling errors. It will be especially effective for cabinet structures in nuclear power plants, where conducting shaking table tests may not be feasible. Limitations of the proposed method are also discussed.
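Step (3) of the method is forward simulation of the identified discrete-time state-space model under the equivalent force. A minimal sketch with hypothetical matrices (the real A, B, C, D come from the impact-hammer identification):

```python
import numpy as np

def simulate_sse(A, B, C, D, u):
    """Output of x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k],
    starting from rest (x[0] = 0)."""
    x = np.zeros(A.shape[0])
    y = []
    for uk in u:
        y.append(float(C @ x + D * uk))
        x = A @ x + B * uk
    return y

# hypothetical identified model and force sequence
A = np.array([[0.5, 0.0], [0.0, 0.2]])
B = np.array([1.0, 1.0])
C = np.array([1.0, 0.0])
D = 0.0
print(simulate_sse(A, B, C, D, [1.0, 0.0, 0.0]))  # [0.0, 1.0, 0.5]
```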

  6. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This paper presents recent thermal model results of the Advanced Stirling Radioisotope Generator (ASRG). The three-dimensional (3D) ASRG thermal power model was built using the Thermal Desktop™ thermal analyzer. The model was correlated with ASRG engineering unit test data and ASRG flight unit predictions from Lockheed Martin's (LM's) I-deas™ TMG thermal model. The auxiliary cooling system (ACS) of the ASRG is also included in the ASRG thermal model. The ACS is designed to remove waste heat from the ASRG so that it can be used to heat spacecraft components. The performance of the ACS is reported under nominal conditions and during a Venus flyby scenario. The results for the nominal case are validated with data from Lockheed Martin. Transient thermal analysis results of ASRG for a Venus flyby with a representative trajectory are also presented. In addition, model results of an ASRG mounted on a Cassini-like spacecraft with a sunshade are presented to show a way to mitigate the high temperatures of a Venus flyby. It was predicted that the sunshade could lower the temperature of the ASRG alternator by 20 °C for the representative Venus flyby trajectory. The 3D model was also modified to predict generator performance after a single Advanced Stirling Convertor failure. The geometry of the Microtherm HT insulation block on the outboard side was modified to match deformation and shrinkage observed during testing of a prototypic ASRG test fixture by LM. Test conditions and test data were used to correlate the model by adjusting the thermal conductivity of the deformed insulation to match the post-heat-dump steady state temperatures. Results for these conditions showed that the performance of the still-functioning inboard ACS was unaffected.

  7. Pulsatile fluidic pump demonstration and predictive model application

    International Nuclear Information System (INIS)

    Morgan, J.G.; Holland, W.D.

    1986-04-01

    Pulsatile fluidic pumps were developed as a remotely controlled method of transferring or mixing feed solutions. A test in the Integrated Equipment Test facility demonstrated the performance of a critically safe geometry pump suitable for use in a 0.1-ton/d heavy metal (HM) fuel reprocessing plant. A predictive model was developed to calculate output flows under a wide range of external system conditions. Predicted and experimental flow rates are compared for both submerged and unsubmerged fluidic pump cases

  8. State-space prediction model for chaotic time series

    Science.gov (United States)

    Alparslan, A. K.; Sayar, M.; Atilgan, A. R.

    1998-08-01

    A simple method for predicting the continuation of a scalar chaotic time series ahead in time is proposed. The false nearest neighbors technique, in connection with time-delayed embedding, is employed to reconstruct the state space. A local forecasting model based upon the time evolution of the topological neighborhood in the reconstructed phase space is suggested. A moving root-mean-square error is utilized to monitor the error along the prediction horizon. The model is tested on the convection amplitude of the Lorenz model. The results indicate that, for approximately 100 cycles of training data, the prediction follows the actual continuation very closely for about six cycles. The proposed model, like other state-space forecasting models, captures the long-term behavior of the system due to the use of spatial neighbors in the state space.
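A bare-bones version of the reconstruction and local-forecasting idea, with a fixed embedding dimension and delay (the paper additionally uses the false nearest neighbors technique to choose the dimension):

```python
import numpy as np

def delay_vectors(series, dim, delay):
    """Time-delay embedding: rows are [s[i], s[i+delay], ..., s[i+(dim-1)*delay]]."""
    span = (dim - 1) * delay
    return np.array([series[i:i + span + 1:delay]
                     for i in range(len(series) - span)])

def nn_forecast(series, dim=2, delay=1):
    """Predict the next value as the successor of the nearest past
    neighbor of the current delay vector."""
    vecs = delay_vectors(np.asarray(series, float), dim, delay)
    query, history = vecs[-1], vecs[:-1]
    i = int(np.argmin(np.linalg.norm(history - query, axis=1)))
    return series[i + (dim - 1) * delay + 1]

# periodic toy series: the continuation of (0, 1) is 2
print(nn_forecast([0, 1, 2, 0, 1, 2, 0, 1]))  # 2
```

A practical model averages over several neighbors rather than one; the single-neighbor version above just shows the mechanism.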

  9. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of this work is to test and assess the model's behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective, that is, whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations about that change. One purpose of this effort is to determine whether model changes are needed in order to improve the model's behavior qualitatively and quantitatively.

  10. Empirical component model to predict the overall performance of heating coils: Calibrations and tests based on manufacturer catalogue data

    International Nuclear Information System (INIS)

    Ruivo, Celestino R.; Angrisani, Giovanni

    2015-01-01

    Highlights: • An empirical model for predicting the performance of heating coils is presented. • Low and high heating capacity cases are used for calibration. • Versions based on several effectiveness correlations are tested. • Catalogue data are considered in approach testing. • The approach is a suitable component model to be used in dynamic simulation tools. - Abstract: A simplified methodology for predicting the overall behaviour of heating coils is presented in this paper. The coil performance is predicted by the ε-NTU method. Usually manufacturers do not provide information about the overall thermal resistance or the geometric details that are required either for the device selection or to apply known empirical correlations for the estimation of the involved thermal resistances. In the present work, heating capacity tables from the manufacturer catalogue are used to calibrate simplified approaches based on the classical theory of heat exchangers, namely the effectiveness method. Only two reference operating cases are required to calibrate each approach. The validity of the simplified approaches is investigated for a relatively high number of operating cases, listed in the technical catalogue of a manufacturer. Four types of coils of three sizes of air handling units are considered. A comparison is conducted between the heating coil capacities provided by the methodology and the values given by the manufacturer catalogue. The results show that several of the proposed approaches are suitable component models to be integrated in dynamic simulation tools of air conditioning systems such as TRNSYS or EnergyPlus
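The ε-NTU calculation at the core of the approach can be sketched as follows. The counter-flow effectiveness relation is shown for simplicity (real coils are typically cross-flow), and the UA and capacity-rate values are hypothetical stand-ins for the quantities calibrated from the catalogue:

```python
import math

def effectiveness_counterflow(ntu, cr):
    """Classical epsilon-NTU relation for a counter-flow exchanger."""
    if abs(cr - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

def heating_capacity(ua, c_water, c_air, t_water_in, t_air_in):
    """Coil capacity Q = eps * C_min * (T_hot,in - T_cold,in)."""
    c_min, c_max = min(c_water, c_air), max(c_water, c_air)
    eps = effectiveness_counterflow(ua / c_min, c_min / c_max)
    return eps * c_min * (t_water_in - t_air_in)

# hypothetical coil: UA = 2000 W/K, C_water = 1000 W/K, C_air = 2000 W/K
print(round(heating_capacity(2000.0, 1000.0, 2000.0, 80.0, 20.0)))  # 46476 W
```

Calibration in the paper works in the opposite direction: two catalogue operating points are used to back out the unknown overall conductance, after which relations like these predict capacity at any other condition.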

  11. Numerical predictions of particle dispersed two-phase flows, using the LSD and SSF models

    International Nuclear Information System (INIS)

    Avila, R.; Cervantes de Gortari, J. (Universidad Nacional Autonoma de Mexico, Mexico City, Facultad de Ingenieria)

    1988-01-01

    A modified version of a numerical scheme suitable for predicting parabolic dispersed two-phase flow is presented. The original version of this scheme was used to predict the test cases discussed during the 3rd workshop on TPF predictions in Belgrade, 1986. In this paper, two particle dispersion models are included which use the Lagrangian approach to predict test cases 1 and 3 of the 4th workshop. For the prediction of test case 1 the Lagrangian Stochastic Deterministic model (LSD) is used, providing acceptably good results for mean and turbulent quantities of both the solid and gas phases; however, the computed void fraction distribution is not in agreement with the measurements at locations away from the inlet, especially near the walls. Test case 3 is predicted using both the LSD and the Stochastic Separated Flow (SSF) models. It was found that the effects of turbulence modulation are large when the LSD model is used, whereas the particles have a negligible influence on the continuous phase if the SSF model is utilized for the computations. Predictions of gas phase properties based on both models agree well with measurements; however, the agreement between calculated and measured solid phase properties is less satisfactory. (orig.)

  12. Consensus models to predict endocrine disruption for all ...

    Science.gov (United States)

    Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). The Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) was intended as a demonstration of the use of predictive computational models on high-throughput screening (HTS) data, including ToxCast and Tox21 assays, to prioritize a large chemical universe of 32,464 unique structures for one specific molecular target, the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity, and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict the common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1677 compounds provided by U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist and antagonist activity. All predictions were evaluated on ToxCast data and on an exte

  13. PREDICTING THE EFFECTIVENESS OF WEB INFORMATION SYSTEMS USING NEURAL NETWORKS MODELING: FRAMEWORK & EMPIRICAL TESTING

    Directory of Open Access Journals (Sweden)

    Dr. Kamal Mohammed Alhendawi

    2018-02-01

    Full Text Available Information systems (IS) assessment studies have still relied on traditional tools such as questionnaires to evaluate dependent variables, especially the effectiveness of systems. Artificial neural networks have recently been accepted as an effective alternative tool for modeling complicated systems and are widely used for forecasting. Very little is known about the use of artificial neural networks (ANN) in predicting IS effectiveness. For this reason, this study is one of the few to investigate the efficiency and capability of using ANN for forecasting user perceptions of IS effectiveness, where MATLAB is utilized for building and training the neural network model. A dataset of 175 subjects collected from an international organization is utilized for ANN learning, where each subject consists of 6 features (5 quality factors as inputs and one Boolean output). 75% of the subjects are used in the training phase. The results provide evidence that the ANN models have reasonable accuracy in forecasting IS effectiveness. For prediction, ANN with the PURELIN (ANNP) and ANN with the TANSIG (ANNTS) transfer functions are used. It is found that both models give reasonable predictions; however, the accuracy of the ANNTS model is better than that of the ANNP model (88.6% and 70.4%, respectively). As the study proposes a new model for predicting IS dependent variables, it could save the considerably high cost that might otherwise be spent on sample data collection in quantitative studies in the fields of science, management, education, arts and others.
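The difference between the two networks is only the hidden-layer transfer function: TANSIG is the hyperbolic tangent and PURELIN the identity. A minimal forward pass with hypothetical weights (a NumPy stand-in for the MATLAB setup, not the study's trained model):

```python
import numpy as np

def forward(x, w1, b1, w2, b2, transfer):
    """Single-hidden-layer network: y = transfer(x W1 + b1) . W2 + b2."""
    return transfer(x @ w1 + b1) @ w2 + b2

purelin = lambda z: z   # identity, as in MATLAB's PURELIN
tansig = np.tanh        # hyperbolic tangent, as in TANSIG

# hypothetical weights: 2 inputs, 2 hidden units, 1 output
x = np.array([1.0, 2.0])
w1, b1 = np.eye(2), np.zeros(2)
w2, b2 = np.ones(2), 0.0
print(float(forward(x, w1, b1, w2, b2, purelin)))           # 3.0
print(round(float(forward(x, w1, b1, w2, b2, tansig)), 3))  # 1.726
```

With the identity transfer the whole network collapses to a linear map, which is why the nonlinear TANSIG variant can fit the perception data more accurately.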

  14. Predictive Models of Li-ion Battery Lifetime (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Smith, K.; Wood, E.; Santhanagopalan, S.; Kim, G.; Shi, Y.; Pesaran, A.

    2014-09-01

    Predictive models of Li-ion battery reliability must consider a multiplicity of electrochemical, thermal and mechanical degradation modes experienced by batteries in application environments. Complicating matters, Li-ion batteries can experience several path dependent degradation trajectories dependent on storage and cycling history of the application environment. Rates of degradation are controlled by factors such as temperature history, electrochemical operating window, and charge/discharge rate. Lacking accurate models and tests, lifetime uncertainty must be absorbed by overdesign and warranty costs. Degradation models are needed that predict lifetime more accurately and with less test data. Models should also provide engineering feedback for next generation battery designs. This presentation reviews both multi-dimensional physical models and simpler, lumped surrogate models of battery electrochemical and mechanical degradation. Models are compared with cell- and pack-level aging data from commercial Li-ion chemistries. The analysis elucidates the relative importance of electrochemical and mechanical stress-induced degradation mechanisms in real-world operating environments. Opportunities for extending the lifetime of commercial battery systems are explored.
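Lumped surrogate models of the kind mentioned often superpose calendar fade (square-root-of-time) and cycling fade (linear in throughput). A sketch with hypothetical coefficients, not a model of any specific chemistry:

```python
import math

def capacity_fade(days, full_cycles, a=0.005, b=1e-4):
    """Fractional capacity loss = a*sqrt(t_days) + b*N_cycles.
    Coefficients a and b are hypothetical; in practice they are fit to
    aging data as functions of temperature, SOC window and C-rate."""
    return a * math.sqrt(days) + b * full_cycles

# e.g. 100 days of storage plus 200 equivalent full cycles
loss = capacity_fade(100, 200)
print(round(1.0 - loss, 3))  # remaining capacity fraction: 0.93
```

The path dependence discussed in the abstract is exactly what such simple superposition misses, which is why physical degradation models are pursued alongside surrogates.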

  15. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, depending on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve
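The deviation between a predicted and an observed epidemic curve can be summarized with simple metrics such as the RMSE of weekly counts and the offset between peak weeks; a generic sketch with toy numbers (not the study's data or its exact error measure):

```python
import numpy as np

def epidemic_error(observed, predicted):
    """RMSE of weekly case counts plus the signed offset (in weeks)
    between the predicted and observed epidemic peaks."""
    obs = np.asarray(observed, float)
    pred = np.asarray(predicted, float)
    rmse = float(np.sqrt(np.mean((obs - pred) ** 2)))
    peak_offset = int(np.argmax(pred)) - int(np.argmax(obs))
    return rmse, peak_offset

obs = [1, 3, 7, 4, 2]
pred = [1, 4, 6, 5, 2]
rmse, offset = epidemic_error(obs, pred)
print(round(rmse, 3), offset)  # 0.775 0
```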

  16. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  17. Action Prediction Allows Hypothesis Testing via Internal Forward Models at 6 Months of Age

    Directory of Open Access Journals (Sweden)

    Gustaf Gredebäck

    2018-03-01

    Full Text Available We propose that action prediction provides a cornerstone in a learning process known as internal forward models. According to this suggestion, infants’ predictions (looking to the mouth of someone moving a spoon upward) will moments later be validated or proven false (the spoon was in fact directed toward a bowl), information that is directly perceived as the distance between the predicted and actual goal. Using an individual difference approach we demonstrate that action prediction correlates with the tendency to react with surprise when social interactions are not acted out as expected (action evaluation). This association is demonstrated across tasks and in a large sample (n = 118) at 6 months of age. These results provide the first indication that infants might rely on internal forward models to structure their social world. Additional analysis, consistent with prior work and assumptions from embodied cognition, demonstrates that the latency of infants’ action predictions correlates with the infant’s own manual proficiency.

  18. Post-test comparison of thermal-hydrologic measurements and numerical predictions for the in situ single heater test, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Ballard, S.; Francis, N.D.; Sobolik, S.R.; Finley, R.E.

    1998-01-01

    The Single Heater Test (SHT) is a sixteen-month-long heating and cooling experiment begun in August, 1996, located underground within the unsaturated zone near the potential geologic repository at Yucca Mountain, Nevada. During the 9 month heating phase of the test, roughly 15 m3 of rock were raised to temperatures exceeding 100 °C. In this paper, temperatures measured in sealed boreholes surrounding the heater are compared to temperatures predicted by 3D thermal-hydrologic calculations performed with a finite difference code. Three separate model runs using different values of bulk rock permeability (4 microdarcy to 5.2 darcy) yielded significantly different predicted temperatures and temperature distributions. All the models differ from the data, suggesting that to accurately model the thermal-hydrologic behavior of the SHT, the Equivalent Continuum Model (ECM), the conceptual basis for dealing with the fractured porous medium in the numerical predictions, should be discarded in favor of more sophisticated approaches

  19. Pretest Predictions for Phase II Ventilation Tests

    International Nuclear Information System (INIS)

    Yiming Sun

    2001-01-01

    The objective of this calculation is to predict the temperatures of the ventilating air, waste package surface, and concrete pipe walls that will be developed during the Phase II ventilation tests involving various test conditions. The results will be used as inputs to validating numerical approach for modeling continuous ventilation, and be used to support the repository subsurface design. The scope of the calculation is to identify the physical mechanisms and parameters related to thermal response in the Phase II ventilation tests, and describe numerical methods that are used to calculate the effects of continuous ventilation. The calculation is limited to thermal effect only. This engineering work activity is conducted in accordance with the ''Technical Work Plan for: Subsurface Performance Testing for License Application (LA) for Fiscal Year 2001'' (CRWMS M and O 2000d). This technical work plan (TWP) includes an AP-2.21Q, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', activity evaluation (CRWMS M and O 2000d, Addendum A) that has determined this activity is subject to the YMP quality assurance (QA) program. The calculation is developed in accordance with the AP-3.12Q procedure, ''Calculations''. Additional background information regarding this activity is contained in the ''Development Plan for Ventilation Pretest Predictive Calculation'' (DP) (CRWMS M and O 2000a)

  20. Forming limit curves of DP600 determined in high-speed Nakajima tests and predicted by two different strain-rate-sensitive models

    Science.gov (United States)

    Weiß-Borkowski, Nathalie; Lian, Junhe; Camberg, Alan; Tröster, Thomas; Münstermann, Sebastian; Bleck, Wolfgang; Gese, Helmut; Richter, Helmut

    2018-05-01

    Determination of forming limit curves (FLC) to describe the multi-axial forming behaviour is possible via either experimental measurements or theoretical calculations. In case of theoretical determination, different models are available and some of them consider the influence of strain rate in the quasi-static and dynamic strain rate regimes. Consideration of the strain rate effect is necessary as many material characteristics, such as yield strength and failure strain, are affected by loading speed. In addition, the start of instability and necking depends not only on the strain hardening coefficient but also on the strain rate sensitivity parameter. Therefore, the strain rate dependency of materials for both plasticity and failure behaviour is taken into account in crash simulations for strain rates up to 1000 s-1, and FLC can be used for the description of the material's instability behaviour under multi-axial loading. In this context, due to the strain rate dependency of the material behaviour, an extrapolation of the quasi-static FLC to dynamic loading conditions is not reliable. Therefore, experimental high-speed Nakajima tests or theoretical models shall be used to determine the FLC at high strain rates. In this study, two theoretical models for determination of FLC at high strain rates and results of experimental high-speed Nakajima tests for a DP600 are presented. One of the theoretical models is the numerical algorithm CRACH, part of the modular material and failure model MF GenYld+CrachFEM 4.2, which is based on an initial imperfection. Furthermore, the extended modified maximum force criterion considering the strain rate effect is also used to predict the FLC. These two models are calibrated by quasi-static and dynamic uniaxial tensile tests and bulge tests. The predictions for the quasi-static and dynamic FLC by both models are presented and compared with the experimental results.

  1. Delayed hydride cracking: theoretical model testing to predict cracking velocity

    International Nuclear Information System (INIS)

    Mieza, Juan I.; Vigna, Gustavo L.; Domizzi, Gladys

    2009-01-01

    Pressure tubes of CANDU nuclear reactors, like any other component manufactured from Zr alloys, are prone to delayed hydride cracking (DHC). That is why it is important to be able to predict the cracking velocity during the component lifetime from easily measured parameters, such as hydrogen concentration and mechanical and microstructural properties. Two of the theoretical models reported in the literature to calculate the DHC velocity were chosen and combined; using the appropriate variables allowed a comparison with experimental results from samples of Zr-2.5 Nb tubes with different mechanical and structural properties. In addition, velocities measured by other authors in irradiated materials could be reproduced using the model described above. (author)

  2. A multivariate model for predicting segmental body composition.

    Science.gov (United States)

    Tian, Simiao; Mioche, Laurence; Denis, Jean-Baptiste; Morio, Béatrice

    2013-12-01

    The aims of the present study were to propose a multivariate model for predicting simultaneously body, trunk and appendicular fat and lean masses from easily measured variables and to compare its predictive capacity with that of the available univariate models that predict body fat percentage (BF%). The dual-energy X-ray absorptiometry (DXA) dataset (52% men and 48% women) with White, Black and Hispanic ethnicities (1999-2004, National Health and Nutrition Examination Survey) was randomly divided into three sub-datasets: a training dataset (TRD), a test dataset (TED) and a validation dataset (VAD), comprising 3835, 1917 and 1917 subjects, respectively. For each sex, several multivariate prediction models were fitted from the TRD using age, weight, height and possibly waist circumference. The most accurate model was selected using the TED and then applied to the VAD and a French DXA dataset (French DB) (526 men and 529 women) to assess the prediction accuracy in comparison with that of five published univariate models, for which adjusted formulas were re-estimated using the TRD. Waist circumference was found to improve the prediction accuracy, especially in men. For BF%, the standard error of prediction (SEP) values were 3.26 (3.75)% for men and 3.47 (3.95)% for women in the VAD (French DB), as good as those of the adjusted univariate models. Moreover, the SEP values for the prediction of body and appendicular lean masses ranged from 1.39 to 2.75 kg for both sexes. The prediction accuracy was best for age < 65 years, BMI < 30 kg/m2 and the Hispanic ethnicity. The application of our multivariate model to large populations could be useful to address various public health issues.

  3. Impact of relationships between test and training animals and among training animals on reliability of genomic prediction.

    Science.gov (United States)

    Wu, X; Lund, M S; Sun, D; Zhang, Q; Su, G

    2015-10-01

    One of the factors affecting the reliability of genomic prediction is the relationship among the animals of interest. This study investigated the reliability of genomic prediction in various scenarios with regard to the relationship between test and training animals, and among animals within the training data set. Different training data sets were generated from EuroGenomics data and a group of Nordic Holstein bulls (born in 2005 and afterwards) as a common test data set. Genomic breeding values were predicted using a genomic best linear unbiased prediction model and a Bayesian mixture model. The results showed that a closer relationship between test and training animals led to a higher reliability of genomic predictions for the test animals, while a closer relationship among training animals resulted in a lower reliability. In addition, the Bayesian mixture model in general led to a slightly higher reliability of genomic prediction, especially for the scenario of distant relationships between training and test animals. Therefore, to prevent a decrease in reliability, constant updates of the training population with animals from more recent generations are required. Moreover, a training population consisting of less-related animals is favourable for reliability of genomic prediction. © 2015 Blackwell Verlag GmbH.
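
The genomic BLUP prediction described above can be sketched with a simulated genotype matrix and a simplified VanRaden-style relationship matrix; the sizes, variance ratio and data below are all invented for illustration, and no Bayesian mixture component is included:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 200 training animals, 50 test animals, 500 SNPs
n_train, n_test, m = 200, 50, 500
Z = rng.binomial(2, 0.3, (n_train + n_test, m)).astype(float)
Z -= Z.mean(axis=0)                       # centre genotype codes

# Simulate true breeding values from 50 causal SNPs (high heritability)
beta = np.zeros(m)
beta[:50] = rng.normal(0, 0.1, 50)
g_true = Z @ beta
y = g_true + rng.normal(0, 0.5 * g_true.std(), n_train + n_test)

# Genomic relationship matrix (simplified VanRaden form)
G = Z @ Z.T / m

tr, te = slice(0, n_train), slice(n_train, None)
lam = 0.25                                # assumed residual-to-genetic variance ratio
alpha = np.linalg.solve(G[tr, tr] + lam * np.eye(n_train), y[tr] - y[tr].mean())
g_hat_test = G[te, tr] @ alpha            # project to test animals via G[test, train]

# Accuracy: correlation between predicted and simulated breeding values
r = np.corrcoef(g_hat_test, g_true[te])[0, 1]
print(round(r, 2))
```

The `G[te, tr]` projection makes the study's point concrete: prediction for a test animal is a relationship-weighted sum over training animals, so weak test-to-training relationships shrink the prediction toward zero.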

  4. Testing mechanistic models of growth in insects.

    Science.gov (United States)

    Maino, James L; Kearney, Michael R

    2015-11-22

    Insects are typified by their small size, large numbers, impressive reproductive output and rapid growth. However, insect growth is not simply rapid; rather, insects follow a qualitatively distinct trajectory from that of many other animals. Here we present a mechanistic growth model for insects and show that increasing specific assimilation during the growth phase can explain the near-exponential growth trajectory of insects. The presented model is tested against growth data on 50 insects, and compared against other mechanistic growth models. Unlike the other mechanistic models, our growth model predicts energy reserves per biomass to increase with age, which implies a higher production efficiency and energy density of biomass in later instars. These predictions are tested against data compiled from the literature, confirming that insects increase their production efficiency (by 24 percentage points) and energy density (by 4 J mg⁻¹) between hatching and the attainment of full size. The model suggests that insects achieve greater production efficiencies and enhanced growth rates by increasing specific assimilation and increasing energy reserves per biomass, which are less costly to maintain than structural biomass. Our findings illustrate how the explanatory and predictive power of mechanistic growth models comes from their grounding in underlying biological processes. © 2015 The Author(s).
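
The contrast at the heart of the abstract, assimilation that scales with mass giving near-exponential growth versus surface-limited assimilation giving decelerating growth, can be illustrated with a toy integration. This is not the authors' full model; the rate forms and constants are invented:

```python
import numpy as np

# Toy contrast: surface-limited assimilation (dm/dt = a*m^(2/3) - b*m)
# decelerates toward an asymptote, whereas assimilation that scales with
# mass (dm/dt = a*m - b*m) gives exponential growth, the trajectory
# typical of insect larvae.
def grow(rate, m0=1.0, dt=0.01, steps=500):
    m = m0
    out = [m]
    for _ in range(steps):
        m += rate(m) * dt       # forward-Euler step
        out.append(m)
    return np.array(out)

a, b = 1.0, 0.2                 # invented assimilation and maintenance rates
surface_limited = grow(lambda m: a * m**(2 / 3) - b * m)
mass_scaled = grow(lambda m: a * m - b * m)

# Under mass-scaled assimilation the specific growth rate (dm/dt)/m stays
# constant, so mass at the end of the interval is far larger.
print(mass_scaled[-1], surface_limited[-1])
```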

  5. An intermittency model for predicting roughness induced transition

    Science.gov (United States)

    Ge, Xuan; Durbin, Paul

    2014-11-01

    An extended model for roughness-induced transition is proposed based on an intermittency transport equation for RANS modeling formulated in local variables. To predict roughness effects in the fully turbulent boundary layer, published boundary conditions for k and ω are used, which depend on the equivalent sand grain roughness height, and account for the effective displacement of wall distance origin. Similarly in our approach, wall distance in the transition model for smooth surfaces is modified by an effective origin, which depends on roughness. Flat plate test cases are computed to show that the proposed model is able to predict the transition onset in agreement with a data correlation of transition location versus roughness height, Reynolds number, and inlet turbulence intensity. Experimental data for a turbine cascade are compared with the predicted results to validate the applicability of the proposed model. Supported by NSF Award Number 1228195.

  6. Advanced Models and Controls for Prediction and Extension of Battery Lifetime (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Smith, K.; Wood, E.; Santhanagopalan, S.; Kim, G.; Pesaran, A.

    2014-02-01

    Predictive models of capacity and power fade must consider a multiplicity of degradation modes experienced by Li-ion batteries in the automotive environment. Lacking accurate models and tests, lifetime uncertainty must presently be absorbed by overdesign and excess warranty costs. To reduce these costs and extend life, degradation models are under development that predict lifetime more accurately and with less test data. The lifetime models provide engineering feedback for cell, pack and system designs and are being incorporated into real-time control strategies.

  7. Using a Gravity Model to Predict Circulation in a Public Library System.

    Science.gov (United States)

    Ottensmann, John R.

    1995-01-01

    Describes the development of a gravity model based upon principles of spatial interaction to predict the circulation of libraries in the Indianapolis-Marion County Public Library (Indiana). The model effectively predicted past circulation figures and was tested by predicting future library circulation, particularly for a new branch library.…
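
A gravity model of this kind allocates a tract's borrowing across branches in proportion to branch attractiveness divided by a power of distance. The sketch below is illustrative only, not Ottensmann's fitted model; the sizes, distances and decay exponent are made up:

```python
# Gravity model of spatial interaction: the share of a tract's circulation
# going to branch j is (A_j / d_j^decay) normalised over all branches.
def circulation_share(attractiveness, distances, decay=2.0):
    weights = [a / d**decay for a, d in zip(attractiveness, distances)]
    total = sum(weights)
    return [w / total for w in weights]

# Three branches seen from one census tract (collection sizes and km invented)
shares = circulation_share([50_000, 120_000, 30_000], [1.5, 4.0, 2.0])
print([round(s, 2) for s in shares])
```

Note how the largest branch does not capture the largest share: distance decay lets a small nearby branch dominate, which is exactly what makes such a model useful for siting a new branch.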

  8. Creating a simulation model of software testing using Simulink package

    Directory of Open Access Journals (Sweden)

    V. M. Dubovoi

    2016-12-01

    Determining a model of the software testing process that allows prediction of both the whole process and its specific stages is a real need in the IT industry, and the article focuses on solving this problem. The aim of the article is to predict the duration and improve the quality of software testing. Analysis of the software testing process shows that it can be treated as a branched cyclic technological process, because it is cyclical with decision-making at control operations. The investigation builds on the authors' previous work and a software testing process method based on a Markov model. The proposed method enables prediction for each software module, which leads to better decision-making for each controlled suboperation of the whole process. A Simulink simulation model demonstrates the implementation and verification of the proposed technique. The results of the research have been put into practice in the IT industry.
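
The cyclic test-fix process described above is naturally modelled as an absorbing Markov chain, whose fundamental matrix gives expected stage counts before release. The chain below is a deliberately tiny stand-in with invented transition probabilities, not the article's model:

```python
import numpy as np

# Toy absorbing Markov chain for a test-fix cycle.
# Transient states: 0 = testing, 1 = fixing; absorbing state: 2 = released.
P = np.array([
    [0.0, 0.6, 0.4],   # after testing: 60% back to fixing, 40% released
    [1.0, 0.0, 0.0],   # after fixing: always retest
    [0.0, 0.0, 1.0],   # released is absorbing
])

Q = P[:2, :2]                         # transitions among transient states
N = np.linalg.inv(np.eye(2) - Q)      # fundamental matrix N = (I - Q)^-1
expected_steps = N.sum(axis=1)        # expected visits to transient states

# Starting from "testing", the expected number of test/fix stages before release:
print(expected_steps[0])
```

Sanity check: with a 0.4 release probability per test, the chain expects 2.5 tests and 1.5 fixes, i.e. 4 stages in total, which is what the fundamental matrix returns.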

  9. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets from actual buildings.
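
The surrogate-data idea can be demonstrated end to end with toy physics (this is an illustration of the concept, not NREL's tooling): a "truth" model with a known parameter generates synthetic bills, a calibration routine tunes the parameter against those bills, and all three figures of merit can then be scored exactly:

```python
# Toy building model: monthly heating energy = UA * heating degree-days
def bills(ua, months_hdd=(600, 500, 300, 100)):
    return [ua * h for h in months_hdd]

true_ua = 0.25                    # the "true" input parameter (known here)
observed = bills(true_ua)         # surrogate utility bill data

# Brute-force "calibration": pick the UA that minimises squared bill error
candidates = [i / 1000 for i in range(100, 400)]
ua_cal = min(candidates,
             key=lambda u: sum((b - o) ** 2 for b, o in zip(bills(u), observed)))

# Retrofit measure: 20% UA reduction; compare predicted vs true savings
true_savings = sum(observed) - sum(bills(true_ua * 0.8))
pred_savings = sum(bills(ua_cal)) - sum(bills(ua_cal * 0.8))
print(ua_cal, pred_savings - true_savings)
```

Because the truth is synthetic, figure of merit 2 (closure on the true parameter) is directly checkable, which is impossible with real buildings.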

  10. Predicting acid dew point with a semi-empirical model

    International Nuclear Information System (INIS)

    Xiang, Baixiang; Tang, Bin; Wu, Yuxin; Yang, Hairui; Zhang, Man; Lu, Junfu

    2016-01-01

    Highlights: • Previous semi-empirical models are systematically studied. • An improved thermodynamic correlation is derived. • A semi-empirical prediction model is proposed. • The proposed semi-empirical model is validated. - Abstract: Decreasing the temperature of the exhaust flue gas in boilers is one of the most effective ways to improve thermal efficiency and electrostatic precipitator efficiency and to decrease the water consumption of the desulfurization tower. However, when this temperature falls below the acid dew point, fouling and corrosion occur on the heating surfaces in the second pass of the boiler, so the ability to predict the acid dew point accurately is essential. By investigating previous models for acid dew point prediction, an improved thermodynamic correlation between the acid dew point and its influencing factors is first derived. A semi-empirical prediction model is then proposed, validated against both field-test and experimental data, and compared with the previous models.
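
The paper's own correlation is not reproduced in the abstract. As a stand-in for what such a semi-empirical formula looks like, the sketch below uses the widely cited Verhoff-Banchero correlation for the sulfuric acid dew point of flue gas; treat the constants and form as an assumption of this illustration, not the model proposed above:

```python
import math

# Verhoff-Banchero-type correlation: T in kelvin, partial pressures in mmHg.
#   1000/T = 2.276 - 0.0294 ln(pH2O) - 0.0858 ln(pSO3) + 0.0062 ln(pH2O) ln(pSO3)
def acid_dew_point_K(p_h2o_mmHg, p_so3_mmHg):
    a, b = math.log(p_h2o_mmHg), math.log(p_so3_mmHg)
    inv_T = 1e-3 * (2.276 - 0.0294 * a - 0.0858 * b + 0.0062 * a * b)
    return 1.0 / inv_T

# Example flue gas: 10 vol% water vapour, 10 ppm SO3, atmospheric pressure
T = acid_dew_point_K(0.10 * 760, 10e-6 * 760)
print(round(T - 273.15, 1))  # dew point in degrees Celsius
```

For typical coal-fired flue gas this lands well above the water dew point, which is exactly why exhaust temperature cannot be lowered arbitrarily.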

  11. Model Test Bed for Evaluating Wave Models and Best Practices for Resource Assessment and Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Neary, Vincent Sinclair [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies; Yang, Zhaoqing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Coastal Sciences Division; Wang, Taiping [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Coastal Sciences Division; Gunawan, Budi [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies; Dallman, Ann Renee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies

    2016-03-01

    A wave model test bed is established to benchmark, test and evaluate spectral wave models and modeling methodologies (i.e., best practices) for predicting the wave energy resource parameters recommended by the International Electrotechnical Commission, IEC TS 62600-101Ed. 1.0 ©2015. Among other benefits, the model test bed can be used to investigate the suitability of different models, specifically what source terms should be included in spectral wave models under different wave climate conditions and for different classes of resource assessment. The overarching goal is to use these investigations to provide industry guidance for model selection and modeling best practices depending on the wave site conditions and desired class of resource assessment. Modeling best practices are reviewed, and limitations and knowledge gaps in predicting wave energy resource parameters are identified.

  12. Developing and Validating a Predictive Model for Stroke Progression

    Directory of Open Access Journals (Sweden)

    L.E. Craig

    2011-12-01

    Background: Progression is believed to be a common and important complication in acute stroke, and has been associated with increased mortality and morbidity. Reliable identification of predictors of early neurological deterioration could potentially benefit routine clinical care. The aim of this study was to identify predictors of early stroke progression using two independent patient cohorts. Methods: Two patient cohorts were used for this study – the first cohort formed the training data set, which included consecutive patients admitted to an urban teaching hospital between 2000 and 2002, and the second cohort formed the test data set, which included patients admitted to the same hospital between 2003 and 2004. A standard definition of stroke progression was used. The first cohort (n = 863) was used to develop the model. Variables that were statistically significant (p < 0.1) were entered into the model and then removed (p > 0.1) in turn. The second cohort (n = 216) was used to test the performance of the model. The performance of the predictive model was assessed in terms of both calibration and discrimination. Multiple imputation methods were used for dealing with the missing values. Results: Variables shown to be significant predictors of stroke progression were conscious level, history of coronary heart disease, presence of hyperosmolarity, CT lesion, living alone on admission, Oxfordshire Community Stroke Project classification, presence of pyrexia and smoking status. The model appears to have reasonable discriminative properties [the median receiver-operating characteristic curve value was 0.72 (range 0.72–0.73)] and to fit well with the observed data, as indicated by the high goodness-of-fit p value [the median p value from the Hosmer-Lemeshow test was 0.90 (range 0.50–0.92)]. Conclusion: The predictive model developed in this study contains variables that can be easily collected in practice, increasing its usability in clinical practice. Using this analysis approach, the discrimination and calibration of the predictive model appear acceptable.

  13. Developing and validating a predictive model for stroke progression.

    Science.gov (United States)

    Craig, L E; Wu, O; Gilmour, H; Barber, M; Langhorne, P

    2011-01-01

    Progression is believed to be a common and important complication in acute stroke, and has been associated with increased mortality and morbidity. Reliable identification of predictors of early neurological deterioration could potentially benefit routine clinical care. The aim of this study was to identify predictors of early stroke progression using two independent patient cohorts. Two patient cohorts were used for this study - the first cohort formed the training data set, which included consecutive patients admitted to an urban teaching hospital between 2000 and 2002, and the second cohort formed the test data set, which included patients admitted to the same hospital between 2003 and 2004. A standard definition of stroke progression was used. The first cohort (n = 863) was used to develop the model. Variables that were statistically significant (p < 0.1) were entered into the model and then removed (p > 0.1) in turn. The second cohort (n = 216) was used to test the performance of the model. The performance of the predictive model was assessed in terms of both calibration and discrimination. Multiple imputation methods were used for dealing with the missing values. Variables shown to be significant predictors of stroke progression were conscious level, history of coronary heart disease, presence of hyperosmolarity, CT lesion, living alone on admission, Oxfordshire Community Stroke Project classification, presence of pyrexia and smoking status. The model appears to have reasonable discriminative properties [the median receiver-operating characteristic curve value was 0.72 (range 0.72-0.73)] and to fit well with the observed data, as indicated by the high goodness-of-fit p value [the median p value from the Hosmer-Lemeshow test was 0.90 (range 0.50-0.92)]. The predictive model developed in this study contains variables that can be easily collected in practice, increasing its usability in clinical practice. Using this analysis approach, the discrimination and calibration of the predictive model appear acceptable.
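
The discrimination assessment used in this study, fitting a logistic model and summarising it with the area under the ROC curve, can be sketched on synthetic data (this is not the stroke cohort; the predictors, coefficients and sample size are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in cohort: intercept plus two continuous predictors
n = 800
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([-1.0, 1.2, -0.8])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y = rng.random(n) < sigmoid(X @ beta_true)   # binary outcome ("progressed")

# Fit logistic regression by Newton-Raphson (IRLS)
beta = np.zeros(3)
for _ in range(25):
    p = sigmoid(X @ beta)
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))

# Discrimination: AUC via the pairwise-comparison (rank-sum) identity
p = sigmoid(X @ beta)
pos, neg = p[y], p[~y]
auc = (pos[:, None] > neg[None, :]).mean() + 0.5 * (pos[:, None] == neg[None, :]).mean()
print(round(auc, 2))
```

Calibration would additionally bin the predicted probabilities and compare observed versus expected event counts per bin, which is what the Hosmer-Lemeshow statistic cited above formalises.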

  14. Developing and Validating a Predictive Model for Stroke Progression

    Science.gov (United States)

    Craig, L.E.; Wu, O.; Gilmour, H.; Barber, M.; Langhorne, P.

    2011-01-01

    Background Progression is believed to be a common and important complication in acute stroke, and has been associated with increased mortality and morbidity. Reliable identification of predictors of early neurological deterioration could potentially benefit routine clinical care. The aim of this study was to identify predictors of early stroke progression using two independent patient cohorts. Methods Two patient cohorts were used for this study – the first cohort formed the training data set, which included consecutive patients admitted to an urban teaching hospital between 2000 and 2002, and the second cohort formed the test data set, which included patients admitted to the same hospital between 2003 and 2004. A standard definition of stroke progression was used. The first cohort (n = 863) was used to develop the model. Variables that were statistically significant (p < 0.1) were entered into the model and then removed (p > 0.1) in turn. The second cohort (n = 216) was used to test the performance of the model. The performance of the predictive model was assessed in terms of both calibration and discrimination. Multiple imputation methods were used for dealing with the missing values. Results Variables shown to be significant predictors of stroke progression were conscious level, history of coronary heart disease, presence of hyperosmolarity, CT lesion, living alone on admission, Oxfordshire Community Stroke Project classification, presence of pyrexia and smoking status. The model appears to have reasonable discriminative properties [the median receiver-operating characteristic curve value was 0.72 (range 0.72–0.73)] and to fit well with the observed data, as indicated by the high goodness-of-fit p value [the median p value from the Hosmer-Lemeshow test was 0.90 (range 0.50–0.92)]. Conclusion The predictive model developed in this study contains variables that can be easily collected in practice, increasing its usability in clinical practice. Using this analysis approach, the discrimination and calibration of the predictive model appear acceptable.

  15. A prediction model for the grade of liver fibrosis using magnetic resonance elastography.

    Science.gov (United States)

    Mitsuka, Yusuke; Midorikawa, Yutaka; Abe, Hayato; Matsumoto, Naoki; Moriyama, Mitsuhiko; Haradome, Hiroki; Sugitani, Masahiko; Tsuji, Shingo; Takayama, Tadatoshi

    2017-11-28

    Liver stiffness measurement (LSM) has recently become available for assessment of liver fibrosis. We aimed to develop a prediction model for liver fibrosis using clinical variables, including LSM. We performed a prospective study to compare liver fibrosis grade with fibrosis score. LSM was measured using magnetic resonance elastography in 184 patients who underwent liver resection, and liver fibrosis grade was diagnosed histologically after surgery. Using the prediction model established in the training group, we validated the classification accuracy in the independent test group. First, we determined a cut-off value for stratifying fibrosis grade using LSM in 122 patients in the training group, and correctly diagnosed the fibrosis grades of 62 patients in the test group with a total accuracy of 69.3%. Next, on least absolute shrinkage and selection operator analysis in the training group, LSM (r = 0.687), ICGR15, and platelet count were selected to construct the prediction model. This prediction model applied to the test group correctly diagnosed 32 of 36 (88.8%) Grade I (F0 and F1) patients, 13 of 18 (72.2%) Grade II (F2 and F3) patients, and 7 of 8 (87.5%) Grade III (F4) patients, with a total accuracy of 83.8%. The prediction model based on LSM, ICGR15, and platelet count can accurately and reproducibly predict liver fibrosis grade.

  16. Predictive performance for population models using stochastic differential equations applied on data from an oral glucose tolerance test.

    Science.gov (United States)

    Møller, Jonas B; Overgaard, Rune V; Madsen, Henrik; Hansen, Torben; Pedersen, Oluf; Ingwersen, Steen H

    2010-02-01

    Several articles have investigated stochastic differential equations (SDEs) in PK/PD models, but few have quantitatively investigated the benefits to predictive performance of models based on real data. Estimation of first-phase insulin secretion, which reflects beta-cell function, using models of the OGTT is a difficult problem in need of further investigation. The present work aimed at investigating the power of SDEs to predict the first-phase insulin secretion (AIR 0-8) in the IVGTT based on parameters obtained from the minimal model of the OGTT, published by Breda et al. (Diabetes 50(1):150-158, 2001). In total 174 subjects underwent both an OGTT and a tolbutamide-modified IVGTT. Estimation of parameters in the oral minimal model (OMM) was performed using the FOCE method in NONMEM VI on insulin and C-peptide measurements. The suggested SDE models were based on a continuous AR(1) process, i.e. the Ornstein-Uhlenbeck process, and the extended Kalman filter was implemented in order to estimate the parameters of the models. Inclusion of the Ornstein-Uhlenbeck (OU) process improved the description of the variation in the data, as measured by the autocorrelation function (ACF) of one-step prediction errors. A main result was that application of SDE models improved the correlation between the individual first-phase indexes obtained from the OGTT and AIR 0-8 (r = 0.36 to r = 0.49 and r = 0.32 to r = 0.47 with C-peptide and insulin measurements, respectively). In addition to the increased correlation, the indexes obtained using the SDE models also more correctly assessed the properties of the first-phase indexes obtained from the IVGTT. In general it is concluded that the presented SDE approach not only caused the autocorrelation of errors to decrease but also improved the estimation of clinical measures obtained from the glucose tolerance tests. Since the estimation time of the extended models was not heavily increased compared with the basic models, the applied method appears practical for routine use.
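
The continuous AR(1) noise model named above, the Ornstein-Uhlenbeck process, has an exact discretisation that makes its autocorrelation structure easy to verify numerically. The parameters below are arbitrary illustration values, not those estimated in the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Exact discretisation of the Ornstein-Uhlenbeck process
#   dX = -theta * X dt + sigma dW,
# which at a fixed sampling interval dt is an AR(1) process with
# coefficient a = exp(-theta * dt).
theta, sigma, dt, n = 1.0, 0.5, 0.1, 20_000
a = np.exp(-theta * dt)
s = sigma * np.sqrt((1 - a**2) / (2 * theta))   # exact one-step noise sd
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = a * x[t - 1] + s * rng.normal()

# The lag-1 autocorrelation should match a = exp(-theta*dt) ~ 0.905
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(round(acf1, 2))
```

This correlated residual is what replaces the usual independent error term in the SDE models, which is why the one-step prediction errors' ACF flattens once the OU component absorbs the serial correlation.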

  17. Validation of a Quantitative Structure-Activity Relationship (QSAR) Model for Photosensitizer Activity Prediction

    Directory of Open Access Journals (Sweden)

    Sharifuddin M. Zain

    2011-11-01

    Photodynamic therapy is a relatively new treatment method for cancer which utilizes a combination of oxygen, a photosensitizer and light to generate reactive singlet oxygen that eradicates tumors via direct cell killing, vasculature damage and engagement of the immune system. Most photosensitizers that are in clinical and pre-clinical assessment, or are already approved for clinical use, are mainly based on cyclic tetrapyrroles. In an attempt to discover new effective photosensitizers, we report the use of the quantitative structure-activity relationship (QSAR) method to develop a model that could correlate the structural features of cyclic tetrapyrrole-based compounds with their photodynamic therapy (PDT) activity. In this study, a set of 36 porphyrin derivatives was used in the model development, with 24 of these compounds in the training set and the remaining 12 compounds in the test set. The development of the QSAR model involved the use of the multiple linear regression analysis (MLRA) method. Based on this method, r2, r2(CV) and r2(prediction) values of 0.87, 0.71 and 0.70 were obtained. The QSAR model was also employed to predict the experimental compounds in an external test set. This external test set comprises 20 porphyrin-based compounds with experimental IC50 values ranging from 0.39 µM to 7.04 µM. The model thus showed good correlative and predictive ability, with a predictive correlation coefficient (r2 prediction) for the external test set of 0.52. The developed QSAR model was used to identify some compounds from this external test set as new lead photosensitizers.

  18. Prognostic durability of liver fibrosis tests and improvement in predictive performance for mortality by combining tests.

    Science.gov (United States)

    Bertrais, Sandrine; Boursier, Jérôme; Ducancelle, Alexandra; Oberti, Frédéric; Fouchard-Hubert, Isabelle; Moal, Valérie; Calès, Paul

    2017-06-01

    There is currently no recommended time interval between noninvasive fibrosis measurements for monitoring chronic liver diseases. We determined how long a single liver fibrosis evaluation may accurately predict mortality, and assessed whether combining tests improves prognostic performance. We included 1559 patients with chronic liver disease and available baseline liver stiffness measurement (LSM) by Fibroscan, aspartate aminotransferase to platelet ratio index (APRI), FIB-4, Hepascore, and FibroMeter V2G. Median follow-up was 2.8 years, during which 262 (16.8%) patients died, with 115 liver-related deaths. All fibrosis tests were able to predict mortality, although APRI (and FIB-4 for liver-related mortality) showed lower overall discriminative ability than the other tests (statistically significant differences in Harrell's C-index). The durability of prognostic accuracy depended on the baseline fibrosis level, e.g. 1 year in patients with significant fibrosis. Patients were randomly divided into training and testing sets. In the training set, blood tests and LSM were independent predictors of all-cause mortality. The best-fit multivariate model included age, sex, LSM, and FibroMeter V2G with C-index = 0.834 (95% confidence interval, 0.803-0.862). The prognostic model for liver-related mortality included the same covariates with C-index = 0.868 (0.831-0.902). In the testing set, the multivariate models had higher prognostic accuracy than FibroMeter V2G or LSM alone for all-cause mortality, and than FibroMeter V2G alone for liver-related mortality. The prognostic durability of a single baseline fibrosis evaluation depends on the liver fibrosis level. Combining LSM with a blood fibrosis test improves mortality risk assessment. © 2016 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.

  19. A Model for Predicting Student Performance on High-Stakes Assessment

    Science.gov (United States)

    Dammann, Matthew Walter

    2010-01-01

    This research study examined the use of student achievement on reading and math state assessments to predict success on the science state assessment. Multiple regression analysis was utilized to test the prediction for all students in grades 5 and 8 in a mid-Atlantic state. The prediction model developed from the analysis explored the combined…

  20. PNN-based Rockburst Prediction Model and Its Applications

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2017-07-01

    Rock burst is one of the main engineering geological problems that significantly threaten the safety of construction. Prediction of rock burst is always an important issue concerning the safety of workers and equipment in tunnels. In this paper, a novel PNN-based rock burst prediction model is proposed to determine whether rock burst will happen in underground rock projects and how intense it will be. The probabilistic neural network (PNN) is developed based on Bayesian criteria of multivariate pattern classification. Because PNN has the advantages of low training complexity, high stability, quick convergence, and simple construction, it is well suited to the prediction of rock burst. Several main controlling factors, such as the rock's maximum tangential stress, uniaxial compressive strength, uniaxial tensile strength, and elastic energy index, are chosen as the characteristic vector of the PNN. The PNN model is obtained by training on data sets of rock burst samples from underground rock projects at home and abroad, and other samples are tested with the model; the testing results agree with the practical records. In addition, two real-world applications are used to verify the proposed method. The prediction results match those of existing methods and what actually happened on site, which verifies the effectiveness and applicability of the proposed work.
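
A probabilistic neural network of the kind described above is essentially a Parzen-window classifier: each class score is the average Gaussian kernel density of the query point over that class's training samples. The sketch below uses made-up two-class data in a four-feature space, not real rockburst indicators:

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal PNN / Parzen-window classifier: score each class by the mean
# Gaussian kernel density of the query over that class's training samples,
# then predict the class with the highest score.
def pnn_predict(X_train, y_train, X_query, sigma=0.5):
    preds = []
    for q in X_query:
        scores = {}
        for c in np.unique(y_train):
            d2 = ((X_train[y_train == c] - q) ** 2).sum(axis=1)
            scores[c] = np.exp(-d2 / (2 * sigma**2)).mean()
        preds.append(max(scores, key=scores.get))
    return np.array(preds)

# Two synthetic "intensity" classes, well separated in 4-feature space
X0 = rng.normal(0.0, 0.4, (40, 4))
X1 = rng.normal(1.5, 0.4, (40, 4))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 40 + [1] * 40)

queries = np.array([[0.1, 0.0, 0.1, -0.1], [1.4, 1.6, 1.5, 1.3]])
print(pnn_predict(X_train, y_train, queries))
```

The "low training complexity" claimed in the abstract is visible here: training is just storing the samples; only the smoothing parameter sigma needs tuning.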

  1. Preprocedural Prediction Model for Contrast-Induced Nephropathy Patients.

    Science.gov (United States)

    Yin, Wen-Jun; Yi, Yi-Hu; Guan, Xiao-Feng; Zhou, Ling-Yun; Wang, Jiang-Lin; Li, Dai-Yang; Zuo, Xiao-Cong

    2017-02-03

    Several models have been developed for prediction of contrast-induced nephropathy (CIN); however, they only include patients receiving intra-arterial contrast media for coronary angiographic procedures, which represent a small proportion of all contrast procedures, and most of them evaluate radiological interventional procedure-related variables. It is therefore necessary to develop a model for predicting CIN before radiological procedures among all patients administered contrast media. A total of 8800 patients undergoing contrast administration were randomly assigned in a 4:1 ratio to development and validation data sets. CIN was defined as an increase of 25% and/or 0.5 mg/dL in serum creatinine above the baseline value within 72 hours. Preprocedural clinical variables were used to develop the prediction model from the training data set with the machine learning method of random forest, and 5-fold cross-validation was used to evaluate the prediction accuracies of the model. Finally, we tested this model in the validation data set. The incidence of CIN was 13.38%. We built a prediction model with 13 preprocedural variables selected from 83 variables. The model obtained an area under the receiver-operating characteristic (ROC) curve (AUC) of 0.907 and gave a prediction accuracy of 80.8%, sensitivity of 82.7%, specificity of 78.8%, and Matthews correlation coefficient of 61.5%. For the first time, 3 new factors are included in the model: decreased sodium concentration, INR value, and preprocedural glucose level. The newly established model shows excellent predictive ability for CIN development and thereby provides preventative measures for CIN. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
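
The abstract reports sensitivity, specificity, and the Matthews correlation coefficient; all three follow directly from the confusion-matrix counts, as the small sketch below shows (the counts are illustrative, not the study's data):

```python
import math

# Classification metrics from a 2x2 confusion matrix:
#   sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
#   MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
def binary_metrics(tp, fp, tn, fn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom
    return sens, spec, mcc

# Invented counts for illustration
sens, spec, mcc = binary_metrics(tp=83, fp=21, tn=79, fn=17)
print(round(sens, 3), round(spec, 3), round(mcc, 3))
```

Unlike raw accuracy, MCC stays informative when the classes are imbalanced, which matters here since only 13.38% of patients developed CIN.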

  2. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that treat each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits, along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop no-show models using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined based on minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed based on the number of available past appointments. Simulation was employed to test the effectiveness of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the curve of the receiver operating characteristic gradually improved as more appointment history was included, until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in the cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to actually implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.

  3. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: Goodness-of-fit tests demonstrated that credit scoring models incorporating the Taiwan Corporate Credit Risk Index (TCRI) together with micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study was that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristic analyses in examining the robustness of the predictive power of these factors.

  4. Safety prediction for basic components of safety-critical software based on static testing

    International Nuclear Information System (INIS)

    Son, H.S.; Seong, P.H.

    2000-01-01

    The purpose of this work is to develop a safety prediction method, with which we can predict the risk of software components based on static testing results at the early development stage. The predictive model combines the major factor with the quality factor for the components, both of which are calculated based on the measures proposed in this work. The application to a safety-critical software system demonstrates the feasibility of the safety prediction method. (authors)

  5. Model tests for prestressed concrete pressure vessels

    International Nuclear Information System (INIS)

    Stoever, R.

    1975-01-01

    Investigations with models of reactor pressure vessels are used to check results of three-dimensional calculation methods and to predict the behaviour of the prototype. Model tests with 1:50 elastic pressure vessel models and with a 1:5 prestressed concrete pressure vessel are described, and experimental results are presented. (orig.) [de]

  6. Key Questions in Building Defect Prediction Models in Practice

    Science.gov (United States)

    Ramler, Rudolf; Wolfmaier, Klaus; Stauder, Erwin; Kossak, Felix; Natschläger, Thomas

    The information about which modules of a future version of a software system are defect-prone is a valuable planning aid for quality managers and testers. Defect prediction promises to indicate these defect-prone modules. However, constructing effective defect prediction models in an industrial setting involves a number of key questions. In this paper we discuss ten key questions identified in the context of establishing defect prediction in a large software development project. Seven consecutive versions of the software system have been used to construct and validate defect prediction models for system test planning. Furthermore, the paper presents initial empirical results from the studied project and, by this means, contributes answers to the identified questions.

  7. Plant control using embedded predictive models

    International Nuclear Information System (INIS)

    Godbole, S.S.; Gabler, W.E.; Eschbach, S.L.

    1990-01-01

    B and W recently undertook the design of an advanced light water reactor control system. A concept new to nuclear steam system (NSS) control was developed. The concept, which is called the Predictor-Corrector, uses mathematical models of portions of the controlled NSS to calculate, at various levels within the system, demand and control element position signals necessary to satisfy electrical demand. The models give the control system the ability to reduce overcooling and undercooling of the reactor coolant system during transients and upsets. Two types of mathematical models were developed for use in designing and testing the control system. One model was a conventional, comprehensive NSS model that responds to control system outputs and calculates the resultant changes in plant variables that are then used as inputs to the control system. Two other models, embedded in the control system, were less conventional, inverse models. These models accept as inputs plant variables, equipment states, and demand signals and predict plant operating conditions and control element states that will satisfy the demands. This paper reports preliminary results of closed-loop Reactor Coolant (RC) pump trip and normal load reduction testing of the advanced concept. Results of additional transient testing and of open- and closed-loop stability analyses will be reported as they become available.

  8. Prediction of high level vibration test results by use of available inelastic analysis techniques

    International Nuclear Information System (INIS)

    Hofmayer, C.H.; Park, Y.J.; Costello, J.F.

    1991-01-01

    As part of a cooperative study between the United States and Japan, the US Nuclear Regulatory Commission and the Ministry of International Trade and Industry of Japan agreed to perform a test program that would subject a large scale piping model to significant plastic strains under excitation conditions much greater than the design condition for nuclear power plants. The objective was to compare the results of the tests with state-of-the-art analyses. Comparisons were done at different excitation levels from elastic to elastic-plastic to levels where cracking was induced in the test model. The program was called the High Level Vibration Test (HLVT). The HLVT was performed on the seismic table at the Tadotsu Engineering Laboratory of Nuclear Power Engineering Test Center in Japan. The test model was constructed by modifying the 1/2.5 scale model of one loop of a PWR primary coolant system which was previously tested by NUPEC as part of their seismic proving test program. A comparison of various analysis techniques with test results shows a higher prediction error in the detailed strain values than in the overall response values. This prediction error is magnified as the plasticity in the test model increases. There is no significant difference in the peak responses between the simplified and the detailed analyses. A comparison between various detailed finite element model runs indicates that the material properties and plasticity modeling have a significant impact on the plastic strain responses under dynamic loading reversals. 5 refs., 12 figs

  9. Model predictive control of a wind turbine modelled in Simpack

    International Nuclear Information System (INIS)

    Jassmann, U; Matzke, D; Reiter, M; Abel, D; Berroth, J; Schelenz, R; Jacobs, G

    2014-01-01

    Wind turbines (WT) are steadily growing in size to increase their power production, which also causes increasing loads acting on the turbine's components. At the same time, large structures such as the blades and the tower become more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimal state are more and more extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example of MIMO control, which has recently received more attention in the wind industry, is Model Predictive Control (MPC). In an MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function, an online optimization calculates the optimal control sequence. Thereby MPC can intrinsically incorporate constraints, e.g. of actuators. Turbine models used for calculation within the MPC are typically simplified. For testing and verification, multi-body simulations such as FAST, BLADED or FLEX5 are usually used to model system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi-body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model, described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, where all components are modelled flexible, as well as a supporting flexible main frame. The wind loads are simulated using the NREL AERODYN v13 code which has been implemented as a routine
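The MPC loop outlined in this abstract (simplified internal model, finite prediction horizon, cost-function optimization, first input applied and the horizon receded) can be sketched for a one-state toy plant. The dynamics, horizon and weights below are invented for illustration and have nothing to do with the IME6.0 model:

```python
import numpy as np

a, b = 0.95, 0.1   # toy one-state plant: x+ = a*x + b*u  (illustrative values)
N = 10             # prediction horizon
Q, R = 1.0, 0.1    # tracking vs. actuation weights in the cost function

def mpc_step(x0, ref):
    """One MPC iteration: predict over N steps, optimize, return first input."""
    A = np.array([a ** k for k in range(1, N + 1)])       # free response
    B = np.zeros((N, N))                                  # forced response
    for k in range(N):
        for j in range(k + 1):
            B[k, j] = a ** (k - j) * b
    # Minimize Q*||x_pred - ref||^2 + R*||u||^2 via least squares
    H = np.vstack([np.sqrt(Q) * B, np.sqrt(R) * np.eye(N)])
    rhs = np.concatenate([np.sqrt(Q) * (ref - A * x0), np.zeros(N)])
    u = np.linalg.lstsq(H, rhs, rcond=None)[0]
    return u[0]                                           # receding horizon

# Closed loop: drive the state from 0 toward the reference 1
x = 0.0
for _ in range(50):
    x = a * x + b * mpc_step(x, 1.0)
print(f"state after 50 steps: {x:.3f}")
```

A production controller such as the one in the paper would add actuator constraints (turning the least-squares step into a quadratic program) and a multi-state turbine model.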

  10. Model predictive control of a wind turbine modelled in Simpack

    Science.gov (United States)

    Jassmann, U.; Berroth, J.; Matzke, D.; Schelenz, R.; Reiter, M.; Jacobs, G.; Abel, D.

    2014-06-01

    Wind turbines (WT) are steadily growing in size to increase their power production, which also causes increasing loads acting on the turbine's components. At the same time, large structures such as the blades and the tower become more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimal state are more and more extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example of MIMO control, which has recently received more attention in the wind industry, is Model Predictive Control (MPC). In an MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function, an online optimization calculates the optimal control sequence. Thereby MPC can intrinsically incorporate constraints, e.g. of actuators. Turbine models used for calculation within the MPC are typically simplified. For testing and verification, multi-body simulations such as FAST, BLADED or FLEX5 are usually used to model system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi-body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model, described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, where all components are modelled flexible, as well as a supporting flexible main frame. 
The wind loads are simulated using the NREL AERODYN v13 code which has been implemented as a routine to

  11. Accuracy test for link prediction in terms of similarity index: The case of WS and BA models

    Science.gov (United States)

    Ahn, Min-Woo; Jung, Woo-Sung

    2015-07-01

    Link prediction is a technique that uses the topological information in a given network to infer its missing links. Since past research on link prediction has primarily focused on enhancing performance for given empirical systems, negligible attention has been devoted to link prediction with regard to network models. In this paper, we thus apply link prediction to two network models: the Watts-Strogatz (WS) model and the Barabási-Albert (BA) model. We attempt to gain a better understanding of the relation between accuracy and each network parameter (mean degree, the number of nodes, and the rewiring probability in the WS model) through these models. Six similarity indices are used, with precision and the area under the ROC curve (AUC) as the accuracy metrics. We observe a positive correlation between mean degree and accuracy, and size independence of the AUC value.
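As an illustration of the setup in this abstract, the sketch below grows a small preferential-attachment (BA-style) graph, hides a fraction of links, scores candidate pairs with the common-neighbours similarity index (one of the standard six), and estimates AUC as the probability that a hidden link outscores a random non-link. The graph size, seed and sampling scheme are illustrative assumptions, not the paper's experimental settings:

```python
import itertools
import random

random.seed(1)

def ba_graph(n, m):
    """Toy preferential-attachment graph (BA-style), returned as an edge set."""
    edges = set()
    targets = list(range(m))
    repeated = []
    for v in range(m, n):
        for t in set(targets):
            edges.add((min(v, t), max(v, t)))
        repeated.extend(targets)
        repeated.extend([v] * m)
        targets = random.sample(repeated, m)   # degree-biased attachment
    return edges

edges = ba_graph(60, 3)
# Hold out ~10% of links as the "missing" set the index should recover
held_out = random.sample(sorted(edges), len(edges) // 10)
train = edges - set(held_out)
nbrs = {}
for u, v in train:
    nbrs.setdefault(u, set()).add(v)
    nbrs.setdefault(v, set()).add(u)

def cn(pair):
    """Common-neighbours similarity index, computed on the training graph."""
    u, v = pair
    return len(nbrs.get(u, set()) & nbrs.get(v, set()))

# AUC estimate: how often does a hidden link outscore a random non-link?
nodes = sorted(nbrs)
non_links = [p for p in itertools.combinations(nodes, 2) if p not in edges]
trials, wins, ties = 1000, 0, 0
for _ in range(trials):
    s_link, s_non = cn(random.choice(held_out)), cn(random.choice(non_links))
    wins += s_link > s_non
    ties += s_link == s_non
auc = (wins + 0.5 * ties) / trials
print(f"common-neighbours AUC: {auc:.2f}")
```

Repeating this over a parameter sweep (mean degree, network size, rewiring probability for WS graphs) reproduces the kind of accuracy-versus-parameter curves the paper studies.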

  12. Chemical structure-based predictive model for methanogenic anaerobic biodegradation potential.

    Science.gov (United States)

    Meylan, William; Boethling, Robert; Aronson, Dallas; Howard, Philip; Tunkel, Jay

    2007-09-01

    Many screening-level models exist for predicting aerobic biodegradation potential from chemical structure, but anaerobic biodegradation generally has been ignored by modelers. We used a fragment contribution approach to develop a model for predicting biodegradation potential under methanogenic anaerobic conditions. The new model has 37 fragments (substructures) and classifies a substance as either fast or slow, relative to the potential to be biodegraded in the "serum bottle" anaerobic biodegradation screening test (Organization for Economic Cooperation and Development Guideline 311). The model correctly classified 90, 77, and 91% of the chemicals in the training set (n = 169) and two independent validation sets (n = 35 and 23), respectively. Accuracy of predictions of fast and slow degradation was equal for training-set chemicals, but fast-degradation predictions were less accurate than slow-degradation predictions for the validation sets. Analysis of the signs of the fragment coefficients for this and the other (aerobic) Biowin models suggests that in the context of simple group contribution models, the majority of positive and negative structural influences on ultimate degradation are the same for aerobic and methanogenic anaerobic biodegradation.

  13. Prediction of software operational reliability using testing environment factor

    International Nuclear Information System (INIS)

    Jung, Hoan Sung

    1995-02-01

    Software reliability is especially important to customers these days. The need to quantify the software reliability of safety-critical systems has received special attention, and reliability is rated as one of software's most important attributes. Since software is an intellectual product of human activity and is logically complex, failures are inevitable. No standard models have been established to prove the correctness and to estimate the reliability of software systems by analysis and/or testing. For many years research has focused on the quantification of software reliability, and many models have been developed for this purpose. Most software reliability models estimate reliability from the failure data collected during testing, assuming that the test environment well represents the operational profile. The user's interest, however, is in the operational reliability rather than the test reliability, and experience shows that operational reliability is higher than test reliability. With the assumption that the difference in reliability results from the change of environment, a testing environment factor comprising an aging factor and a coverage factor is defined in this work to predict the ultimate operational reliability from the failure data, by incorporating into the factor test environments applied beyond the operational profile. Test reliability can also be estimated with this approach without any model change. The application results are close to the actual data. The approach used in this thesis is expected to be applicable to ultra-high-reliability software systems used in nuclear power plants, airplanes, and other safety-critical applications

  14. Testing the utility of three social-cognitive models for predicting objective and self-report physical activity in adults with type 2 diabetes.

    Science.gov (United States)

    Plotnikoff, Ronald C; Lubans, David R; Penfold, Chris M; Courneya, Kerry S

    2014-05-01

    Theory-based interventions to promote physical activity (PA) are more effective than atheoretical approaches; however, the comparative utility of theoretical models is rarely tested in longitudinal designs with multiple time points. Further, there is limited research that has simultaneously tested social-cognitive models with self-report and objective PA measures. The primary aim of this study was to test the predictive ability of three theoretical models (social cognitive theory, theory of planned behaviour, and protection motivation theory) in explaining PA behaviour. Participants were adults with type 2 diabetes (n = 287, 53.8% males, mean age = 61.6 ± 11.8 years). Theoretical constructs across the three theories were tested to prospectively predict PA behaviour (objective and self-report) across three 6-month time intervals (baseline-6, 6-12, 12-18 months) using structural equation modelling. PA outcomes were steps/3 days (objective) and minutes of MET-weighted PA/week (self-report). The mean proportion of variance in PA explained by these models was 6.5% for objective PA and 8.8% for self-report PA. Direct pathways to PA outcomes were stronger for self-report compared with objective PA. These theories explained a small proportion of the variance in longitudinal PA studies. Theory development to guide interventions for increasing and maintaining PA in adults with type 2 diabetes requires further research with objective measures. Theory integration across social-cognitive models and the inclusion of ecological levels are recommended to further explain PA behaviour change in this population. Statement of contribution What is already known on this subject? Social-cognitive theories are able to explain partial variance for physical activity (PA) behaviour. What does this study add? The testing of three theories in a longitudinal design over 3, 6-month time intervals. The parallel use and comparison of both objective and self-report PA measures in testing these

  15. Predictive assessment of models for dynamic functional connectivity

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Schmidt, Mikkel Nørgaard; Madsen, Kristoffer Hougaard

    2018-01-01

    In neuroimaging, it has become evident that models of dynamic functional connectivity (dFC), which characterize how intrinsic brain organization changes over time, can provide a more detailed representation of brain function than traditional static analyses. Many dFC models in the literature represent functional brain networks as a meta-stable process with a discrete number of states; however, there is a lack of consensus on how to perform model selection and learn the number of states, as well as a lack of understanding of how different modeling assumptions influence the estimated state dynamics. To address these issues, we consider a predictive likelihood approach to model assessment, where models are evaluated based on their predictive performance on held-out test data. Examining several prominent models of dFC (in their probabilistic formulations) we demonstrate our framework...

  16. Knowledge Prediction of Different Students’ Categories Through Intelligent Testing

    Directory of Open Access Journals (Sweden)

    Irina Zheliazkova

    2015-02-01

    Student modelling, prediction, and grouping have remained open research issues in the multi-disciplinary area of educational data mining. The purpose of this study is to predict the correct knowledge of different categories of tested students: good, very good, and all. The experimental data set was gathered from an intelligent post-test and contains students' correct, missing, and wrong knowledge, the time taken, and the final mark. The proposed procedure applies, in sequence, correlation analysis and simple and multiple linear regression, using a powerful specialized tool programmed by the teacher. The finding is that the accuracy of the procedure is satisfactory for the three student categories. The experiment also confirms some findings of other researchers and of the authors' previous team studies.

  17. Testing proton spin models with polarized beams

    International Nuclear Information System (INIS)

    Ramsey, G.P.

    1991-01-01

    We review models for spin-weighted parton distributions in a proton. Sum rules involving the nonsinglet components of the structure function xg_1^p help narrow the range of parameters in these models. The contribution of the γ_5 anomaly term depends on the size of the integrated polarized gluon distribution, and experimental predictions depend on its size. We have proposed three models for the polarized gluon distribution, whose range is considerable. These model distributions give an overall range of parameters that can be tested with polarized beam experiments. These are discussed with regard to specific predictions for polarized beam experiments at energies typical of UNK

  18. An Intelligent Model for Stock Market Prediction

    Directory of Open Access Journals (Sweden)

    Ibrahim M. Hamed

    2012-08-01

    This paper presents an intelligent model for stock market signal prediction using Multi-Layer Perceptron (MLP) Artificial Neural Networks (ANN). A blind source separation technique from signal processing is integrated with the learning phase of the constructed baseline MLP ANN to overcome the problems of prediction accuracy and lack of generalization. Kullback-Leibler Divergence (KLD) is used as a learning algorithm because it converges fast and provides generalization in the learning mechanism. Both the accuracy and the efficiency of the proposed model were confirmed on Microsoft stock, from the Wall Street market, and on various data sets from different sectors of the Egyptian stock market. In addition, sensitivity analysis was conducted on the various parameters of the model to ensure coverage of the generalization issue. Finally, statistical significance was examined using an ANOVA test.

  19. Safety prediction for basic components of safety critical software based on static testing

    International Nuclear Information System (INIS)

    Son, H.S.; Seong, P.H.

    2001-01-01

    The purpose of this work is to develop a safety prediction method, with which we can predict the risk of software components based on static testing results at the early development stage. The predictive model combines the major factor with the quality factor for the components, both of which are calculated based on the measures proposed in this work. The application to a safety-critical software system demonstrates the feasibility of the safety prediction method. (authors)

  20. In silico modeling to predict drug-induced phospholipidosis

    International Nuclear Information System (INIS)

    Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G.; Sadrieh, Nakissa

    2013-01-01

    Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development that has implications on the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and an opportunity to analyze properties and structures of drugs with the histopathologic findings for DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches has shown promise with a large dataset of drugs, but it included unconfirmed data as well. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis, new algorithms and predictive technologies, and in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80–81% concordance. Furthermore, the predictive performance characteristics include models with high sensitivity and specificity, in most cases ≥ 80%, leading to the desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. - Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models is derived from the FDA's phospholipidosis database. • We find excellent predictivity values of the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL

  1. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  2. Predicting and understanding law-making with word vectors and an ensemble model.

    Science.gov (United States)

    Nay, John J

    2017-01-01

    Out of nearly 70,000 bills introduced in the U.S. Congress from 2001 to 2015, only 2,513 were enacted. We developed a machine learning approach to forecasting the probability that any bill will become law. Starting in 2001 with the 107th Congress, we trained models on data from previous Congresses, predicted all bills in the current Congress, and repeated until the 113th Congress served as the test. For prediction we scored each sentence of a bill with a language model that embeds legislative vocabulary into a high-dimensional, semantic-laden vector space. This language representation enables our investigation into which words increase the probability of enactment for any topic. To test the relative importance of text and context, we compared the text model to a context-only model that uses variables such as whether the bill's sponsor is in the majority party. To test the effect of changes to bills after their introduction on our ability to predict their final outcome, we compared using the bill text and meta-data available at the time of introduction with using the most recent data. At the time of introduction context-only predictions outperform text-only, and with the newest data text-only outperforms context-only. Combining text and context always performs best. We conducted a global sensitivity analysis on the combined model to determine important variables predicting enactment.

  3. Linear and nonlinear models for predicting fish bioconcentration factors for pesticides.

    Science.gov (United States)

    Yuan, Jintao; Xie, Chun; Zhang, Ting; Sun, Jinfang; Yuan, Xuejie; Yu, Shuling; Zhang, Yingbiao; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu

    2016-08-01

    This work is devoted to the applications of the multiple linear regression (MLR), multilayer perceptron neural network (MLP NN) and projection pursuit regression (PPR) to quantitative structure-property relationship analysis of bioconcentration factors (BCFs) of pesticides tested on Bluegill (Lepomis macrochirus). Molecular descriptors of a total of 107 pesticides were calculated with the DRAGON Software and selected by inverse enhanced replacement method. Based on the selected DRAGON descriptors, a linear model was built by MLR, nonlinear models were developed using MLP NN and PPR. The robustness of the obtained models was assessed by cross-validation and external validation using test set. Outliers were also examined and deleted to improve predictive power. Comparative results revealed that PPR achieved the most accurate predictions. This study offers useful models and information for BCF prediction, risk assessment, and pesticide formulation. Copyright © 2016 Elsevier Ltd. All rights reserved.
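The linear (MLR) branch of the comparison above, fitted on a training split and checked by external validation on a held-out test set, can be sketched with synthetic data. The descriptor values and coefficients below are invented; only the sample size echoes the paper's 107 pesticides:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for DRAGON-style descriptors: 4 descriptors, 107 compounds
# (only the compound count echoes the paper; values and coefficients are invented)
n = 107
X = rng.normal(size=(n, 4))
log_bcf = X @ np.array([0.9, -0.4, 0.2, 0.0]) + 0.1 * rng.normal(size=n)

# External validation: hold out ~20% of compounds as a test set
idx = rng.permutation(n)
train, test = idx[:86], idx[86:]

# Ordinary least squares with an intercept column (the MLR model)
A = np.column_stack([X[train], np.ones(train.size)])
coef, *_ = np.linalg.lstsq(A, log_bcf[train], rcond=None)

# Predictive power on the external set, reported as R^2
pred = np.column_stack([X[test], np.ones(test.size)]) @ coef
ss_res = np.sum((log_bcf[test] - pred) ** 2)
ss_tot = np.sum((log_bcf[test] - log_bcf[test].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"external R^2: {r2:.3f}")
```

The nonlinear MLP NN and PPR models in the paper would replace the least-squares step while keeping the same train/test protocol.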

  4. Six-Tube Freezable Radiator Testing and Model Correlation

    Science.gov (United States)

    Lilibridge, Sean T.; Navarro, Moses

    2012-01-01

    Freezable Radiators offer an attractive solution to the issue of thermal control system scalability. As thermal environments change, a freezable radiator will effectively scale the total heat rejection it is capable of as a function of the thermal environment and flow rate through the radiator. Scalable thermal control systems are a critical technology for spacecraft that will endure missions with widely varying thermal requirements. These changing requirements are a result of the spacecraft's surroundings and because of different thermal loads rejected during different mission phases. However, freezing and thawing (recovering) a freezable radiator is a process that has historically proven very difficult to predict through modeling, resulting in highly inaccurate predictions of recovery time. These predictions are a critical step in gaining the capability to quickly design and produce optimized freezable radiators for a range of mission requirements. This paper builds upon previous efforts made to correlate a Thermal Desktop(TM) model with empirical testing data from two test articles, with additional model modifications and empirical data from a sub-component radiator for a full scale design. Two working fluids were tested: MultiTherm WB-58 and a 50-50 mixture of DI water and Amsoil ANT.

  5. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  6. Empirical Modeling of Lithium-ion Batteries Based on Electrochemical Impedance Spectroscopy Tests

    International Nuclear Information System (INIS)

    Samadani, Ehsan; Farhad, Siamak; Scott, William; Mastali, Mehrdad; Gimenez, Leonardo E.; Fowler, Michael; Fraser, Roydon A.

    2015-01-01

    Highlights: • Two commercial Lithium-ion batteries are studied through HPPC and EIS tests. • An equivalent circuit model is developed for a range of operating conditions. • This model improves the current battery empirical models for vehicle applications. • This model is proved to be efficient in terms of predicting HPPC test resistances. - ABSTRACT: An empirical model for commercial lithium-ion batteries is developed based on electrochemical impedance spectroscopy (EIS) tests. An equivalent circuit is established according to EIS test observations at various battery states of charge and temperatures. A Laplace transfer time based model is developed based on the circuit which can predict the battery operating output potential difference in battery electric and plug-in hybrid vehicles at various operating conditions. This model demonstrates up to 6% improvement compared to simple resistance and Thevenin models and is suitable for modeling and on-board controller purposes. Results also show that this model can be used to predict the battery internal resistance obtained from hybrid pulse power characterization (HPPC) tests to within 20 percent, making it suitable for low to medium fidelity powertrain design purposes. In total, this simple battery model can be employed as a real-time model in electrified vehicle battery management systems.
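A first-order equivalent-circuit (Thevenin-type) model of the kind compared in this abstract can be sketched as follows. The circuit parameters and the constant open-circuit voltage are illustrative placeholders, not values identified from the paper's EIS tests:

```python
import numpy as np

# First-order Thevenin-type equivalent circuit: V = OCV - R0*I - V_RC,
# where the RC branch captures polarization.  All parameter values are
# illustrative placeholders, not results of the paper's EIS fitting.
R0, R1, C1 = 0.05, 0.03, 1000.0   # series resistance [ohm], RC pair [ohm, F]
ocv = 3.7                          # open-circuit voltage [V], held constant
dt = 1.0                           # time step [s]

def terminal_voltage(current):
    """Simulate terminal voltage for a current profile (A, discharge > 0)."""
    v_rc, out = 0.0, []
    for i in current:
        v_rc += dt * (i / C1 - v_rc / (R1 * C1))   # RC branch dynamics
        out.append(ocv - R0 * i - v_rc)
    return np.array(out)

# HPPC-style pulse: 10 s discharge at 2 A, then 20 s rest
profile = np.concatenate([np.full(10, 2.0), np.zeros(20)])
v = terminal_voltage(profile)
print(f"sag at end of pulse: {ocv - v[9]:.4f} V, after rest: {ocv - v[-1]:.4f} V")
```

A model like the paper's would additionally make OCV, R0, R1 and C1 functions of state of charge and temperature, with values fitted from the EIS spectra.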

  7. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    Science.gov (United States)

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy, and can be inferred from underwater video footage at only a limited number of locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
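
    The averaged variable importance (AVI) idea, ranking predictors by an importance score averaged over bootstrap resamples and keeping the top-ranked ones, can be illustrated with a stdlib-only sketch. A simple standardized class-mean difference stands in for the random forest importance used in the paper; the synthetic data and all names are assumptions for illustration.

```python
import random

def importance(xs, ys):
    """Stand-in importance score: standardized difference of class means."""
    x1 = [x for x, y in zip(xs, ys) if y == 1]
    x0 = [x for x, y in zip(xs, ys) if y == 0]
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return abs(sum(x1) / len(x1) - sum(x0) / len(x0)) / sd

def averaged_variable_importance(data, labels, n_boot=25, seed=0):
    """AVI: average each predictor's importance over bootstrap resamples."""
    rng = random.Random(seed)
    n, p = len(data), len(data[0])
    totals = [0.0] * p
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if len(set(ys)) < 2:          # resample must contain both classes
            continue
        for j in range(p):
            totals[j] += importance([data[i][j] for i in idx], ys)
    return [t / n_boot for t in totals]

# synthetic data: predictors 0-1 are informative, 2-3 are pure noise
rng = random.Random(1)
labels = [rng.randrange(2) for _ in range(300)]
data = [[y + rng.gauss(0, 0.5), 2 * y + rng.gauss(0, 1.0),
         rng.gauss(0, 1.0), rng.gauss(0, 1.0)] for y in labels]
avi = averaged_variable_importance(data, labels)
```

    In practice the scores would come from RF's built-in importance, and the lowest-ranked predictors would be dropped iteratively before refitting.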

  8. Verification of some numerical models for operationally predicting mesoscale winds aloft

    International Nuclear Information System (INIS)

    Cornett, J.S.; Randerson, D.

    1977-01-01

    Four numerical models are described for predicting mesoscale winds aloft over a 6 h period. All models are tested statistically against persistence as the control forecast and against predictions made by operational forecasters. Mesoscale winds-aloft data were used to initialize the models and to verify the predictions on an hourly basis. The model yielding the smallest root-mean-square vector errors (RMSVEs) was the one incorporating the most physics: advection, ageostrophic acceleration, vertical mixing and friction. Horizontal advection was found to be the most important term in reducing the RMSVEs, followed by ageostrophic acceleration, vertical advection, surface friction and vertical mixing. From a comparison of the mean absolute errors based on up to 72 independent wind-profile predictions made by operational forecasters, by the most complete model, and by persistence, we conclude that the model is the best wind predictor in the free air. In the boundary layer, the results tend to favor the forecaster for direction predictions. The speed predictions showed no overall superiority among the three approaches
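
    The root-mean-square vector error used to rank the models treats each wind forecast as a horizontal (u, v) vector; a minimal implementation of the score:

```python
import math

def rmsve(predicted, observed):
    """Root-mean-square vector error between predicted and observed
    horizontal winds, each given as a list of (u, v) components in m/s."""
    sq_errors = [(pu - ou) ** 2 + (pv - ov) ** 2
                 for (pu, pv), (ou, ov) in zip(predicted, observed)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# a single 3-4-5 error vector gives an RMSVE of 5 m/s
```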

  9. Robust block bootstrap panel predictability tests

    NARCIS (Netherlands)

    Westerlund, J.; Smeekes, S.

    2013-01-01

    Most panel data studies of the predictability of returns presume that the cross-sectional units are independent, an assumption that is not realistic. As a response to this, the current paper develops block bootstrap-based panel predictability tests that are valid under very general conditions. Some

  10. Development of a Skin Burn Predictive Model adapted to Laser Irradiation

    Science.gov (United States)

    Sonneck-Museux, N.; Scheer, E.; Perez, L.; Agay, D.; Autrique, L.

    2016-12-01

    Laser technology is increasingly used, and it is crucial for both safety and medical reasons that the impact of laser irradiation on human skin can be accurately predicted. This study focuses on laser-skin interactions and the resulting lesions (burns). A mathematical model dedicated to heat transfer in skin exposed to infrared laser radiation has been developed. The model is validated by studying heat transfer in human skin while simultaneously performing experiments on an animal model (pig). For all experimental tests, the skin surface temperature of the pig is recorded. Three laser wavelengths have been tested: 808 nm, 1940 nm and 10 600 nm. The first is a diode laser producing radiation absorbed deep within the skin. The second wavelength has a more superficial effect. At the third wavelength, skin behaves as an opaque material. The validity of the developed models is verified by comparison with experimental results (in vivo tests) and with the results of previous studies reported in the literature. The comparison shows that the models accurately predict the burn degree caused by laser radiation over a wide range of conditions. The results show that the key parameter for burn prediction is the extinction coefficient. For the 1940 nm wavelength especially, significant differences between modelling results and the literature have been observed, mainly due to this coefficient's value. This new model can be used as a predictive tool to estimate the injury induced by several types (power-duration combinations) of laser exposure on the arm, the face and the palm of the hand.

  11. Predictive modelling of an excavation test in indurated clay

    International Nuclear Information System (INIS)

    Garitte, B.; Vaunat, J.; Gens, A.; Vietor, T.

    2010-01-01

    constitutive laws that describe the various phenomena under consideration. The density of the solid and the density of the water depend on the total mean pressure and the water pressure, respectively, and the advective water flux is described by an anisotropic Darcy's law. Finally, the mechanical behaviour is modelled using an anisotropic linear elastic constitutive law that relates the medium deformation to the effective stress (σ' = σ − b·p_w, where σ is total stress, b is the Biot coefficient and p_w the water pressure). The simulated pore water pressure evolution is presented for two points at 1 m from the niche wall, in a transversal section at 16.5 m from the gallery centre (points B1 and B2), for 4 different computations. B1 lies in the direction of the major stress and B2 in the direction of the minor stress. F_an is a 'full' anisotropic computation, considering anisotropy of stress, stiffness and permeability (E_1 = 1.33·E_2 = 4000 MPa). ISO is an isotropic simulation. In E_an and in S_an, all properties are considered isotropic except the mechanical constitutive law (E_1 = 2·E_2 = 6000 MPa) and the stress state, respectively. Long before the arrival of the gallery, ISO and S_an do not predict significant pore water pressure changes, in contrast to the computations that consider an anisotropic constitutive model. The changes are strongest in the case of F_an and are consistent with observations in similar tests performed in Callovo-Oxfordian Clay. After the passage of the shaft, pore water pressure keeps increasing in the minor stress direction whereas it decreases in the major stress direction. A future stage of modelling must incorporate the reduction of stiffness and the increase in permeability caused by rock damage. The result will likely be a lower magnitude of the pore water pressure increments and an acceleration of drainage processes
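
    The Biot effective-stress relation used in the mechanical law, σ' = σ − b·p_w, is straightforward to evaluate; the numbers in the example are illustrative, not parameters of the test described above.

```python
def effective_stress(total_stress_mpa, biot_b, pore_pressure_mpa):
    """Biot effective stress: sigma' = sigma - b * p_w (all in MPa)."""
    return total_stress_mpa - biot_b * pore_pressure_mpa

# e.g. 6 MPa total stress with b = 0.6 and 2 MPa pore pressure -> 4.8 MPa
sigma_eff = effective_stress(6.0, 0.6, 2.0)
```

    A drop in pore pressure at constant total stress therefore raises the effective stress carried by the rock skeleton, which is why the pore-pressure field controls the mechanical response around the excavation.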

  12. Active diagnosis of hybrid systems - A model predictive approach

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Ravn, Anders P.; Izadi-Zamanabadi, Roozbeh

    2009-01-01

    A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both the normal and the faulty model of the system; then, at each time step, an optimization problem is solved with the objective of maximizing the difference between the predicted normal and fault...... can be used as a test signal for a sanity check at commissioning or for detection of faults hidden by regulatory actions of the controller. The method is tested on the two-tank benchmark example. ©2009 IEEE....

  13. An Anisotropic Hardening Model for Springback Prediction

    Science.gov (United States)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

    As more Advanced High-Strength Steels (AHSS) are used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or a drawbead during the sheet metal forming process. The model accounts for an anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.
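
    The Bauschinger effect that the modified hardening model targets can be illustrated with plain 1-D linear kinematic hardening, a much simpler scheme than the modified Mroz model: the yield surface centre (backstress) translates with plastic flow, so reverse yielding begins at a lower stress magnitude than forward yielding. All material values are illustrative, not DP600 data.

```python
def stress_strain_cycle(strains, e_mod=200e3, sigma_y=350.0, h=20e3):
    """1-D linear kinematic hardening with return mapping (stresses in MPa).
    The translating backstress reproduces the Bauschinger effect."""
    back, eps_p = 0.0, 0.0        # backstress and plastic strain
    out = []
    for eps in strains:
        trial = e_mod * (eps - eps_p)           # elastic trial stress
        f = abs(trial - back) - sigma_y         # yield function
        if f > 0.0:                             # plastic step: return mapping
            sign = 1.0 if trial - back > 0.0 else -1.0
            d_eps_p = sign * f / (e_mod + h)
            eps_p += d_eps_p
            back += h * d_eps_p
            out.append(e_mod * (eps - eps_p))
        else:                                   # elastic step
            out.append(trial)
    return out

# load to 0.5% strain, then unload back to zero strain
s = stress_strain_cycle([0.005, 0.0])
```

    Here the reverse stress magnitude stays below the forward peak because the yield surface has shifted toward the forward loading direction.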

  14. An Anisotropic Hardening Model for Springback Prediction

    International Nuclear Information System (INIS)

    Zeng, Danielle; Xia, Z. Cedric

    2005-01-01

    As more Advanced High-Strength Steels (AHSS) are used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or a drawbead during the sheet metal forming process. The model accounts for an anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test

  15. Validation of a multi-objective, predictive urban traffic model

    NARCIS (Netherlands)

    Wilmink, I.R.; Haak, P. van den; Woldeab, Z.; Vreeswijk, J.

    2013-01-01

    This paper describes the results of the verification and validation of the ecoStrategic Model, which was developed, implemented and tested in the eCoMove project. The model uses real-time and historical traffic information to determine the current, predicted and desired state of traffic in a

  16. FIRE BEHAVIOR PREDICTING MODELS EFFICIENCY IN BRAZILIAN COMMERCIAL EUCALYPT PLANTATIONS

    Directory of Open Access Journals (Sweden)

    Benjamin Leonardo Alves White

    2016-12-01

    Knowing how a wildfire will behave is extremely important for fire suppression and prevention operations. Since the 1940s, mathematical models to estimate fire behavior have been developed worldwide; however, none of them had previously had their efficiency tested in Brazilian commercial eucalypt plantations, nor in other vegetation types in the country. This study aims to verify the accuracy of the Rothermel (1972) fire spread model, the Byram (1959) flame length model, and the fire spread and flame length equations derived from the McArthur (1962) control burn meters. To meet these objectives, 105 experimental laboratory fires were conducted and their results compared with the values predicted by the models tested. The Rothermel and Byram models predicted better than McArthur's; nevertheless, all of them underestimated the fire behavior aspects evaluated and were statistically different from the experimental data.
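
    Byram's (1959) flame length relation, one of the models tested above, maps fireline intensity I (kW/m) to flame length L (m) as L = 0.0775·I^0.46:

```python
def byram_flame_length(intensity_kw_per_m):
    """Byram (1959): flame length L (m) from fireline intensity I (kW/m),
    L = 0.0775 * I**0.46."""
    return 0.0775 * intensity_kw_per_m ** 0.46

# a 1000 kW/m fireline corresponds to a flame length of roughly 1.9 m
length_m = byram_flame_length(1000.0)
```

    The sub-linear exponent means flame length grows slowly with intensity, which is one reason underestimation of intensity translates into only modest flame-length errors.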

  17. Using a Simple Binomial Model to Assess Improvement in Predictive Capability: Sequential Bayesian Inference, Hypothesis Testing, and Power Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sigeti, David E. [Los Alamos National Laboratory; Pelak, Robert A. [Los Alamos National Laboratory

    2012-09-11

    We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric', in the case that the analysis shows that θ is close to 1/2, and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a
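
    Under a uniform prior, observing k "new code better" outcomes in n experiments gives the posterior Beta(1 + k, 1 + n − k) for θ; a sketch that computes the posterior probability that θ > 1/2 by simple numerical integration:

```python
import math

def beta_pdf(x, a, b):
    """Beta(a, b) density, evaluated via log-gamma for stability."""
    logp = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + (a - 1.0) * math.log(x) + (b - 1.0) * math.log(1.0 - x))
    return math.exp(logp)

def prob_improvement(successes, trials, grid=10000):
    """P(theta > 1/2 | data) under a uniform prior:
    posterior is Beta(1 + k, 1 + n - k), integrated on a midpoint grid."""
    a, b = 1.0 + successes, 1.0 + trials - successes
    xs = [(i + 0.5) / grid for i in range(grid)]
    mass = [beta_pdf(x, a, b) / grid for x in xs]
    return sum(m for x, m in zip(xs, mass) if x > 0.5) / sum(mass)
```

    For example, 8 wins in 10 trials already puts most of the posterior mass above θ = 1/2, while 5 in 10 leaves it split evenly, illustrating the hard-to-decide region around θ = 1/2.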

  18. On-line test of power distribution prediction system for boiling water reactors

    International Nuclear Information System (INIS)

    Nishizawa, Y.; Kiguchi, T.; Kobayashi, S.; Takumi, K.; Tanaka, H.; Tsutsumi, R.; Yokomi, M.

    1982-01-01

    A power distribution prediction system for boiling water reactors has been developed, and its on-line performance test has proceeded at an operating commercial reactor. The system predicts the power distribution or thermal margin in advance of control rod operations and core flow rate changes. It consists of an on-line computer system, an operator's console with a color cathode-ray tube, and plant data input devices. The main functions of the system are present power distribution monitoring, power distribution prediction, and power-up trajectory prediction. The calculation method is based on a simplified nuclear thermal-hydraulic calculation, combined with a method of model identification to the actual reactor core state. The on-line test has confirmed that the predicted power distribution (readings of the traversing in-core probe) agrees with the measured data to within 6% root-mean-square. The computing time required for one prediction calculation step is at most 1.5 min on an HIDIC-80 on-line computer

  19. Shelf-life prediction models for ready-to-eat fresh cut salads: Testing in real cold chain.

    Science.gov (United States)

    Tsironi, Theofania; Dermesonlouoglou, Efimia; Giannoglou, Marianna; Gogou, Eleni; Katsaros, George; Taoukis, Petros

    2017-01-02

    The aim of the study was to develop and test the applicability of predictive models for shelf-life estimation of ready-to-eat (RTE) fresh cut salads in realistic distribution temperature conditions in the food supply chain. A systematic kinetic study of quality loss of RTE mixed salad (lollo rosso lettuce 40%, lollo verde lettuce 45%, rocket 15%) packed under modified atmospheres (3% O2, 10% CO2, 87% N2) was conducted. Microbial population (total viable count, Pseudomonas spp., lactic acid bacteria), vitamin C, colour and texture were the measured quality parameters. Kinetic models for these indices were developed to determine the quality loss and calculate product remaining shelf-life (SL_R). Storage experiments were conducted at isothermal (2.5-15°C) and non-isothermal temperature conditions (T_eff = 7.8°C, defined as the constant temperature that results in the same quality value as the variable temperature distribution) for validation purposes. Pseudomonas dominated spoilage, followed by browning and chemical changes. The end of shelf-life correlated with a Pseudomonas spp. level of 8 log(cfu/g) and 20% loss of the initial vitamin C content. The effect of temperature on these quality parameters was expressed by the Arrhenius equation; the activation energy (E_a) was 69.1 and 122.6 kJ/mol for Pseudomonas spp. growth and vitamin C loss rates, respectively. Shelf-life prediction models were also validated in real cold chain conditions (including the stages of transport to and storage at a retail distribution center, transport to and display at 7 retail stores, and transport to and storage in domestic refrigerators). The quality level and SL_R estimated after 2-3 days of domestic storage (time of consumption) ranged between 1 and 8 days at 4°C and was predicted within satisfactory statistical error by the kinetic models. T_eff in the cold chain ranged between 3.7 and 8.3°C. Using the validated models, SL_R of RTE fresh cut salad can be estimated at any point of
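
    The Arrhenius temperature dependence used in such kinetic models can be sketched as follows. The activation energy 69.1 kJ/mol is taken from the study above; the reference rate, initial load and spoilage limit are illustrative assumptions, not the paper's fitted values.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_rate(k_ref, ea, t_celsius, t_ref_celsius=4.0):
    """Quality-loss rate at temperature T, scaled from the rate k_ref at
    a reference temperature via the Arrhenius equation. Ea in J/mol."""
    t, t_ref = t_celsius + 273.15, t_ref_celsius + 273.15
    return k_ref * math.exp(-ea / R * (1.0 / t - 1.0 / t_ref))

def remaining_shelf_life(n0_log, n_limit_log, k):
    """Days until a microbial load grows linearly (in log units) from
    n0_log to the spoilage limit n_limit_log at rate k (log cfu/g per day)."""
    return (n_limit_log - n0_log) / k

# growth rate roughly triples between 4 and 15 degrees C for Ea = 69.1 kJ/mol
k4 = arrhenius_rate(0.5, 69100.0, 4.0)
k15 = arrhenius_rate(0.5, 69100.0, 15.0)
```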

  20. A new, accurate predictive model for incident hypertension.

    Science.gov (United States)

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-11-01

    Data mining represents an alternative approach to identifying new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures. The primary study population consisted of 1605 normotensive individuals aged 20-79 years with 5-year follow-up from the population-based Study of Health in Pomerania (SHIP). The initial set was randomly split into a training and a testing set. We used a probabilistic graphical model applying a Bayesian network to create a predictive model for incident hypertension and compared the predictive performance with the established Framingham risk score for hypertension. Finally, the model was validated in 2887 participants from INTER99, a Danish community-based intervention study. In the training set of SHIP data, the Bayesian network used a small subset of relevant baseline features including age, mean arterial pressure, rs16998073, serum glucose and urinary albumin concentrations. Furthermore, we detected relevant interactions between age and serum glucose as well as between rs16998073 and urinary albumin concentrations [area under the receiver operating characteristic curve (AUC) 0.76]. The model was confirmed in the SHIP validation set (AUC 0.78) and externally replicated in INTER99 (AUC 0.77). Compared to the established Framingham risk score for hypertension, the predictive performance of the new model was similar in the SHIP validation set and moderately better in INTER99. Data mining procedures identified a predictive model for incident hypertension that included innovative and easy-to-measure variables. The findings promise great applicability in screening settings and clinical practice.
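
    Model comparisons like the AUC figures above rest on the area under the ROC curve, which equals the probability that a randomly chosen positive case scores above a randomly chosen negative one (the Mann-Whitney identity); a minimal sketch:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the fraction of positive/negative pairs the classifier orders correctly,
    with ties counted as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    This O(n²) pairwise form is fine for illustration; production code would sort once and use ranks.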

  1. Horizontal crash testing and analysis of model flatrols

    International Nuclear Information System (INIS)

    Dowler, H.J.; Soanes, T.P.T.

    1985-01-01

    To assess the behaviour of a full-scale flask and flatrol during a proposed demonstration impact into a tunnel abutment, a mathematical modelling technique was developed and validated. The work was performed at quarter scale and comprised both scale model tests and mathematical analyses in one and two dimensions. Good agreement between the model test results of the 26.8 m/s (60 mph) abutment impacts and the mathematical analysis validated the modelling techniques. The modelling method may be used with confidence to predict the outcome of the proposed full-scale demonstration. (author)

  2. Multivariate statistical models for disruption prediction at ASDEX Upgrade

    International Nuclear Information System (INIS)

    Aledda, R.; Cannas, B.; Fanni, A.; Sias, G.; Pautasso, G.

    2013-01-01

    In this paper, a disruption prediction system for ASDEX Upgrade is proposed that does not require disruption-terminated experiments to be implemented. The system consists of a data-based model, which is built using only a few input signals coming from successfully terminated pulses. A fault detection and isolation approach has been used, where the prediction is based on the analysis of the residuals of an autoregressive exogenous input model. The prediction performance of the proposed system is encouraging when it is applied to the same set of campaigns used to implement the model. However, the false alarms increase significantly when the system is tested on discharges coming from experimental campaigns temporally far from those used to train the model. This is due to the well-known aging effect inherent in data-based models. The main advantage of the proposed method, with respect to other data-based approaches in the literature, is that it does not need data on experiments terminated with a disruption, as it uses a model of normal operating conditions. This is a major advantage in the perspective of a prediction system for ITER, where only a limited number of disruptions can be allowed

  3. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  4. The neural correlates of problem states: testing FMRI predictions of a computational model of multitasking.

    Directory of Open Access Journals (Sweden)

    Jelmer P Borst

    BACKGROUND: It has been shown that people can only maintain one problem state, or intermediate mental representation, at a time. When more than one problem state is required, for example in multitasking, performance decreases considerably. This effect has been explained in terms of a problem state bottleneck. METHODOLOGY: In the current study we use the complementary methodologies of computational cognitive modeling and neuroimaging to investigate the neural correlates of this problem state bottleneck. In particular, an existing computational cognitive model was used to generate a priori fMRI predictions for a multitasking experiment in which the problem state bottleneck plays a major role. Hemodynamic responses were predicted for five brain regions, corresponding to five cognitive resources in the model. Most importantly, we predicted the intraparietal sulcus to show a strong effect of the problem state manipulations. CONCLUSIONS: Some of the predictions were confirmed by a subsequent fMRI experiment, while others were not matched by the data. The experiment supported the hypothesis that the problem state bottleneck is a plausible cause of the interference in the experiment and that it could be located in the intraparietal sulcus.

  5. Radionuclides in fruit systems: Model prediction-experimental data intercomparison study

    International Nuclear Information System (INIS)

    Ould-Dada, Z.; Carini, F.; Eged, K.; Kis, Z.; Linkov, I.; Mitchell, N.G.; Mourlon, C.; Robles, B.; Sweeck, L.; Venter, A.

    2006-01-01

    This paper presents results from an international exercise undertaken to test model predictions against an independent data set for the transfer of radioactivity to fruit. Six models with various structures and complexity participated in this exercise. Predictions from these models were compared against independent experimental measurements on the transfer of 134Cs and 85Sr via leaf-to-fruit and soil-to-fruit in strawberry plants after an acute release. Foliar contamination was carried out through wet deposition on the plant at two different growing stages, anthesis and ripening, while soil contamination was effected at anthesis only. In the case of foliar contamination, predicted values are within the same order of magnitude as the measured values for both radionuclides, while in the case of soil contamination models tend to under-predict by up to three orders of magnitude for 134Cs, while differences for 85Sr are lower. Performance of models against experimental data is discussed together with the lessons learned from this exercise

  6. BEHAVE: fire behavior prediction and fuel modeling system--FUEL subsystem

    Science.gov (United States)

    Robert E. Burgan; Richard C. Rothermel

    1984-01-01

    This manual documents the fuel modeling procedures of BEHAVE--a state-of-the-art wildland fire behavior prediction system. Described are procedures for collecting fuel data, using the data with the program, and testing and adjusting the fuel model.

  7. Test-retest reliability and predictive validity of the Implicit Association Test in children.

    Science.gov (United States)

    Rae, James R; Olson, Kristina R

    2018-02-01

    The Implicit Association Test (IAT) is increasingly used in developmental research despite minimal evidence of whether children's IAT scores are reliable across time or predictive of behavior. When test-retest reliability and predictive validity have been assessed, the results have been mixed, and because these studies have differed on many factors simultaneously (lag-time between testing administrations, domain, etc.), it is difficult to discern what factors may explain variability in existing test-retest reliability and predictive validity estimates. Across five studies (total N = 519; ages 6 to 11 years), we manipulated two factors that have varied in previous developmental research: lag-time and domain. An internal meta-analysis of these studies revealed that, across three different methods of analyzing the data, mean test-retest (rs of .48, .38, and .34) and predictive validity (rs of .46, .20, and .10) effect sizes were significantly greater than zero. While lag-time did not moderate the magnitude of test-retest coefficients, whether we observed domain differences in test-retest reliability and predictive validity estimates was contingent on other factors, such as how we scored the IAT or whether we included estimates from a unique sample (i.e., a sample containing gender-typical and gender-diverse children). Recommendations are made for developmental researchers who utilize the IAT in their research.
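
    Test-retest coefficients like the rs reported above are Pearson correlations between scores from the two administrations; a stdlib-only version:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists,
    e.g. first and second IAT administrations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```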

  8. Questioning the Faith - Models and Prediction in Stream Restoration (Invited)

    Science.gov (United States)

    Wilcock, P.

    2013-12-01

    River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet - that channels built to a certain template will be able to transport the supplied sediment with the available flow - has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.

  9. Predicted and actual indoor environmental quality: Verification of occupants' behaviour models in residential buildings

    DEFF Research Database (Denmark)

    Andersen, Rune Korsholm; Fabi, Valentina; Corgnati, Stefano P.

    2016-01-01

    with the building controls (windows, thermostats, solar shading etc.). During the last decade, studies about stochastic models of occupants' behaviour in relation to control of the indoor environment have been published. Often the overall aim of these models is to enable more reliable predictions of building...... performance using building energy performance simulations (BEPS). However, the validity of these models has only been sparsely tested. In this paper, stochastic models of occupants' behaviour from literature were tested against measurements in five apartments. In a monitoring campaign, measurements of indoor....... However, comparisons of the average stochastic predictions with the measured temperatures, relative humidity and CO2 concentrations revealed that the models did not predict the actual indoor environmental conditions well....

  10. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
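
    A statistical wind-power model of the kind described samples speeds from a fitted distribution and averages the cubic power law. The Monte Carlo sketch below assumes a Weibull speed distribution; the shape and scale values are illustrative, not fitted Goldstone parameters.

```python
import random

def mean_wind_power(shape, scale, rho=1.225, area=1.0, n=200000, seed=1):
    """Monte Carlo estimate of mean wind power P = 0.5 * rho * A * v**3 (W)
    with Weibull-distributed wind speeds (m/s). Parameters are illustrative."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        v = rng.weibullvariate(scale, shape)   # one sampled wind speed
        total += 0.5 * rho * area * v ** 3
    return total / n

p7 = mean_wind_power(2.0, 7.0)
```

    Because power goes as v³, the mean power is dominated by the upper tail of the speed distribution, which is why correlated (gusty) speed samples matter for simulation.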

  11. Thermal Testing and Model Correlation for Advanced Topographic Laser Altimeter Instrument (ATLAS)

    Science.gov (United States)

    Patel, Deepak

    2016-01-01

The Advanced Topographic Laser Altimeter System (ATLAS), part of the Ice Cloud and Land Elevation Satellite 2 (ICESat-2), is an upcoming Earth Science mission focusing on the effects of climate change. The flight instrument passed all environmental testing at GSFC (Goddard Space Flight Center) and is now ready to be shipped to the spacecraft vendor for integration and testing. This topic covers the analysis leading up to the test setup for ATLAS thermal testing as well as model correlation to flight predictions. The test setup analysis section covers areas where ATLAS could not meet flight-like conditions and the associated limitations. The model correlation section walks through the changes that had to be made to the thermal model in order to match test results. The correlated model will then be integrated with the spacecraft model for on-orbit predictions.

  12. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple - one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions - to the complex - multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  13. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  14. Reaction times to weak test lights. [psychophysics biological model

    Science.gov (United States)

    Wandell, B. A.; Ahumada, P.; Welsh, D.

    1984-01-01

Maloney and Wandell (1984) describe a model of the response of a single visual channel to weak test lights. The initial channel response is a linearly filtered version of the stimulus. The filter output is randomly sampled over time. Each time a sample occurs, there is some probability - increasing with the magnitude of the sampled response - that a discrete detection event is generated. Maloney and Wandell derive the statistics of the detection events. In this paper the hypothesis is tested that reaction time responses to the presence of a weak test light are initiated at the first detection event. This makes it possible to extend the application of the model to lights that are slightly above threshold, but still within the linear operating range of the visual system. A parameter-free prediction of the model proposed by Maloney and Wandell for lights detected by this statistic is tested. The data are in agreement with the prediction.

  15. Predicting Power Outages Using Multi-Model Ensemble Forecasts

    Science.gov (United States)

    Cerrai, D.; Anagnostou, E. N.; Yang, J.; Astitha, M.

    2017-12-01

Power outages affect millions of people in the United States every year, disrupting the economy and everyday life. An Outage Prediction Model (OPM) has been developed at the University of Connecticut to help utilities quickly restore power and limit the adverse consequences of outages on the population. The OPM, operational since 2015, combines several non-parametric machine learning (ML) models that use historical weather storm simulations and high-resolution weather forecasts, satellite remote sensing data, and infrastructure and land cover data to predict the number and spatial distribution of power outages. This study presents a new methodology for improving outage model performance by combining weather- and soil-related variables from three different weather models (WRF 3.7, WRF 3.8 and RAMS/ICLAMS). First, we present a performance evaluation of each model variable, comparing historical weather analyses with station data or reanalyses over the entire storm data set. Each variable of the new outage model version is then extracted from the weather model that performs best for that variable, and sensitivity tests are performed to identify the most effective variable combination for outage prediction. Although the final variable combination draws on different weather models, this ensemble based on multi-weather forcing and multiple statistical models outperforms the currently operational OPM version based on a single weather forcing (WRF 3.7), because each model component is closest to the actual atmospheric state.
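The per-variable selection step described above can be sketched as choosing, for each predictor, the weather model with the lowest RMSE against station observations. The model names follow the abstract; the data layout and numbers below are purely illustrative:

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between forecasts and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def best_model_per_variable(forecasts, observations):
    """For each variable, pick the weather model with the lowest RMSE.

    forecasts: dict model_name -> dict variable -> list of values
    observations: dict variable -> list of observed values
    (hypothetical data layout, for illustration only)"""
    choice = {}
    for var, obs in observations.items():
        scores = {m: rmse(f[var], obs) for m, f in forecasts.items() if var in f}
        choice[var] = min(scores, key=scores.get)
    return choice

forecasts = {
    "WRF_3.7":     {"wind_gust": [18.0, 22.5, 30.1], "soil_moisture": [0.31, 0.28, 0.25]},
    "WRF_3.8":     {"wind_gust": [17.2, 21.0, 28.4], "soil_moisture": [0.33, 0.30, 0.27]},
    "RAMS_ICLAMS": {"wind_gust": [19.5, 24.0, 31.0], "soil_moisture": [0.30, 0.27, 0.24]},
}
observations = {"wind_gust": [17.0, 21.3, 28.0], "soil_moisture": [0.30, 0.27, 0.24]}
selection = best_model_per_variable(forecasts, observations)
```

The selected variables from different models are then fed jointly to the outage model, which is the multi-weather-forcing idea in the abstract.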

  16. Prediction Model for Relativistic Electrons at Geostationary Orbit

    Science.gov (United States)

    Khazanov, George V.; Lyatsky, Wladislaw

    2008-01-01

We developed a new prediction model for forecasting relativistic (greater than 2 MeV) electrons that provides a very high correlation between predicted and measured electron fluxes at geostationary orbit. The model assumes multi-step particle acceleration and is based on numerically integrating two linked continuity equations for primarily accelerated particles and relativistic electrons. The model includes a source and losses, and uses solar wind data as its only input. As the source we used a coupling function, a best-fit combination of solar wind/interplanetary magnetic field parameters responsible for the generation of geomagnetic activity. The loss function was derived from experimental data. We tested the model over the four-year period 2004-2007. The correlation coefficient between predicted and actual electron fluxes, for the whole four-year period as well as for each individual year, is stable and high (about 0.9). This high and stable correlation between computed and actual electron fluxes shows that reliable forecasting of these electrons at geostationary orbit is possible.
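A toy forward-Euler version of two linked continuity equations - a seed population driven by a solar-wind coupling source that in turn feeds a relativistic population, each with a loss term - illustrates the multi-step acceleration scheme. The time step, loss times, and source values here are invented for illustration; the paper's coupling and loss functions are empirical fits not reproduced here:

```python
def simulate_fluxes(source, dt=0.1, tau1=12.0, tau2=24.0):
    """Forward-Euler integration of a toy two-population scheme:
    n1 = seed (primarily accelerated) particles, n2 = relativistic
    electrons. `source` plays the role of the solar-wind coupling
    function; tau1 and tau2 are illustrative loss times."""
    n1 = n2 = 0.0
    relativistic = []
    for s in source:
        n1 += dt * (s - n1 / tau1)          # seed population: source minus losses
        n2 += dt * (n1 / tau2 - n2 / tau2)  # fed by n1, minus its own losses
        relativistic.append(n2)
    return relativistic

# Constant driving: the relativistic flux rises toward equilibrium with a lag
fluxes = simulate_fluxes([1.0] * 500)
```

The lag of `n2` behind `n1` is what makes this a multi-step scheme: relativistic fluxes respond to solar wind driving only after the seed population has built up.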

  17. Numerical Modelling and Prediction of Erosion Induced by Hydrodynamic Cavitation

    Science.gov (United States)

    Peters, A.; Lantermann, U.; el Moctar, O.

    2015-12-01

The present work aims to predict cavitation erosion using a numerical flow solver together with a newly developed erosion model. The erosion model is based on the hypothesis that collapses of single cavitation bubbles near solid boundaries form high-velocity microjets, which cause sonic impacts with high pressure amplitudes damaging the surface. The erosion model uses information from a numerical Euler-Euler flow simulation to predict erosion-sensitive areas and assess the erosion aggressiveness of the flow. The obtained numerical results were compared to experimental results from tests of an axisymmetric nozzle.

  18. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

INTRODUCTION The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a

  19. RELAP5 Prediction of Transient Tests in the RD-14 Test Facility

    International Nuclear Information System (INIS)

    Lee, Sukho; Kim, Manwoong; Kim, Hho-Jung; Lee, John C.

    2005-01-01

Although the RELAP5 computer code has been developed for best-estimate transient simulation of a pressurized water reactor and its associated systems, it could not assess the thermal-hydraulic behavior of a Canada deuterium uranium (CANDU) reactor adequately. However, some studies have been initiated to explore the applicability for simulating a large-break loss-of-coolant accident in CANDU reactors. In the present study, the small-reactor inlet header break test and the steam generator secondary-side depressurization test conducted in the RD-14 test facility were simulated with the RELAP5/MOD3.2.2 code to examine its extended capability for all the postulated transients and accidents in CANDU reactors. The results were compared with experimental data and those of the CATHENA code performed by Atomic Energy of Canada Limited. In the RELAP5 analyses, the heated sections in the facility were simulated as a multichannel with five pipe models, which have identical flow areas and hydraulic elevations, as well as a single-pipe model. The results of the small-reactor inlet header break and the steam generator secondary-side depressurization simulations predicted experimental data reasonably well. However, some discrepancies in the depressurization of the primary heat transport system after the header break and consequent time delay of the major phenomena were observed in the simulation of the small-reactor inlet header break test.

  20. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Science.gov (United States)

    King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin

    2011-01-01

Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedge's g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedge's g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.
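The c-index reported above is the probability that a randomly chosen case (someone who progressed to hazardous drinking) received a higher predicted risk than a randomly chosen non-case. A minimal computation on made-up risks and outcomes (not the study's data) looks like this:

```python
def c_index(risks, outcomes):
    """Concordance index: fraction of case/control pairs in which the
    case has the higher predicted risk; ties count 0.5."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    controls = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = 0.0
    for rc in cases:
        for rk in controls:
            if rc > rk:
                concordant += 1.0
            elif rc == rk:
                concordant += 0.5
    return concordant / (len(cases) * len(controls))

# Hypothetical predicted risks and 6-month outcomes (1 = hazardous drinking)
risks = [0.9, 0.4, 0.3, 0.2, 0.6]
outcomes = [1, 1, 0, 0, 0]
cidx = c_index(risks, outcomes)
```

A c-index of 0.5 is chance discrimination; the study's 0.839 means roughly five of every six case/control pairs were ranked correctly by the model.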

  1. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Directory of Open Access Journals (Sweden)

    Michael King

Full Text Available Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedge's g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedge's g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.

  2. Combining multiple models to generate consensus: Application to radiation-induced pneumonitis prediction

    Energy Technology Data Exchange (ETDEWEB)

    Das, Shiva K.; Chen Shifeng; Deasy, Joseph O.; Zhou Sumin; Yin Fangfang; Marks, Lawrence B. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 (United States); Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Radiation Oncology, University of North Carolina School of Medicine, Chapel Hill, North Carolina 27599 (United States)

    2008-11-15

The fusion of predictions from disparate models has been used in several fields to obtain a more realistic and robust estimate of the "ground truth" by allowing the models to reinforce each other when consensus exists, or, conversely, negate each other when there is no consensus. Fusion has been shown to be most effective when the models have some complementary strengths arising from different approaches. In this work, we fuse the results from four common but methodologically different nonlinear multivariate models (Decision Trees, Neural Networks, Support Vector Machines, Self-Organizing Maps) that were trained to predict radiation-induced pneumonitis risk on a database of 219 lung cancer patients treated with radiotherapy (34 with Grade 2+ postradiotherapy pneumonitis). Each model independently incorporated a small number of features from the available set of dose and nondose patient variables to predict pneumonitis; no two models had all features in common. Fusion was achieved by simple averaging of the predictions for each patient from all four models. Since a model's prediction for a patient can be dependent on the patient training set used to build the model, the average of several different predictions from each model was used in the fusion (predictions were made by repeatedly testing each patient with a model built from different cross-validation training sets that excluded the patient being tested). The area under the receiver operating characteristics curve for the fused cross-validated results was 0.79, with lower variance than the individual component models. From the fusion, five features were extracted as the consensus among all four models in predicting radiation pneumonitis. Arranged in order of importance, the features are (1) chemotherapy; (2) equivalent uniform dose (EUD) for exponent a=1.2 to 3; (3) EUD for a=0.5 to 1.2, lung volume receiving >20-30 Gy; (4) female sex; and (5) squamous cell histology. To facilitate
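The fusion step itself is a per-patient average of the four models' risk predictions. A minimal sketch with invented risk values (not the study's data):

```python
def fuse_predictions(model_predictions):
    """Average per-patient risk predictions across models.

    model_predictions: dict mapping model name -> list of per-patient
    pneumonitis risks, with all lists aligned to the same patients."""
    preds = list(model_predictions.values())
    n_patients = len(preds[0])
    return [sum(p[i] for p in preds) / len(preds) for i in range(n_patients)]

# Hypothetical per-patient risks from the four model types in the abstract
predictions = {
    "decision_tree": [0.80, 0.20, 0.55],
    "neural_net":    [0.70, 0.30, 0.45],
    "svm":           [0.90, 0.10, 0.50],
    "som":           [0.60, 0.40, 0.50],
}
fused = fuse_predictions(predictions)
```

In the study, each model's contribution to the average was itself an average over cross-validation runs that excluded the patient being tested, which reduces the dependence on any single training set.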

  3. A disaggregate model to predict the intercity travel demand

    Energy Technology Data Exchange (ETDEWEB)

    Damodaran, S.

    1988-01-01

This study was directed towards developing disaggregate models to predict the intercity travel demand in Canada. A conceptual framework for the intercity travel behavior was proposed; under this framework, a nested multinomial model structure that combined mode choice and trip generation was developed. The CTS (Canadian Travel Survey) data base was used for testing the structure and to determine the viability of using this data base for intercity travel-demand prediction. Mode-choice and trip-generation models were calibrated for four modes (auto, bus, rail and air) for both business and non-business trips. The models were linked through the inclusive value variable, also referred to as the log-sum of the denominator in the literature. Results of the study indicated that the structure used in this study could be applied for intercity travel-demand modeling. However, some limitations of the data base were identified. It is believed that, with some modifications, the CTS data could be used for predicting intercity travel demand. Future research can identify the factors affecting intercity travel behavior, which will facilitate collection of useful data for intercity travel prediction and policy analysis.

  4. Integrated predictive modelling simulations of burning plasma experiment designs

    International Nuclear Information System (INIS)

    Bateman, Glenn; Onjun, Thawatchai; Kritz, Arnold H

    2003-01-01

    Models for the height of the pedestal at the edge of H-mode plasmas (Onjun T et al 2002 Phys. Plasmas 9 5018) are used together with the Multi-Mode core transport model (Bateman G et al 1998 Phys. Plasmas 5 1793) in the BALDUR integrated predictive modelling code to predict the performance of the ITER (Aymar A et al 2002 Plasma Phys. Control. Fusion 44 519), FIRE (Meade D M et al 2001 Fusion Technol. 39 336), and IGNITOR (Coppi B et al 2001 Nucl. Fusion 41 1253) fusion reactor designs. The simulation protocol used in this paper is tested by comparing predicted temperature and density profiles against experimental data from 33 H-mode discharges in the JET (Rebut P H et al 1985 Nucl. Fusion 25 1011) and DIII-D (Luxon J L et al 1985 Fusion Technol. 8 441) tokamaks. The sensitivities of the predictions are evaluated for the burning plasma experimental designs by using variations of the pedestal temperature model that are one standard deviation above and below the standard model. Simulations of the fusion reactor designs are carried out for scans in which the plasma density and auxiliary heating power are varied

  5. Predictive modeling of mosquito abundance and dengue transmission in Kenya

    Science.gov (United States)

    Caldwell, J.; Krystosik, A.; Mutuku, F.; Ndenga, B.; LaBeaud, D.; Mordecai, E.

    2017-12-01

    Approximately 390 million people are exposed to dengue virus every year, and with no widely available treatments or vaccines, predictive models of disease risk are valuable tools for vector control and disease prevention. The aim of this study was to modify and improve climate-driven predictive models of dengue vector abundance (Aedes spp. mosquitoes) and viral transmission to people in Kenya. We simulated disease transmission using a temperature-driven mechanistic model and compared model predictions with vector trap data for larvae, pupae, and adult mosquitoes collected between 2014 and 2017 at four sites across urban and rural villages in Kenya. We tested predictive capacity of our models using four temperature measurements (minimum, maximum, range, and anomalies) across daily, weekly, and monthly time scales. Our results indicate seasonal temperature variation is a key driving factor of Aedes mosquito abundance and disease transmission. These models can help vector control programs target specific locations and times when vectors are likely to be present, and can be modified for other Aedes-transmitted diseases and arboviral endemic regions around the world.
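Temperature-driven mechanistic models of this kind typically represent mosquito and virus traits as unimodal thermal-response curves. The Briere form below is one common choice in the vector-ecology literature; the thermal limits and scaling constant are illustrative, not values fitted in this study:

```python
import math

def briere(temp_c, t_min=13.0, t_max=34.0, c=1.0e-4):
    """Briere thermal response: zero outside (t_min, t_max), unimodal
    inside, peaking in the upper 20s C for these illustrative values.
    Used for traits such as biting rate or development rate."""
    if temp_c <= t_min or temp_c >= t_max:
        return 0.0
    return c * temp_c * (temp_c - t_min) * math.sqrt(t_max - temp_c)
```

Because the curve is unimodal, seasonal temperature swings move sites into and out of the transmission-suitable window, which is why seasonal variation dominates predicted abundance.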

  6. Functional test of pedotransfer functions to predict water flow and solute transport with the dual-permeability model MACRO

    Directory of Open Access Journals (Sweden)

    J. Moeys

    2012-07-01

Full Text Available Estimating pesticide leaching risks at the regional scale requires the ability to completely parameterise a pesticide fate model using only survey data, such as soil and land-use maps. Such parameterisations usually rely on a set of lookup tables and (pedo)transfer functions, relating elementary soil and site properties to model parameters. The aim of this paper is to describe and test a complete set of parameter estimation algorithms developed for the pesticide fate model MACRO, which accounts for preferential flow in soil macropores. We used tracer monitoring data from 16 lysimeter studies, carried out in three European countries, to evaluate the ability of MACRO and this "blind parameterisation" scheme to reproduce measured solute leaching at the base of each lysimeter. We focused on the prediction of early tracer breakthrough due to preferential flow, because this is critical for pesticide leaching. We then calibrated a selected number of parameters in order to assess to what extent the prediction of water and solute leaching could be improved.

Our results show that water flow was generally reasonably well predicted (median model efficiency, ME, of 0.42). Although the general pattern of solute leaching was reproduced well by the model, the overall model efficiency was low (median ME = −0.26) due to errors in the timing and magnitude of some peaks. Preferential solute leaching at early pore volumes was also systematically underestimated. Nonetheless, the ranking of soils according to solute loads at early pore volumes was reasonably well estimated (concordance correlation coefficient, CCC, between 0.54 and 0.72). Moreover, we also found that ignoring macropore flow leads to a significant deterioration in the ability of the model to reproduce the observed leaching pattern, and especially the early breakthrough in some soils. Finally, the calibration procedure showed that improving the estimation of solute transport parameters is
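The model efficiency (ME) quoted above is the Nash-Sutcliffe efficiency: one minus the ratio of residual variance to the variance of the observations, so ME = 1 is a perfect fit and ME < 0 (as for the median solute-leaching ME of −0.26) means the model predicts worse than simply using the observed mean. A minimal sketch on invented data:

```python
def model_efficiency(sim, obs):
    """Nash-Sutcliffe model efficiency (ME):
    1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for s, o in zip(sim, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [1.0, 2.0, 3.0, 4.0]
good = model_efficiency([1.1, 1.9, 3.2, 3.8], obs)   # close to 1
flat = model_efficiency([2.5, 2.5, 2.5, 2.5], obs)   # exactly the mean: ME = 0
```

Errors in peak timing are punished heavily by the squared residuals, which is how a model can reproduce the overall leaching pattern yet still score a negative ME.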

  7. Functional test of pedotransfer functions to predict water flow and solute transport with the dual-permeability model MACRO

    Science.gov (United States)

    Moeys, J.; Larsbo, M.; Bergström, L.; Brown, C. D.; Coquet, Y.; Jarvis, N. J.

    2012-07-01

    Estimating pesticide leaching risks at the regional scale requires the ability to completely parameterise a pesticide fate model using only survey data, such as soil and land-use maps. Such parameterisations usually rely on a set of lookup tables and (pedo)transfer functions, relating elementary soil and site properties to model parameters. The aim of this paper is to describe and test a complete set of parameter estimation algorithms developed for the pesticide fate model MACRO, which accounts for preferential flow in soil macropores. We used tracer monitoring data from 16 lysimeter studies, carried out in three European countries, to evaluate the ability of MACRO and this "blind parameterisation" scheme to reproduce measured solute leaching at the base of each lysimeter. We focused on the prediction of early tracer breakthrough due to preferential flow, because this is critical for pesticide leaching. We then calibrated a selected number of parameters in order to assess to what extent the prediction of water and solute leaching could be improved. Our results show that water flow was generally reasonably well predicted (median model efficiency, ME, of 0.42). Although the general pattern of solute leaching was reproduced well by the model, the overall model efficiency was low (median ME = -0.26) due to errors in the timing and magnitude of some peaks. Preferential solute leaching at early pore volumes was also systematically underestimated. Nonetheless, the ranking of soils according to solute loads at early pore volumes was reasonably well estimated (concordance correlation coefficient, CCC, between 0.54 and 0.72). Moreover, we also found that ignoring macropore flow leads to a significant deterioration in the ability of the model to reproduce the observed leaching pattern, and especially the early breakthrough in some soils. 
Finally, the calibration procedure showed that improving the estimation of solute transport parameters is probably more important than the

  8. Early experiences building a software quality prediction model

    Science.gov (United States)

    Agresti, W. W.; Evanco, W. M.; Smith, M. C.

    1990-01-01

    Early experiences building a software quality prediction model are discussed. The overall research objective is to establish a capability to project a software system's quality from an analysis of its design. The technical approach is to build multivariate models for estimating reliability and maintainability. Data from 21 Ada subsystems were analyzed to test hypotheses about various design structures leading to failure-prone or unmaintainable systems. Current design variables highlight the interconnectivity and visibility of compilation units. Other model variables provide for the effects of reusability and software changes. Reported results are preliminary because additional project data is being obtained and new hypotheses are being developed and tested. Current multivariate regression models are encouraging, explaining 60 to 80 percent of the variation in error density of the subsystems.

  9. [Prediction of 137Cs accumulation in animal products in the territory of Semipalatinsk test site].

    Science.gov (United States)

    Spiridonov, S I; Gontarenko, I A; Mukusheva, M K; Fesenko, S V; Semioshkina, N A

    2005-01-01

The paper describes mathematical models for 137Cs behavior in the organism of horses and sheep pasturing on the area bordering the "Ground Zero" testing area of the Semipalatinsk Test Site. The models were parameterized on the basis of data from an experiment with the breeds of animals now commonly encountered within the Semipalatinsk Test Site. Predictive calculations with the models devised have shown that 137Cs concentrations in the milk of horses and sheep pasturing on the area bordering "Ground Zero" can exceed the adopted standards for a long period of time.

  10. Prediction of psychological functioning one year after the predictive test for Huntington's disease and impact of the test result on reproductive decision making.

    Science.gov (United States)

    Decruyenaere, M; Evers-Kiebooms, G; Boogaerts, A; Cassiman, J J; Cloostermans, T; Demyttenaere, K; Dom, R; Fryns, J P; Van den Berghe, H

    1996-01-01

    For people at risk for Huntington's disease, the anxiety and uncertainty about the future may be very burdensome and may be an obstacle to personal decision making about important life issues, for example, procreation. For some at risk persons, this situation is the reason for requesting predictive DNA testing. The aim of this paper is two-fold. First, we want to evaluate whether knowing one's carrier status reduces anxiety and uncertainty and whether it facilitates decision making about procreation. Second, we endeavour to identify pretest predictors of psychological adaptation one year after the predictive test (psychometric evaluation of general anxiety, depression level, and ego strength). The impact of the predictive test result was assessed in 53 subjects tested, using pre- and post-test psychometric measurement and self-report data of follow up interviews. Mean anxiety and depression levels were significantly decreased one year after a good test result; there was no significant change in the case of a bad test result. The mean personality profile, including ego strength, remained unchanged one year after the test. The study further shows that the test result had a definite impact on reproductive decision making. Stepwise multiple regression analyses were used to select the best predictors of the subject's post-test reactions. The results indicate that a careful evaluation of pretest ego strength, depression level, and coping strategies may be helpful in predicting post-test reactions, independently of the carrier status. Test result (carrier/ non-carrier), gender, and age did not significantly contribute to the prediction. About one third of the variance of post-test anxiety and depression level and more than half of the variance of ego strength was explained, implying that other psychological or social aspects should also be taken into account when predicting individual post-test reactions. PMID:8880572

  11. How to test for partially predictable chaos.

    Science.gov (United States)

    Wernecke, Hendrik; Sándor, Bulcsú; Gros, Claudius

    2017-04-24

    For a chaotic system pairs of initially close-by trajectories become eventually fully uncorrelated on the attracting set. This process of decorrelation can split into an initial exponential decrease and a subsequent diffusive process on the chaotic attractor causing the final loss of predictability. Both processes can be either of the same or of very different time scales. In the latter case the two trajectories linger within a finite but small distance (with respect to the overall extent of the attractor) for exceedingly long times and remain partially predictable. Standard tests for chaos widely use inter-orbital correlations as an indicator. However, testing partially predictable chaos yields mostly ambiguous results, as this type of chaos is characterized by attractors of fractally broadened braids. For a resolution we introduce a novel 0-1 indicator for chaos based on the cross-distance scaling of pairs of initially close trajectories. This test robustly discriminates chaos, including partially predictable chaos, from laminar flow. Additionally using the finite time cross-correlation of pairs of initially close trajectories, we are able to identify laminar flow as well as strong and partially predictable chaos in a 0-1 manner solely from the properties of pairs of trajectories.
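The idea behind the cross-distance test can be illustrated on the logistic map: start two trajectories a tiny distance apart and check whether their separation grows to the scale of the attractor (chaos) or stays microscopic (regular, laminar motion). This is a crude sketch of the principle only, not the paper's 0-1 indicator, which additionally examines how the cross-distance scales with the initial separation:

```python
def trajectory(x0, r, n):
    """Iterate the logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def late_cross_distance(r, delta=1e-8, n=400):
    """Largest separation of two initially close trajectories over the
    last 50 steps: order one for chaos, microscopic for regular motion."""
    a = trajectory(0.4, r, n)
    b = trajectory(0.4 + delta, r, n)
    return max(abs(x - y) for x, y in zip(a[-50:], b[-50:]))

chaotic = late_cross_distance(4.0)   # fully chaotic logistic map
regular = late_cross_distance(3.2)   # stable period-2 orbit
```

Partially predictable chaos is precisely the case this crude check misses: the separation saturates at a small but finite fraction of the attractor size, which is why the authors base their indicator on the scaling of the cross-distance rather than on a single threshold.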

  12. Modeling and Prediction of Soil Water Vapor Sorption Isotherms

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per

    2015-01-01

    Soil water vapor sorption isotherms describe the relationship between water activity (aw) and moisture content along adsorption and desorption paths. The isotherms are important for modeling numerous soil processes and are also used to estimate several soil (specific surface area, clay content.......93) for a wide range of soils; and (ii) develop and test regression models for estimating the isotherms from clay content. Preliminary results show reasonable fits of the majority of the investigated empirical and theoretical models to the measured data although some models were not capable of fitting both sorption...... directions accurately. Evaluation of the developed prediction equations showed good estimation of the sorption/desorption isotherms for tested soils....

  13. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  14. Prediction of the Individual Wave Overtopping Volumes of a Wave Energy Converter using Experimental Testing and First Numerical Model Results

    DEFF Research Database (Denmark)

    Victor, L.; Troch, P.; Kofoed, Jens Peter

    2009-01-01

    For overtopping wave energy converters (WECs) a more efficient energy conversion can be achieved when the volumes of water, wave by wave, that enter their reservoir are known and can be predicted. A numerical tool is being developed using a commercial CFD-solver to study and optimize...... nearshore 2Dstructure. First numerical model results are given for a specific test with regular waves, and are compared with the corresponding experimental results in this paper....

  15. Testing predictive models of positive and negative affect with psychosocial, acculturation, and coping variables in a multiethnic undergraduate sample.

    Science.gov (United States)

    Kuo, Ben Ch; Kwantes, Catherine T

    2014-01-01

    Despite the prevalence and popularity of research on positive and negative affect within the field of psychology, there is currently little research on affect involving the examination of cultural variables and with participants of diverse cultural and ethnic backgrounds. To the authors' knowledge, currently no empirical studies have comprehensively examined predictive models of positive and negative affect based specifically on multiple psychosocial, acculturation, and coping variables as predictors with any sample populations. Therefore, the purpose of the present study was to test the predictive power of perceived stress, social support, bidirectional acculturation (i.e., Canadian acculturation and heritage acculturation), religious coping and cultural coping (i.e., collective, avoidance, and engagement coping) in explaining positive and negative affect in a multiethnic sample of 301 undergraduate students in Canada. Two hierarchal multiple regressions were conducted, one for each affect as the dependent variable, with the above described predictors. The results supported the hypotheses and showed the two overall models to be significant in predicting affect of both kinds. Specifically, a higher level of positive affect was predicted by a lower level of perceived stress, less use of religious coping, and more use of engagement coping in dealing with stress by the participants. Higher level of negative affect, however, was predicted by a higher level of perceived stress and more use of avoidance coping in responding to stress. The current findings highlight the value and relevance of empirically examining the stress-coping-adaptation experiences of diverse populations from an affective conceptual framework, particularly with the inclusion of positive affect. Implications and recommendations for advancing future research and theoretical works in this area are considered and presented.

  16. The Prediction of Item Parameters Based on Classical Test Theory and Latent Trait Theory

    Science.gov (United States)

    Anil, Duygu

    2008-01-01

    In this study, the predictive power of experts' judgments of item characteristics, for conditions in which try-out practices cannot be applied, was examined against item characteristics computed according to classical test theory and the two-parameter logistic model of latent trait theory. The study was carried out on 9914 randomly selected students…

  17. A deep learning-based multi-model ensemble method for cancer prediction.

    Science.gov (United States)

    Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong

    2018-01-01

    Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
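    The ensembling stage described in this record can be sketched on synthetic data. The paper trains a deep network on the five classifiers' outputs; the minimal stand-in below uses a logistic-regression meta-learner instead, and the base classifiers are simulated with assumed accuracies, so everything here is illustrative rather than the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)

# five simulated base classifiers of very different quality
accs = [0.90, 0.55, 0.55, 0.55, 0.55]
X = np.column_stack([
    np.where(rng.random(n) < a, y, 1 - y).astype(float) * 0.8 + 0.1
    for a in accs
])  # each column: that classifier's predicted probability of class 1 (0.9 or 0.1)

train, test = slice(0, 1000), slice(1000, None)

# stacking stage: a logistic-regression meta-learner trained by gradient
# descent on the base classifiers' outputs
w, b = np.zeros(5), 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(X[train] @ w + b)))
    g = p - y[train]
    w -= 0.5 * (X[train].T @ g) / len(g)
    b -= 0.5 * g.mean()

meta_acc = float(np.mean((X[test] @ w + b > 0) == y[test]))
vote_acc = float(np.mean((X[test].sum(axis=1) > 2.5) == y[test]))  # majority vote
```

    Because one base classifier is far more reliable than the rest, the learned combination outperforms majority voting, which is the motivation the abstract gives for learning the ensembling stage rather than voting.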

  18. The North American Multi-Model Ensemble (NMME): Phase-1 Seasonal to Interannual Prediction, Phase-2 Toward Developing Intra-Seasonal Prediction

    Science.gov (United States)

    Kirtman, Ben P.; Min, Dughong; Infanti, Johnna M.; Kinter, James L., III; Paolino, Daniel A.; Zhang, Qin; vandenDool, Huug; Saha, Suranjana; Mendez, Malaquias Pena; Becker, Emily; hide

    2013-01-01

    The recent US National Academies report "Assessment of Intraseasonal to Interannual Climate Prediction and Predictability" was unequivocal in recommending the need for the development of a North American Multi-Model Ensemble (NMME) operational predictive capability. Indeed, this effort is required to meet the specific tailored regional prediction and decision support needs of a large community of climate information users. The multi-model ensemble approach has proven extremely effective at quantifying prediction uncertainty due to uncertainty in model formulation, and has proven to produce better prediction quality (on average) than any single model ensemble. This multi-model approach is the basis for several international collaborative prediction research efforts and an operational European system, and there are numerous examples of how this multi-model ensemble approach yields superior forecasts compared to any single model. Based on two NOAA Climate Test Bed (CTB) NMME workshops (February 18 and April 8, 2011) a collaborative and coordinated implementation strategy for a NMME prediction system has been developed and is currently delivering real-time seasonal-to-interannual predictions on the NOAA Climate Prediction Center (CPC) operational schedule. The hindcast and real-time prediction data are readily available (e.g., http://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/) and in graphical format from CPC (http://origin.cpc.ncep.noaa.gov/products/people/wd51yf/NMME/index.html). Moreover, the NMME forecasts are already being used as guidance for operational forecasters. This paper describes the new NMME effort, presents an overview of the multi-model forecast quality, and the complementary skill associated with individual models.
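    The core statistical benefit of a multi-model ensemble, that the ensemble mean averages out partially independent model errors, can be sketched with synthetic forecasts (the signal, biases, and noise levels below are illustrative assumptions, not NMME data):

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 4.0 * np.pi, 200)
truth = np.sin(t)                      # the "observed" anomaly to be forecast

# four hypothetical models, each with its own bias and independent noise
biases = (0.3, -0.2, 0.1, -0.1)
forecasts = np.array([truth + b + rng.normal(0.0, 0.5, t.size) for b in biases])

def rmse(f):
    """Root-mean-square error of a forecast against the truth."""
    return float(np.sqrt(np.mean((f - truth) ** 2)))

single = [rmse(f) for f in forecasts]          # skill of each individual model
ensemble = rmse(forecasts.mean(axis=0))        # skill of the multi-model mean
```

    With partially independent errors, the multi-model mean cancels much of the noise and offsetting biases, so its RMSE falls below that of every individual model in this sketch.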

  19. Researches of fruit quality prediction model based on near infrared spectrum

    Science.gov (United States)

    Shen, Yulin; Li, Lian

    2018-04-01

    With the improvement in standards for food quality and safety, people pay more attention to the internal quality of fruits; therefore, the measurement of fruit internal quality is increasingly imperative. In general, nondestructive soluble solid content (SSC) and total acid content (TAC) analysis of fruits is vital and effective for quality measurement in global fresh produce markets, so in this paper we aim at establishing a novel fruit internal quality prediction model based on SSC and TAC for the near infrared spectrum. Firstly, fruit quality prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP adaboost strong classifier, PCA + ELM and PCA + LS_SVM classifiers are designed and implemented. Then, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples. Thirdly, we obtain the optimal models by comparing 15 kinds of prediction models based on a multi-classifier competition mechanism; specifically, non-parametric estimation is introduced to measure the effectiveness of each proposed model, with the reliability and variance of the non-parametric evaluation of each prediction model used to assess the prediction results, and the estimated value and confidence interval regarded as a reference. The experimental results demonstrate that this approach achieves an optimal evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two best models obtained from non-parametric estimation; empirical testing indicates that the proposed method provides more accurate and effective results than other forecasting methods.

  20. Developing a clinical utility framework to evaluate prediction models in radiogenomics

    Science.gov (United States)

    Wu, Yirong; Liu, Jie; Munoz del Rio, Alejandro; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.

    2015-03-01

    Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called "radiogenomics." Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data comes from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms-SNPs), and mammographic features in Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models built on different sets of predictive features: (1) Gail, (2) Gail+SNP, and (3) Gail+SNP+BI-RADS. Then, we generated ROC curves for three models. After we assigned utility values for each category of findings (true negative, false positive, false negative and true positive), we pursued optimal operating points on ROC curves to achieve maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar's test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under ROC curve (AUC) and MEU. SNPs improved sensitivity of the Gail model (0.276 vs. 0.147) and reduced specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457 and specificity to 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision framework comprising utility analysis and McNemar's test provides a novel framework to evaluate prediction models in the realm of radiogenomics.
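    The maximum-expected-utility step described in this record, choosing the ROC operating point that maximizes expected utility given per-outcome utilities and disease prevalence, can be sketched as follows. The ROC points, utility values, and prevalence are hypothetical, not the study's fitted numbers:

```python
# candidate operating points on an ROC curve: (false-positive rate, true-positive rate)
roc_points = [(0.00, 0.00), (0.05, 0.40), (0.10, 0.60),
              (0.20, 0.75), (0.40, 0.90), (1.00, 1.00)]

# hypothetical utilities per outcome category (arbitrary scale)
U_TP, U_FN = 0.95, 0.0   # diseased patient: correctly worked up vs missed
U_TN, U_FP = 1.00, 0.90  # healthy patient: cleared vs unnecessary work-up
prevalence = 0.05

def expected_utility(fpr, tpr, prev=prevalence):
    """Expected utility of operating the classifier at an ROC point."""
    return (prev * (tpr * U_TP + (1.0 - tpr) * U_FN)
            + (1.0 - prev) * (fpr * U_FP + (1.0 - fpr) * U_TN))

# the maximum-expected-utility (MEU) operating point
best = max(roc_points, key=lambda pt: expected_utility(*pt))
```

    At low prevalence the MEU point sits toward the low-false-positive end of the curve; changing the utilities or prevalence moves the optimal operating point, which is exactly why a utility framework can rank models differently from AUC alone.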

  1. Predicting adherence to combination antiretroviral therapy for HIV in Tanzania: A test of an extended theory of planned behaviour model.

    Science.gov (United States)

    Banas, Kasia; Lyimo, Ramsey A; Hospers, Harm J; van der Ven, Andre; de Bruin, Marijn

    2017-10-01

    Combination antiretroviral therapy (cART) for HIV is widely available in sub-Saharan Africa. Adherence is crucial to successful treatment. This study aimed to apply an extended theory of planned behaviour (TPB) model to predict objectively measured adherence to cART in Tanzania. Prospective observational study (n = 158) where patients completed questionnaires on demographics (Month 0), socio-cognitive variables including intentions (Month 1), and action planning and self-regulatory processes hypothesised to mediate the intention-behaviour relationship (Month 3), to predict adherence (Month 5). Taking adherence was measured objectively using the Medication Events Monitoring System (MEMS) caps. Model tests were conducted using regression and bootstrap mediation analyses. Perceived behavioural control (PBC) was positively related to adherence intentions (β = .767, p < .001). This study, using an objective behavioural measure, identified PBC as the main driver of adherence intentions. The effect of intentions on adherence was only indirect through self-regulatory processes, which were the main predictor of objectively assessed adherence.

  2. Validation test for CAP88 predictions of tritium dispersion at Los Alamos National Laboratory.

    Science.gov (United States)

    Michelotti, Erika; Green, Andrew; Whicker, Jeffrey; Eisele, William; Fuehne, David; McNaughton, Michael

    2013-08-01

    Gaussian plume models, such as CAP88, are used regularly for estimating downwind concentrations from stack emissions. At many facilities, the U.S. Environmental Protection Agency (U.S. EPA) requires that CAP88 be used to demonstrate compliance with air quality regulations for public protection from emissions of radionuclides. Gaussian plume models have the advantage of being relatively simple and their use pragmatic; however, these models are based on simplifying assumptions and generally they are not capable of incorporating dynamic meteorological conditions or complex topography. These limitations encourage validation tests to understand the capabilities and limitations of the model for the specific application. Los Alamos National Laboratory (LANL) has complex topography but is required to use CAP88 for compliance with the Clean Air Act Subpart H. The purpose of this study was to test the accuracy of the CAP88 predictions against ambient air measurements using released tritium as a tracer. Stack emissions of tritium from two LANL stacks were measured and the dispersion modeled with CAP88 using local meteorology. Ambient air measurements of tritium were made at various distances and directions from the stacks. Model predictions and ambient air measurements were compared over the course of a full year's data. Comparative results were consistent with other studies and showed the CAP88 predictions of downwind tritium concentrations were on average about three times higher than those measured, and the accuracy of the model predictions were generally more consistent for annual averages than for bi-weekly data.
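    The Gaussian plume kernel underlying models of this kind can be sketched for a ground-level, centerline receptor downwind of an elevated point source. This is the textbook formula with simple power-law dispersion coefficients, not CAP88's actual implementation, and the coefficient values below are illustrative assumptions:

```python
import math

def plume_concentration(Q, u, x, H, a_y=0.08, a_z=0.06):
    """Ground-level centerline concentration at downwind distance x (m) from a
    continuous point source of strength Q (e.g. Bq/s) at effective height H (m),
    for wind speed u (m/s). The power-law dispersion coefficients a_y, a_z are
    illustrative stand-ins for stability-class curves."""
    sigma_y = a_y * x ** 0.9
    sigma_z = a_z * x ** 0.9
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-H ** 2 / (2 * sigma_z ** 2)))

# concentration rises then falls with distance: near the stack the plume has
# not yet mixed down to the ground; far away it is diluted
profile = [plume_concentration(1.0, 5.0, x, 30.0) for x in (100.0, 1000.0, 10000.0)]
```

    The flat terrain, steady wind, and single stability class baked into this kernel are precisely the simplifying assumptions that motivate validation studies like this one at sites with complex topography.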

  3. Damage assessment of low-cycle fatigue by crack growth prediction. Development of growth prediction model and its application

    International Nuclear Information System (INIS)

    Kamaya, Masayuki; Kawakubo, Masahiro

    2012-01-01

    In this study, the fatigue damage was assumed to be equivalent to the crack initiation and its growth, and fatigue life was assessed by predicting the crack growth. First, a low-cycle fatigue test was conducted in air at room temperature under constant cyclic strain range of 1.2%. The crack initiation and change in crack size during the test were examined by replica investigation. It was found that a crack of 41.2 μm length was initiated almost at the beginning of the test. The identified crack growth rate was shown to correlate well with the strain intensity factor, whose physical meaning was discussed in this study. The fatigue life prediction model (equation) under constant strain range was derived by integrating the crack growth equation defined using the strain intensity factor, and the predicted fatigue lives were almost identical to those obtained by low-cycle fatigue tests. The change in crack depth predicted by the equation also agreed well with the experimental results. Based on the crack growth prediction model, it was shown that the crack size would be less than 0.1 mm even when the estimated fatigue damage exceeded the critical value of the design fatigue curve, in which a twenty-fold safety margin was used for the assessment. It was revealed that the effect of component size and surface roughness, which have been investigated empirically by fatigue tests, could be reasonably explained by considering the crack initiation and growth. Furthermore, the environmental effect on the fatigue life was shown to be brought about by the acceleration of crack growth. (author)
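    The life-prediction idea in this record, integrating a crack growth law written in terms of a strain intensity factor from the initiated crack size to failure, can be sketched numerically. The growth-law constants, the failure size, and the assumed form of the strain intensity factor below are illustrative, not the paper's fitted values:

```python
import math

# illustrative constants only -- not the paper's fitted values
C_growth, m = 1e-3, 2.0        # crack growth law: da/dN = C_growth * (dK_eps)**m
d_eps = 0.012                  # applied strain range (1.2 %)
a0, af = 41.2e-6, 3.0e-3       # initial crack length and assumed failure size (m)

def growth_rate(a):
    """da/dN from an assumed strain intensity factor dK_eps = d_eps*sqrt(pi*a)."""
    dK_eps = d_eps * math.sqrt(math.pi * a)
    return C_growth * dK_eps ** m

# fatigue life: integrate dN = da / (da/dN) from a0 to af (midpoint rule)
steps = 10000
da = (af - a0) / steps
N_f = sum(da / growth_rate(a0 + (i + 0.5) * da) for i in range(steps))
```

    For the special case m = 2 the integral has the closed form N_f = ln(af/a0) / (C_growth * d_eps**2 * pi), which the numerical integration reproduces; for other exponents the same loop applies unchanged.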

  4. QSAR Modeling and Prediction of Drug-Drug Interactions.

    Science.gov (United States)

    Zakharov, Alexey V; Varlamova, Ekaterina V; Lagunin, Alexey A; Dmitriev, Alexander V; Muratov, Eugene N; Fourches, Denis; Kuz'min, Victor E; Poroikov, Vladimir V; Tropsha, Alexander; Nicklaus, Marc C

    2016-02-01

    Severe adverse drug reactions (ADRs) are the fourth leading cause of fatality in the U.S. with more than 100,000 deaths per year. As up to 30% of all ADRs are believed to be caused by drug-drug interactions (DDIs), typically mediated by cytochrome P450s, possibilities to predict DDIs from existing knowledge are important. We collected data from public sources on 1485, 2628, 4371, and 27,966 possible DDIs mediated by four cytochrome P450 isoforms 1A2, 2C9, 2D6, and 3A4 for 55, 73, 94, and 237 drugs, respectively. For each of these data sets, we developed and validated QSAR models for the prediction of DDIs. As a unique feature of our approach, the interacting drug pairs were represented as binary chemical mixtures in a 1:1 ratio. We used two types of chemical descriptors: quantitative neighborhoods of atoms (QNA) and simplex descriptors. Radial basis functions with self-consistent regression (RBF-SCR) and random forest (RF) were utilized to build QSAR models predicting the likelihood of DDIs for any pair of drug molecules. Our models showed balanced accuracy of 72-79% for the external test sets with a coverage of 81.36-100% when a conservative threshold for the model's applicability domain was applied. We generated virtually all possible binary combinations of marketed drugs and employed our models to identify drug pairs predicted to be instances of DDI. More than 4500 of these predicted DDIs that were not found in our training sets were confirmed by data from the DrugBank database.

  5. On determining the prediction limits of mathematical models for time series

    International Nuclear Information System (INIS)

    Peluso, E.; Gelfusa, M.; Lungaroni, M.; Talebzadeh, S.; Gaudio, P.; Murari, A.; Contributors, JET

    2016-01-01

    Prediction is one of the main objectives of scientific analysis and it refers to both modelling and forecasting. The determination of the limits of predictability is an important issue of both theoretical and practical relevance. In the case of modelling time series, reached a certain level in performance in either modelling or prediction, it is often important to assess whether all the information available in the data has been exploited or whether there are still margins for improvement of the tools being developed. In this paper, an information theoretic approach is proposed to address this issue and quantify the quality of the models and/or predictions. The excellent properties of the proposed indicator have been proved with the help of a systematic series of numerical tests and a concrete example of extreme relevance for nuclear fusion.

  6. One or two serological assay testing strategy for diagnosis of HBV and HCV infection? The use of predictive modelling.

    Science.gov (United States)

    Parry, John V; Easterbrook, Philippa; Sands, Anita R

    2017-11-01

    Initial serological testing for chronic hepatitis B virus (HBV) and hepatitis C virus (HCV) infection is conducted using either rapid diagnostic tests (RDT) or laboratory-based enzyme immunoassays (EIAs) for detection of hepatitis B surface antigen (HBsAg) or antibodies to HCV (anti-HCV), typically on serum or plasma specimens and, for certain RDTs, capillary whole blood. WHO recommends the use of standardized testing strategies - defined as a sequence of one or more assays to maximize testing accuracy while simplifying the testing process and ideally minimizing cost. Our objective was to examine the diagnostic outcomes of a one- versus two-assay serological testing strategy. These data were used to inform recommendations in the 2017 WHO Guidelines on hepatitis B and C testing. Few published studies have compared diagnostic outcomes for one-assay versus two-assay serological testing strategies for HBsAg and anti-HCV. Therefore, the principles of Bayesian statistics were used to conduct a modelling exercise to examine the outcomes of a one-assay versus two-assay testing strategy when applied to a hypothetical population of 10,000 individuals. The resulting model examined the diagnostic outcomes (true and false positive diagnoses; true and false negative diagnoses; positive and negative predictive values as a function of prevalence; and total tests required) for both one-assay and two-assay testing strategies. The performance characteristics assumed for assays used within the testing strategies were informed by WHO prequalification assessment findings and systematic reviews for diagnostic accuracy studies. Each of the presumptive testing strategies (one-assay or two-assay) was modelled at varying prevalences of HBsAg (10%, 2% and 0.4%) and of anti-HCV (40%, 10%, 2% and 0.4%), aimed at representing the range of testing populations typically encountered in WHO Member States. When the two-assay testing strategy was considered, the model assumed the independence of the
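    The one- versus two-assay comparison described in this record can be sketched with a small Bayes-rule calculation on a hypothetical population of 10,000. The sensitivities and specificities below are illustrative assumptions (the study used WHO prequalification data), and the two assays' errors are treated as independent:

```python
def outcomes(n, prev, sens, spec):
    """True/false positives and negatives for one assay applied to n people."""
    diseased = n * prev
    tp = diseased * sens
    fn = diseased - tp
    fp = (n - diseased) * (1 - spec)
    tn = (n - diseased) - fp
    return tp, fp, fn, tn

n, prev = 10000, 0.004          # hypothetical population, anti-HCV prevalence 0.4%
sens1 = spec1 = 0.99            # assumed performance of assay 1 (illustrative)
sens2 = spec2 = 0.99            # assumed performance of assay 2 (illustrative)

# one-assay strategy: report assay-1 reactives as positive
tp1, fp1, fn1, tn1 = outcomes(n, prev, sens1, spec1)
ppv_one = tp1 / (tp1 + fp1)

# two-assay strategy: retest assay-1 reactives; positive only if both react
tp2 = tp1 * sens2               # true positives that also react on assay 2
fp2 = fp1 * (1 - spec2)         # false positives that also react on assay 2
ppv_two = tp2 / (tp2 + fp2)
```

    At 0.4% prevalence the single-assay positive predictive value falls below 30% even with a 99%-specific assay, while the two-assay strategy raises it above 95%, illustrating why low-prevalence testing populations favour a confirmatory second assay.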

  7. Test-Retest Reliability and Predictive Validity of the Implicit Association Test in Children

    Science.gov (United States)

    Rae, James R.; Olson, Kristina R.

    2018-01-01

    The Implicit Association Test (IAT) is increasingly used in developmental research despite minimal evidence of whether children's IAT scores are reliable across time or predictive of behavior. When test-retest reliability and predictive validity have been assessed, the results have been mixed, and because these studies have differed on many…

  8. Comparison of Critical Flow Models' Evaluations for SBLOCA Tests

    International Nuclear Information System (INIS)

    Kim, Yeon Sik; Park, Hyun Sik; Cho, Seok

    2016-01-01

    A comparison of critical flow models between the Trapp-Ransom and Henry-Fauske models for all SBLOCA (small break loss of coolant accident) scenarios of the ATLAS (Advanced thermal-hydraulic test loop for accident simulation) facility was performed using the MARS-KS code. For the comparison of the two critical models, the accumulated break mass was selected as the main parameter for the comparison between the analyses and tests. Four cases showed the same respective discharge coefficients between the two critical models, e.g., 6' CL (cold leg) break and 25%, 50%, and 100% DVI (direct vessel injection) breaks. In the case of the 4' CL break, no reasonable results were obtained with any possible Cd values. In addition, typical system behaviors, e.g., PZR (pressurizer) pressure and collapsed core water level, were also compared between the two critical models. From the comparison between the two critical models for the CL breaks, the Trapp-Ransom model predicted quite well with respect to the other model for the smallest and larger breaks, e.g., 2', 6', and 8.5' CL breaks. In addition, from the comparison between the two critical models for the DVI breaks, the Trapp-Ransom model predicted quite well with respect to the other model for the smallest and larger breaks, e.g., 5%, 50%, and 100% DVI breaks. In the case of the 50% and 100% breaks, the two critical models predicted the test data quite well.

  9. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    Science.gov (United States)

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
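    The Kennard-Stone rational division method named in this record can be sketched as a farthest-point selection in descriptor space (a minimal version; production implementations add tie-breaking and efficiency refinements):

```python
import numpy as np

def kennard_stone(X, k):
    """Select k training samples that span descriptor space (Kennard-Stone)."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise distances
    i, j = np.unravel_index(np.argmax(d), d.shape)         # most distant pair
    selected = [int(i), int(j)]
    while len(selected) < k:
        remaining = [p for p in range(len(X)) if p not in selected]
        # farthest-point criterion: maximize the minimum distance
        # to the already-selected training samples
        nxt = max(remaining, key=lambda p: d[p, selected].min())
        selected.append(nxt)
    return selected

# toy descriptor matrix: two tight clusters plus two outlying points
X = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [10.0, 0.0], [2.5, 2.5]]
train_idx = kennard_stone(X, 3)
```

    The two most distant samples seed the training set and each subsequent pick is maximally far from everything already chosen, so the training set covers the descriptor space while near-duplicates are left for the test set.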

  10. Building and verifying a severity prediction model of acute pancreatitis (AP) based on BISAP, MEWS and routine test indexes.

    Science.gov (United States)

    Ye, Jiang-Feng; Zhao, Yu-Xin; Ju, Jian; Wang, Wei

    2017-10-01

    To discuss the value of the Bedside Index for Severity in Acute Pancreatitis (BISAP), Modified Early Warning Score (MEWS), serum Ca2+, similarly hereinafter, and red cell distribution width (RDW) for predicting the severity grade of acute pancreatitis and to develop and verify a more accurate scoring system to predict the severity of AP. In 302 patients with AP, we calculated BISAP and MEWS scores and conducted regression analyses on the relationships of BISAP scoring, RDW, MEWS, and serum Ca2+ with the severity of AP using single-factor logistics. The variables with statistical significance in the single-factor logistic regression were used in a multi-factor logistic regression model; forward stepwise regression was used to screen variables and build a multi-factor prediction model. A receiver operating characteristic curve (ROC curve) was constructed, and the significance of multi- and single-factor prediction models in predicting the severity of AP using the area under the ROC curve (AUC) was evaluated. The internal validity of the model was verified through bootstrapping. Among 302 patients with AP, 209 had mild acute pancreatitis (MAP) and 93 had severe acute pancreatitis (SAP). According to single-factor logistic regression analysis, we found that BISAP, MEWS and serum Ca2+ are prediction indexes of the severity of AP (P-value < 0.05). The multi-factor logistic regression analysis showed that BISAP and serum Ca2+ are independent prediction indexes of AP severity (P-value < 0.05); BISAP is negatively related to serum Ca2+ (r = -0.330, P-value < 0.05). The prediction model is as follows: ln(p/(1-p)) = 7.306 + 1.151*BISAP - 4.516*serum Ca2+. The predictive ability of each model for SAP follows the order of the combined BISAP and serum Ca2+ prediction model > serum Ca2+ > BISAP. There is no statistical significance for the predictive ability of BISAP and serum Ca2+ (P-value > 0.05); however, there is remarkable statistical significance for the predictive ability using the newly built prediction model as well as BISAP
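    The fitted model reported in this abstract can be turned into a predicted probability of severe AP. The coefficients (7.306, 1.151, -4.516) are those given in the abstract; the logit link and serum Ca2+ units (mmol/L) are assumed here, since the published formula's bracket contents were garbled in extraction:

```python
import math

def sap_probability(bisap, serum_ca):
    """Predicted probability of severe acute pancreatitis from the abstract's
    fitted model ln(p/(1-p)) = 7.306 + 1.151*BISAP - 4.516*Ca.
    The logit link and Ca units (mmol/L) are assumptions."""
    logit = 7.306 + 1.151 * bisap - 4.516 * serum_ca
    return 1.0 / (1.0 + math.exp(-logit))
```

    As the signs of the coefficients imply, the predicted risk rises with the BISAP score and falls as serum Ca2+ increases, consistent with the reported negative correlation between the two predictors.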

  11. Cross-Validation of Aerobic Capacity Prediction Models in Adolescents.

    Science.gov (United States)

    Burns, Ryan Donald; Hannon, James C; Brusseau, Timothy A; Eisenman, Patricia A; Saint-Maurice, Pedro F; Welk, Greg J; Mahar, Matthew T

    2015-08-01

    Cardiorespiratory endurance is a component of health-related fitness. FITNESSGRAM recommends the Progressive Aerobic Cardiovascular Endurance Run (PACER) or One mile Run/Walk (1MRW) to assess cardiorespiratory endurance by estimating VO2 Peak. No research has cross-validated prediction models from both PACER and 1MRW, including the New PACER Model and PACER-Mile Equivalent (PACER-MEQ) using current standards. The purpose of this study was to cross-validate prediction models from PACER and 1MRW against measured VO2 Peak in adolescents. Cardiorespiratory endurance data were collected on 90 adolescents aged 13-16 years (Mean = 14.7 ± 1.3 years; 32 girls, 52 boys) who completed the PACER and 1MRW in addition to a laboratory maximal treadmill test to measure VO2 Peak. Multiple correlations among various models with measured VO2 Peak were considered moderately strong (R = 0.74-0.78), and prediction error (RMSE) ranged from 5.95 to 8.27 ml·kg⁻¹·min⁻¹. Criterion-referenced agreement into FITNESSGRAM's Healthy Fitness Zones was considered fair-to-good among models (Kappa = 0.31-0.62; Agreement = 75.5-89.9%; F = 0.08-0.65). In conclusion, prediction models demonstrated moderately strong linear relationships with measured VO2 Peak, fair prediction error, and fair-to-good criterion-referenced agreement with measured VO2 Peak into FITNESSGRAM's Healthy Fitness Zones.
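    The two headline statistics used in this record, prediction error (RMSE) and chance-corrected criterion-referenced agreement (Cohen's kappa), can be sketched in a few lines (the toy inputs below are illustrative, not the study's data):

```python
import math

def rmse(predicted, measured):
    """Root-mean-square prediction error."""
    n = len(predicted)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, measured)) / n)

def cohens_kappa(x, y):
    """Chance-corrected agreement between two binary classifications,
    e.g. in/out of a Healthy Fitness Zone by predicted vs measured VO2 Peak."""
    n = len(x)
    po = sum(a == b for a, b in zip(x, y)) / n          # observed agreement
    px, py = sum(x) / n, sum(y) / n                     # marginal "in-zone" rates
    pe = px * py + (1 - px) * (1 - py)                  # agreement expected by chance
    return (po - pe) / (1 - pe)
```

    Kappa discounts the agreement two classifications would show by chance alone, which is why a model can post high raw percent agreement yet only fair-to-good kappa, as in the ranges this study reports.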

  12. Performance prediction of a proton exchange membrane fuel cell using the ANFIS model

    Energy Technology Data Exchange (ETDEWEB)

    Vural, Yasemin; Ingham, Derek B.; Pourkashanian, Mohamed [Centre for Computational Fluid Dynamics, University of Leeds, Houldsworth Building, LS2 9JT Leeds (United Kingdom)

    2009-11-15

    In this study, the performance (current-voltage curve) prediction of a Proton Exchange Membrane Fuel Cell (PEMFC) is performed for different operational conditions using an Adaptive Neuro-Fuzzy Inference System (ANFIS). First, ANFIS is trained with a set of input and output data. The trained model is then tested with an independent set of experimental data. The trained and tested model is then used to predict the performance curve of the PEMFC under various operational conditions. The model shows very good agreement with the experimental data and this indicates that ANFIS is capable of predicting fuel cell performance (in terms of cell voltage) with a high accuracy in an easy, rapid and cost effective way for the case presented. Finally, the capabilities and the limitations of the model for the application in fuel cells have been discussed. (author)

  13. A wave model test bed study for wave energy resource characterization

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Zhaoqing; Neary, Vincent S.; Wang, Taiping; Gunawan, Budi; Dallman, Annie R.; Wu, Wei-Cheng

    2017-12-01

    This paper presents a test bed study conducted to evaluate best practices in wave modeling to characterize energy resources. The model test bed off the central Oregon Coast was selected because of the high wave energy and available measured data at the site. Two third-generation spectral wave models, SWAN and WWIII, were evaluated. A four-level nested-grid approach—from global to test bed scale—was employed. Model skills were assessed using a set of model performance metrics based on comparing six simulated wave resource parameters to observations from a wave buoy inside the test bed. Both WWIII and SWAN performed well at the test bed site and exhibited similar modeling skills. The ST4 package with WWIII, which represents better physics for wave growth and dissipation, out-performed ST2 physics and improved wave power density and significant wave height predictions. However, ST4 physics tended to overpredict the wave energy period. The newly developed ST6 physics did not improve the overall model skill for predicting the six wave resource parameters. Sensitivity analysis using different wave frequencies and direction resolutions indicated the model results were not sensitive to spectral resolutions at the test bed site, likely due to the absence of complex bathymetric and geometric features.

  14. Machine learning modelling for predicting soil liquefaction susceptibility

    Directory of Open Access Journals (Sweden)

    P. Samui

    2011-01-01

    Full Text Available This study describes two machine learning techniques applied to predict the liquefaction susceptibility of soil based on standard penetration test (SPT) data from the 1999 Chi-Chi, Taiwan earthquake. The first technique is an Artificial Neural Network (ANN) based on multi-layer perceptrons (MLP) trained with the Levenberg-Marquardt backpropagation algorithm. The second is the Support Vector Machine (SVM), a classification technique firmly grounded in statistical learning theory. The ANN and SVM models have been developed to predict liquefaction susceptibility using the corrected SPT blow count [(N1)60] and cyclic stress ratio (CSR). Further, an attempt has been made to simplify the models to require only two parameters, (N1)60 and peak ground acceleration (amax/g), for the prediction of liquefaction susceptibility. The developed ANN and SVM models have also been applied to different case histories available globally. The paper also highlights the capability of the SVM over the ANN models.
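    The simplified two-parameter idea above — classifying liquefaction susceptibility from (N1)60 together with a loading measure — can be illustrated with a toy classifier. This sketch uses a plain 1-nearest-neighbour rule on invented case records, not the paper's ANN or SVM, and the values are not from the Chi-Chi dataset:

```python
# Hypothetical sketch: binary liquefaction classification from (N1)60 and CSR
# using a 1-nearest-neighbour rule. All case records below are invented.
import math

# (corrected SPT blow count (N1)60, cyclic stress ratio CSR, liquefied? 1/0)
cases = [
    (5, 0.30, 1), (8, 0.25, 1), (12, 0.28, 1),
    (25, 0.15, 0), (30, 0.20, 0), (22, 0.10, 0),
]

def predict(n160, csr):
    # scale both features to roughly comparable ranges before measuring distance
    def dist(c):
        return math.hypot((c[0] - n160) / 30.0, (c[1] - csr) / 0.3)
    return min(cases, key=dist)[2]

print(predict(10, 0.27))  # near the liquefied cases -> 1
print(predict(28, 0.12))  # near the non-liquefied cases -> 0
```

    A real model would replace the nearest-neighbour rule with the trained ANN or SVM decision function, but the two-parameter input interface is the same.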

  15. Optimal model-free prediction from multivariate time series

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V.; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors, which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here, a prediction scheme that overcomes this strong limitation is introduced, utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers, making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework for applying the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of the El Niño Southern Oscillation.
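    The combinatorial argument above is easy to quantify: exhaustively testing all predictor subsets grows as 2^p, which a preselection step collapses to a manageable search. A back-of-envelope illustration with assumed predictor counts (not taken from the paper):

```python
# Illustrative only: cost of an exhaustive subset search vs. the search after
# a causal preselection step (both predictor counts are assumptions).
p_all, p_causal = 30, 6  # all candidate predictors vs. preselected causal drivers
print(f"subsets of all {p_all} predictors: {2 ** p_all:,}")       # ~1.07 billion
print(f"subsets after preselection to {p_causal}: {2 ** p_causal:,}")  # 64
```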

  16. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters have been set manually, which cannot guarantee the model's performance. In this paper, an SVM method based on an improved particle swarm optimization algorithm (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performance. The experimental results show that, among the three tested algorithms, the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
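    The two evaluation metrics named above have standard definitions; a minimal sketch with hypothetical error data:

```python
# Sketch of the two metrics used to score the prediction models: root mean
# squared error (RMSE) and mean absolute percentage error (MAPE).
# The actual/predicted values below are hypothetical.
import math

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

actual    = [1.00, 1.10, 0.95, 1.20]
predicted = [1.02, 1.08, 0.99, 1.15]
print(f"RMSE = {rmse(actual, predicted):.4f}")   # RMSE = 0.0350
print(f"MAPE = {mape(actual, predicted):.2f}%")  # MAPE = 3.05%
```

    RMSE penalizes large errors quadratically, while MAPE normalizes each error by the measured value, so the two can rank models differently.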

  17. Cloud-based Predictive Modeling System and its Application to Asthma Readmission Prediction

    Science.gov (United States)

    Chen, Robert; Su, Hang; Khalilia, Mohammed; Lin, Sizhe; Peng, Yue; Davis, Tod; Hirsh, Daniel A; Searles, Elizabeth; Tejedor-Sojo, Javier; Thompson, Michael; Sun, Jimeng

    2015-01-01

    The predictive modeling process is time consuming and requires clinical researchers to handle complex electronic health record (EHR) data in restricted computational environments. To address this problem, we implemented a cloud-based predictive modeling system via a hybrid setup combining a secure private server with the Amazon Web Services (AWS) Elastic MapReduce platform. EHR data are preprocessed on a private server and the resulting de-identified event sequences are hosted on AWS. Based on user-specified modeling configurations, an on-demand web service launches a cluster of Elastic Compute 2 (EC2) instances on AWS to perform feature selection and classification algorithms in a distributed fashion. Afterwards, the secure private server aggregates results and displays them via interactive visualization. We tested the system on a pediatric asthma readmission task with a de-identified EHR dataset of 2,967 patients. We also conducted a larger-scale experiment on the CMS Linkable 2008–2010 Medicare Data Entrepreneurs’ Synthetic Public Use File dataset of 2 million patients, which achieved an over 25-fold speedup compared to sequential execution. PMID:26958172

  18. Memory Binding Test Predicts Incident Amnestic Mild Cognitive Impairment.

    Science.gov (United States)

    Mowrey, Wenzhu B; Lipton, Richard B; Katz, Mindy J; Ramratan, Wendy S; Loewenstein, David A; Zimmerman, Molly E; Buschke, Herman

    2016-07-14

    The Memory Binding Test (MBT), previously known as Memory Capacity Test, has demonstrated discriminative validity for distinguishing persons with amnestic mild cognitive impairment (aMCI) and dementia from cognitively normal elderly. We aimed to assess the predictive validity of the MBT for incident aMCI. In a longitudinal, community-based study of adults aged 70+, we administered the MBT to 246 cognitively normal elderly adults at baseline and followed them annually. Based on previous work, a subtle reduction in memory binding at baseline was defined by a Total Items in the Paired (TIP) condition score of ≤22 on the MBT. Cox proportional hazards models were used to assess the predictive validity of the MBT for incident aMCI accounting for the effects of covariates. The hazard ratio of incident aMCI was also assessed for different prediction time windows ranging from 4 to 7 years of follow-up, separately. Among 246 controls who were cognitively normal at baseline, 48 developed incident aMCI during follow-up. A baseline MBT reduction was associated with an increased risk for developing incident aMCI (hazard ratio (HR) = 2.44, 95% confidence interval: 1.30-4.56, p = 0.005). When varying the prediction window from 4-7 years, the MBT reduction remained significant for predicting incident aMCI (HR range: 2.33-3.12, p: 0.0007-0.04). Persons with poor performance on the MBT are at significantly greater risk for developing incident aMCI. High hazard ratios up to seven years of follow-up suggest that the MBT is sensitive to early disease.

  19. HPV-testing versus HPV-cytology co-testing to predict the outcome after conization.

    Science.gov (United States)

    Bruhn, Laerke Valsøe; Andersen, Sisse Josephine; Hariri, Jalil

    2018-06-01

    The purpose of this study was to determine the feasibility of human papillomavirus (HPV) testing alone as a prognostic tool to predict recurrent disease within a three-year follow-up period after treatment for cervical intraepithelial neoplasia (CIN)2+. Retrospectively, 128 women with histologically verified CIN2+ who had a conization performed at Southern Jutland Hospital in Denmark between 1 January 2013 and 31 December 2013 were included. Histology, cytology and HPV test results were obtained for a three-year follow-up period. Recurrent disease developed in 4.7% (6/128) of the cases during follow-up. Among cases without free margins, recurrent dysplasia was detected in 10.4% (5/48), whereas in the group with free margins it was 1.3% (1/80). The post-conization HPV test was negative in 67.2% (86/128) and the Pap smear was normal in 93.7% (120/128). Combining resection margins, cytology and HPV had a sensitivity for prediction of recurrent dysplasia of 100%, a specificity of 45.8%, a positive predictive value (PPV) of 8.5% and a negative predictive value (NPV) of 100%. Using the HPV test alone as a predictor of recurrent dysplasia gave a sensitivity of 83.3%, specificity of 69.7%, PPV of 11.9% and NPV of 98.8%. Combining resection margin and HPV test had a sensitivity of 100%, specificity of 45.9%, PPV of 8.3% and NPV of 100%. An HPV test at the six-month post-conization control gave an NPV of 98.8% and can be used as a solitary test to identify women at risk of recurrent disease three years after treatment for precursor lesions. Using both resection margin and HPV test had a sensitivity of 100% and an NPV of 100%. Adding cytology did not increase the predictive value. © 2018 Nordic Federation of Societies of Obstetrics and Gynecology.
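    The screening statistics quoted above all derive from a 2x2 table of test result versus recurrence. A sketch in Python; the counts below are back-calculated by us from the reported HPV-alone percentages (83.3% sensitivity on 6 recurrences, 69.7% specificity on 122 non-recurrences) and are an assumption, not figures taken from the paper's tables:

```python
# Sketch of the four screening statistics from a 2x2 confusion matrix.
# Counts are our back-calculation from the abstract's HPV-alone percentages.

def screening_stats(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # recurrences the test catches
        "specificity": tn / (tn + fp),  # non-recurrences correctly cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

stats = screening_stats(tp=5, fp=37, fn=1, tn=85)
for name, value in stats.items():
    print(f"{name}: {value:.1%}")
# sensitivity: 83.3%, specificity: 69.7%, ppv: 11.9%, npv: 98.8%
```

    The high NPV with modest PPV is typical of a rule-out test: a negative result is reassuring even though most positives are false alarms when recurrence is rare.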

  20. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have introduced new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care are likewise influenced by new technologies for predicting different disease outcomes. However, existing predictive models still suffer from limitations in their predictive performance. To improve predictive performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate the model, this paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impact on human life. The proposed model improves the predictive performance for TBI. The TBI data set was developed and approved by neurologists to set its features. The experimental results show that the proposed model achieved significant results in terms of accuracy, sensitivity, and specificity.

  1. An updated PREDICT breast cancer prognostication and treatment benefit prediction model with independent validation.

    Science.gov (United States)

    Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P

    2017-05-22

    PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status, resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival time in order to take into account age at diagnosis and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear, with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) was good and comparable to that of the previous version in both model development and validation data sets. 
However, the calibration of v2 improved over v1 in patients diagnosed under the age

  2. Babcock and Wilcox model for predicting in-reactor densification

    International Nuclear Information System (INIS)

    Buescher, B.J.; Pegram, J.W.

    1975-06-01

    The B and W fuel densification model is used to describe the extent and kinetics of in-reactor densification in B and W production fuel. The model and approach are qualified against an extensive data base available through B and W's participation in the EEI Fuel Densification Program. Out-of-reactor resintering tests on representative pellets from each batch of fuel are used to provide input parameters to the B and W densification model. The B and W densification model predicts in-reactor densification very accurately for pellets operated at heat rates above 5 kW/ft, and with considerable conservatism for pellets operated at heat rates less than 5 kW/ft. This model represents a technically rigorous and conservative basis for predicting the extent and kinetics of in-reactor densification. 9 references. (U.S.)

  3. A risk prediction model for xerostomia: a retrospective cohort study.

    Science.gov (United States)

    Villa, Alessandro; Nordio, Francesco; Gohel, Anita

    2016-12-01

    We investigated the prevalence of xerostomia in dental patients and built a xerostomia risk prediction model by incorporating a wide range of risk factors. Socio-demographic data, past medical history, and self-reported dry mouth and related symptoms were collected retrospectively from January 2010 to September 2013 for all new dental patients. A logistic regression framework was used to build a risk prediction model for xerostomia. External validation was performed using an independent data set to test the prediction power. A total of 12 682 patients were included in this analysis (54.3% females). Xerostomia was reported by 12.2% of patients. The proportion of people reporting xerostomia was higher among those who were taking more medications (OR = 1.11, 95% CI = 1.08-1.13) or were recreational drug users (OR = 1.4, 95% CI = 1.1-1.9). Rheumatic diseases (OR = 2.17, 95% CI = 1.88-2.51), psychiatric diseases (OR = 2.34, 95% CI = 2.05-2.68), eating disorders (OR = 2.28, 95% CI = 1.55-3.36) and radiotherapy (OR = 2.00, 95% CI = 1.43-2.80) were good predictors of xerostomia. In the model performance test, the ROC-AUC was 0.816, and in the external validation sample, the ROC-AUC was 0.799. The xerostomia risk prediction model had high accuracy and discriminated between high- and low-risk individuals. Clinicians could use this model to identify the classes of medications and systemic diseases associated with xerostomia. © 2015 John Wiley & Sons A/S and The Gerodontology Association. Published by John Wiley & Sons Ltd.
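    A logistic regression model like the one above yields per-patient risks by summing log odds ratios into a logit. A minimal sketch; the odds ratios echo those reported in the abstract, but the intercept and the example patient are hypothetical, not the fitted model's:

```python
# Sketch: turning logistic-regression odds ratios into a predicted probability.
# The intercept (-3.0) and the example patient are invented for illustration.
import math

def predicted_risk(intercept, coef_or, values):
    """coef_or: {name: odds ratio}; values: {name: predictor value}."""
    # a coefficient in a logistic model is the natural log of its odds ratio
    logit = intercept + sum(math.log(orr) * values[k] for k, orr in coef_or.items())
    return 1.0 / (1.0 + math.exp(-logit))

odds_ratios = {"n_medications": 1.11, "rheumatic_disease": 2.17, "radiotherapy": 2.00}
patient = {"n_medications": 6, "rheumatic_disease": 1, "radiotherapy": 0}
risk = predicted_risk(intercept=-3.0, coef_or=odds_ratios, values=patient)
print(f"predicted xerostomia risk: {risk:.1%}")  # ~16.8% with these assumptions
```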

  4. Atterberg Limits Prediction Comparing SVM with ANFIS Model

    Directory of Open Access Journals (Sweden)

    Mohammad Murtaza Sherzoy

    2017-03-01

    Full Text Available Support Vector Machine (SVM) and Adaptive Neuro-Fuzzy Inference System (ANFIS) analytical methods are both used to predict the values of the Atterberg limits: the liquid limit, plastic limit and plasticity index. The main objective of this study is to compare the two forecasting methods (SVM & ANFIS). Data from 54 soil samples taken from the area of Peninsular Malaysia were tested for different parameters, including liquid limit, plastic limit, plasticity index and grain size distribution. The input parameters used in this case are the grain size distribution fractions: the percentages of silt, clay and sand. The actual and predicted values of the Atterberg limits obtained from the SVM and ANFIS models are compared using the correlation coefficient (R2) and the root mean squared error (RMSE). The outcome of the study shows that the ANFIS model achieves higher accuracy than the SVM model for the liquid limit (R2 = 0.987), plastic limit (R2 = 0.949) and plasticity index (R2 = 0.966). The RMSE values obtained for both methods show that the ANFIS model performs better than the SVM model in predicting the Atterberg limits as a whole.

  5. Testing projected wild bee distributions in agricultural habitats: predictive power depends on species traits and habitat type.

    Science.gov (United States)

    Marshall, Leon; Carvalheiro, Luísa G; Aguirre-Gutiérrez, Jesús; Bos, Merijn; de Groot, G Arjen; Kleijn, David; Potts, Simon G; Reemer, Menno; Roberts, Stuart; Scheper, Jeroen; Biesmeijer, Jacobus C

    2015-10-01

    Species distribution models (SDM) are increasingly used to understand the factors that regulate variation in biodiversity patterns and to help plan conservation strategies. However, these models are rarely validated with independently collected data, and it is unclear whether SDM performance is maintained across distinct habitats and for species with different functional traits. Highly mobile species, such as bees, can be particularly challenging to model. Here, we use independent sets of occurrence data collected systematically in several agricultural habitats to test how the predictive performance of SDMs for wild bee species depends on species traits, habitat type, and sampling technique. We used a species distribution modeling approach parametrized for the Netherlands, with presence records from 1990 to 2010 for 193 Dutch wild bees. For each species, we built a Maxent model based on 13 climate and landscape variables. We tested the predictive performance of the SDMs with independent datasets collected from orchards and arable fields across the Netherlands from 2010 to 2013, using transect surveys or pan traps. Model predictive performance depended on species traits and habitat type. Occurrence of bee species specialized in habitat and diet was better predicted than that of generalist bees. Predictions of habitat suitability were also more precise for habitats that are temporally more stable (orchards) than for habitats that undergo regular alterations (arable fields), particularly for small, solitary bees. As a conservation tool, SDMs are better suited to modeling rarer, specialist species than more generalist ones, and will work best in long-term stable habitats. The variability of complex, short-term habitats is difficult to capture in such models, and historical land use generally has low thematic resolution. To improve SDMs' usefulness, models require explanatory variables and collection data that include detailed landscape characteristics, for example, variability of crops and

  6. A blind test on the pulse tube refrigerator model (PTRM)

    International Nuclear Information System (INIS)

    Yuan, S.W.K.; Radebaugh, R.

    1996-01-01

    The Stirling Refrigerator Performance Model (SRPM) has been validated extensively against the Lockheed-built Stirling coolers and various units in the literature. This model has been modified to predict the performance of Pulse Tube Coolers (PTCs). It was successfully validated against a Lockheed in-house-built PTC; the results are to be published elsewhere. In this paper, the validation of the PTRM against a NIST (National Institute of Standards and Technology) orifice pulse tube cooler is reported. Dimensions and operating conditions of the PTC were obtained from NIST without prior knowledge of the performance. In other words, this is a 'blind test' of the PTRM with the help of the National Institute of Standards and Technology. Good correlation was found between the test data and the prediction. The PTRM is a generic model that gives accurate performance predictions of pulse tube coolers.

  7. Structural Dynamic Analyses And Test Predictions For Spacecraft Structures With Non-Linearities

    Science.gov (United States)

    Vergniaud, Jean-Baptiste; Soula, Laurent; Newerla, Alfred

    2012-07-01

    The overall objective of the mechanical development and verification process is to ensure that the spacecraft structure is able to sustain the mechanical environments encountered during launch. In general, spacecraft structures are a priori assumed to behave linearly, i.e. the responses to a static load or dynamic excitation, respectively, will increase or decrease proportionally to the amplitude of the load or excitation induced. However, past experience has shown that various non-linearities might exist in spacecraft structures, and the consequences of their dynamic effects can significantly affect the development and verification process. Current processes are mainly adapted to linear spacecraft structure behaviour. No clear rules exist for dealing with major structural non-linearities. They are handled outside the process by individual analysis and margin policy, and by analyses after tests to justify the CLA coverage. Non-linearities can primarily affect the current spacecraft development and verification process in two respects. Prediction of flight loads by launcher/satellite coupled loads analyses (CLA): only linear satellite models are delivered for performing CLA, and no well-established rules exist on how to properly linearize a model when non-linearities are present. The potential impact of the linearization on the results of the CLA has not yet been properly analyzed. It is thus difficult to assess whether CLA results will cover actual flight levels. Management of satellite verification tests: the CLA results generated with a linear satellite FEM are assumed to be flight representative. If internal non-linearities are present in the tested satellite, then it might be difficult to determine which input level must be applied to cover satellite internal loads. The non-linear behaviour can also disturb the shaker control, putting the satellite at risk by potentially imposing excessively high levels. 
This paper presents the results of a test campaign performed in

  8. Radiative Heating in MSL Entry: Comparison of Flight Heating Discrepancy to Ground Test and Predictive Models

    Science.gov (United States)

    Cruden, Brett A.; Brandis, Aaron M.; White, Todd R.; Mahzari, Milad; Bose, Deepak

    2014-01-01

    During the recent entry of the Mars Science Laboratory (MSL), the heat shield was equipped with thermocouple stacks to measure in-depth heating of the thermal protection system (TPS). When only convective heating was considered, the derived heat flux from gauges in the stagnation region was found to be underpredicted by as much as 17 W/sq cm, which is significant compared to the peak heating of 32 W/sq cm. In order to quantify the contribution of radiative heating phenomena to the discrepancy, ground tests and predictive simulations that replicated the MSL entry trajectory were performed. An analysis is carried through to assess the quality of the radiation model and the impact to stagnation line heating. The impact is shown to be significant, but does not fully explain the heating discrepancy.

  9. Probabilistic fatigue life prediction methodology for notched components based on simple smooth fatigue tests

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Z. R.; Li, Z. X. [Dept.of Engineering Mechanics, Jiangsu Key Laboratory of Engineering Mechanics, Southeast University, Nanjing (China); Hu, X. T.; Xin, P. P.; Song, Y. D. [State Key Laboratory of Mechanics and Control of Mechanical Structures, Nanjing University of Aeronautics and Astronautics, Nanjing (China)

    2017-01-15

    A methodology for probabilistic fatigue life prediction of notched components based on smooth specimens is presented. Weakest-link theory incorporating the Walker strain model has been utilized in this approach. The effects of stress ratio and stress gradient have been considered. The Weibull distribution and the median rank estimator are used to describe fatigue statistics. Fatigue tests under different stress ratios were conducted on smooth and notched specimens of the titanium alloy TC-1-1. The proposed procedures were checked against the test data of the TC-1-1 notched specimens. Prediction results at a 50% survival rate all fall within a factor-of-two scatter band of the test results.
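    The median rank estimator mentioned above is commonly computed with Benard's approximation before fitting a Weibull distribution to sorted fatigue lives; a short sketch with hypothetical lives (the formula is the common approximation, not necessarily the exact estimator used in the paper):

```python
# Sketch: assigning failure probabilities to sorted fatigue lives with the
# median rank estimator (Benard's approximation). Lives are hypothetical.

def median_ranks(n):
    """Benard's approximation: F_i = (i - 0.3) / (n + 0.4), i = 1..n."""
    return [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]

lives = sorted([1.2e5, 0.8e5, 2.1e5, 1.6e5, 0.9e5])  # cycles to failure
for life, f in zip(lives, median_ranks(len(lives))):
    print(f"{life:9.0f} cycles -> F = {f:.3f}")
```

    The (life, F) pairs would then be fitted on Weibull probability paper (or by regression of ln(-ln(1-F)) on ln(life)) to estimate the shape and scale parameters.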

  10. Development of estrogen receptor beta binding prediction model using large sets of chemicals.

    Science.gov (United States)

    Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao

    2017-11-03

    We developed an ERβ binding prediction model to facilitate identification of chemicals that specifically bind ERβ or ERα, together with our previously developed ERα binding model. Decision Forest was used to train the ERβ binding prediction model based on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross-validations. Prediction confidence was analyzed using predictions from the cross-validations. Informative chemical features for ERβ binding were identified through analysis of the frequency data of chemical descriptors used in the models in the 5-fold cross-validations. 1000 permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross-validations was 93.14% with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing results revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrated that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model can be expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
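    The evaluation protocol above (repeated iterations of 5-fold cross-validation) can be sketched independently of the Decision Forest classifier; here a toy scorer stands in for training and scoring on each fold, and the dataset size and iteration count are arbitrary:

```python
# Sketch of repeated 5-fold cross-validation: shuffle, split into 5 disjoint
# folds, score each held-out fold, and average over all iterations.
# The constant "scorer" below is a stand-in for a real train/score step.
import random

def five_fold_indices(n, rng):
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[k::5] for k in range(5)]  # 5 disjoint folds covering all n items

def cross_validate(n, iterations, score_fold, seed=0):
    rng = random.Random(seed)
    scores = []
    for _ in range(iterations):
        for test_fold in five_fold_indices(n, rng):
            scores.append(score_fold(test_fold))
    return sum(scores) / len(scores)

# toy scorer returning a fixed accuracy, in place of training a classifier
mean_acc = cross_validate(n=100, iterations=10, score_fold=lambda fold: 0.93)
print(f"mean accuracy over 10 x 5 folds: {mean_acc:.2%}")
```

    Repeating the 5-fold split many times, as the study does, averages out the randomness of any single partition and yields a standard deviation for the accuracy estimate.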

  11. Predictive model for convective flows induced by surface reactivity contrast

    Science.gov (United States)

    Davidson, Scott M.; Lammertink, Rob G. H.; Mani, Ali

    2018-05-01

    Concentration gradients in a fluid adjacent to a reactive surface due to contrast in surface reactivity generate convective flows. These flows result from contributions by electro- and diffusio-osmotic phenomena. In this study, we have analyzed reactive patterns that release and consume protons, analogous to bimetallic catalytic conversion of peroxide. Similar systems have typically been studied using either scaling analysis to predict trends or costly numerical simulation. Here, we present a simple analytical model, bridging the gap in quantitative understanding between scaling relations and simulations, to predict the induced potentials and consequent velocities in such systems without the use of any fitting parameters. Our model is tested against direct numerical solutions to the coupled Poisson, Nernst-Planck, and Stokes equations. Predicted slip velocities from the model and simulations agree to within a factor of ≈2 over a multiple order-of-magnitude change in the input parameters. Our analysis can be used to predict enhancement of mass transport and the resulting impact on overall catalytic conversion, and is also applicable to predicting the speed of catalytic nanomotors.

  12. Ultra-Short-Term Wind Power Prediction Using a Hybrid Model

    Science.gov (United States)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    This paper aims to develop and apply a hybrid model of two data analytical methods, multiple linear regression and least squares (MLR&LS), for ultra-short-term wind power prediction (WPP), taking Northeast China electricity demand as an example. The data were obtained from the historical records of wind power from an offshore region and from a wind farm of the wind power plant in the areas. The WPP is achieved in two stages: first, the ratios of wind power are forecasted using the proposed hybrid method, and then these ratios are transformed to obtain the forecasted wind power values. The hybrid model combines the persistence method, MLR and LS. The proposed method includes two prediction types, multi-point prediction and single-point prediction. WPP is tested by applying different models such as the autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA) and artificial neural network (ANN). By comparing the results of the above models, the validity of the proposed hybrid model is confirmed in terms of error and correlation coefficient. Comparison of the results confirmed that the proposed method works effectively. Additionally, forecasting errors were computed and compared to improve understanding of how to depict highly variable WPP and the correlations between actual and predicted wind power.
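    The least squares ingredient of the hybrid MLR&LS model can be written out for a single predictor; the speed-ratio/power-ratio pairs below are hypothetical, not the paper's data:

```python
# Sketch of an ordinary least squares fit, the "LS" ingredient of the hybrid
# model, for one predictor. Data pairs are invented for illustration.

def least_squares(xs, ys):
    """Return (slope, intercept) minimizing the sum of squared errors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# hypothetical wind-speed ratio vs. power ratio pairs (a perfect line here)
xs = [0.2, 0.4, 0.6, 0.8, 1.0]
ys = [0.1, 0.3, 0.5, 0.7, 0.9]
slope, intercept = least_squares(xs, ys)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")  # slope = 1.00, intercept = -0.10
```

    In the full MLR&LS scheme this fit would be multivariate and applied to wind power ratios rather than a single toy predictor.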

  13. Test-Driven, Model-Based Systems Engineering

    DEFF Research Database (Denmark)

    Munck, Allan

    Hearing systems have evolved over many years from simple mechanical devices (horns) to electronic units consisting of microphones, amplifiers, analog filters, loudspeakers, batteries, etc. Digital signal processors replaced analog filters to provide better performance and new features. Central....... This thesis concerns methods for identifying, selecting and implementing tools for various aspects of model-based systems engineering. A comprehensive method was proposed that includes several novel steps, such as techniques for analyzing the gap between requirements and tool capabilities. The method...... was verified with good results in two case studies for selection of a traceability tool (single-tool scenario) and a set of modeling tools (multi-tool scenarios). Models must be subjected to testing to allow engineers to predict functionality and performance of systems. Test-first strategies are known...

  14. Numerical modeling of the Near Surface Test Facility No. 1 and No. 2 heater tests

    International Nuclear Information System (INIS)

    Hocking, G.; Williams, J.; Boonlualohr, P.; Mathews, I.; Mustoe, G.

    1981-01-01

    Thermomechanical predictive calculations have been undertaken for two full scale heater tests No. 1 and No. 2 at the Near Surface Test Facility (NSTF) at Hanford, Washington. Numerical predictions were made of the basaltic rock response involving temperatures, displacements, strains and stresses due to energizing the electrical heaters. The basalt rock mass was modeled as an isotropic thermal material but with temperature dependent thermal conductivity, specific heat and thermal expansion. The fractured nature of the basalt necessitated that it be modeled as a cross anisotropic medium with a bi-linear locking stress strain relationship. The cross-anisotropic idealization was selected after characterization studies indicated that a vertical columnar structure persisted throughout the test area and no major throughgoing discontinuities were present. The deformational properties were determined from fracture frequency and orientation, joint deformational data, Goodman Jack results and two rock mass classification schemes. Similar deformational moduli were determined from these techniques, except for the Goodman Jack results. The finite element technique was utilized for both the non-linear thermal and mechanical computations. An incremental stiffness method with residual force correction was employed to solve the non-linear problem by piecewise linearization. Two and three dimensional thermomechanical scoping calculations were made to assess the significance of various parameters and associated errors with geometrical idealizations. Both heater tests were modeled as two dimensional axisymmetric geometry with water assumed to be absent. Instrument response was predicted for all of the thermocouples, extensometers, USBM borehole deformation and IRAD gages for the entire duration of both tests

  15. Prediction of rodent carcinogenic potential of naturally occurring chemicals in the human diet using high-throughput QSAR predictive modeling

    International Nuclear Information System (INIS)

    Valerio, Luis G.; Arvidson, Kirk B.; Chanderbhan, Ronald F.; Contrera, Joseph F.

    2007-01-01

    Consistent with the U.S. Food and Drug Administration (FDA) Critical Path Initiative, predictive toxicology software programs employing quantitative structure-activity relationship (QSAR) models are currently under evaluation for regulatory risk assessment and scientific decision support for highly sensitive endpoints such as carcinogenicity, mutagenicity and reproductive toxicity. At the FDA's Center for Food Safety and Applied Nutrition's Office of Food Additive Safety and the Center for Drug Evaluation and Research's Informatics and Computational Safety Analysis Staff (ICSAS), the use of computational SAR tools for both qualitative and quantitative risk assessment applications is being developed and evaluated. One tool of current interest is MDL-QSAR predictive discriminant analysis modeling of rodent carcinogenicity, which has previously been evaluated for pharmaceutical applications by the FDA ICSAS. The study described in this paper aims to evaluate the utility of this software to estimate the carcinogenic potential of small, organic, naturally occurring chemicals found in the human diet. In addition, a group of 19 known synthetic dietary constituents that were positive in rodent carcinogenicity studies served as a control group. In the test group of naturally occurring chemicals, 101 were found to be suitable for predictive modeling using this software's discriminant analysis modeling approach. Predictions performed on these compounds were compared to published experimental evidence of each compound's carcinogenic potential. Experimental evidence included relevant toxicological studies such as rodent cancer bioassays, rodent anti-carcinogenicity studies, genotoxicity studies, and the presence of chemical structural alerts. Statistical indices of predictive performance were calculated to assess the utility of the predictive modeling method. Results revealed good predictive performance using this software's rodent carcinogenicity module of over 1200 chemicals.

  16. A Validation of Subchannel Based CHF Prediction Model for Rod Bundles

    International Nuclear Information System (INIS)

    Hwang, Dae-Hyun; Kim, Seong-Jin

    2015-01-01

    A large CHF data base was procured from various sources, which included square and non-square lattice test bundles. CHF prediction accuracy was evaluated for various models, including the CHF look-up table method, empirical correlations, and phenomenological DNB models. The parametric effects of mass velocity and unheated walls were investigated from the experimental results and incorporated into the development of a local-parameter CHF correlation applicable to APWR conditions. According to the CHF design criterion, CHF should not occur at the hottest rod in the reactor core during normal operation and anticipated operational occurrences with at least a 95% probability at a 95% confidence level. This is accomplished by assuring that the minimum DNBR (Departure from Nucleate Boiling Ratio) in the reactor core is greater than the limit DNBR, which accounts for the accuracy of the CHF prediction model. The limit DNBR can be determined from the inverse of the lower tolerance limit of M/P, evaluated from the measured-to-predicted CHF ratios for the relevant CHF data base. It is important to evaluate the adequacy of a CHF prediction model for application to actual reactor core conditions. Validation of a CHF prediction model provides the degree of accuracy inferred from the comparison of solution and data. To achieve the required accuracy for the CHF prediction model, it may be necessary to calibrate the model parameters by employing the validation results. If the accuracy of the model is acceptable, it is then applied to the real complex system with the inferred accuracy of the model. In the conventional approach, the accuracy of a CHF prediction model was evaluated from the M/P statistics for the relevant CHF data base, obtained by comparing the nominal values of the predicted and measured CHFs. The experimental uncertainty of the CHF data was not considered in this approach to determine the limit DNBR. When a subchannel based CHF prediction model
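    The limit-DNBR construction described above can be illustrated with a one-sided lower tolerance limit on M/P under a normality assumption. The k-factor below is the tabulated Owen one-sided 95/95 value for n = 50; the M/P sample is synthetic, and real licensing statistics treat non-normality and data pooling far more carefully.

```python
import numpy as np

def limit_dnbr(m_over_p, k_9595):
    """Limit DNBR = 1 / (one-sided 95/95 lower tolerance limit of M/P),
    assuming the M/P ratios are normally distributed."""
    mean = np.mean(m_over_p)
    std = np.std(m_over_p, ddof=1)
    lower_tl = mean - k_9595 * std   # 95% of population above this, at 95% confidence
    return 1.0 / lower_tl

# synthetic M/P sample; k = 2.065 is the tabulated one-sided 95/95 factor for n = 50
rng = np.random.default_rng(0)
mp = rng.normal(1.0, 0.05, size=50)
print(round(limit_dnbr(mp, k_9595=2.065), 3))
```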

  17. Atmospheric resuspension of radionuclides. Model testing using Chernobyl data

    International Nuclear Information System (INIS)

    Garger, E.; Lev, T.; Talerko, N.; Galeriu, D.; Garland, J.; Hoffman, O.; Nair, S.; Thiessen, K.; Miller, C.; Mueller, H.; Kryshev, A.

    1996-10-01

    Resuspension can be an important secondary source of contamination after a release has stopped, as well as a source of contamination for people and areas not exposed to the original release. The inhalation of resuspended radionuclides contributes to the overall dose received by exposed individuals. Based on measurements collected after the Chernobyl accident, Scenario R was developed to provide an opportunity to test existing mathematical models of contamination resuspension. In particular, this scenario provided the opportunity to examine data and test models for atmospheric resuspension of radionuclides at several different locations from the release, to investigate resuspension processes on both local and regional scales, and to investigate the importance of seasonal variations of these processes. Participants in the test exercise were provided with information for three different types of locations: (1) within the 30-km zone, where local resuspension processes are expected to dominate; (2) a large urban location (Kiev) 120 km from the release site, where vehicular traffic is expected to be the dominant mechanism for resuspension; and (3) an agricultural area 40-60 km from the release site, where highly contaminated upwind 'hot spots' are expected to be important. Input information included characteristics of the ground contamination around specific sites, climatological data for the sites, characteristics of the terrain and topography, and locations of the sampling sites. Participants were requested to predict the average (quarterly and yearly) concentrations of 137Cs in air at specified locations due to resuspension of Chernobyl fallout; predictions for 90Sr and 239+240Pu were also requested for one location and time point. Predictions for specified resuspension factors and rates were also requested. Most participants used empirical models for the resuspension factor as a function of time K(t), as opposed to process-based models. While many of these
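    Empirical resuspension-factor models of the kind most participants used typically take K(t) as a decaying function of time since deposition plus a slowly varying background. The parameterization and coefficients below are purely illustrative, not fitted Chernobyl values.

```python
import math

def resuspension_factor(t_days, k0=1e-5, k_bg=1e-9, half_life_days=365.0):
    """Generic empirical K(t) [1/m]: a fast-decaying initial component
    plus a small long-term background (illustrative parameters only)."""
    lam = math.log(2) / half_life_days
    return k0 * math.exp(-lam * t_days) + k_bg

# predicted air concentration = K(t) * ground deposition
deposition = 1.0e5  # Bq/m^2, hypothetical
for t in (30, 365, 3650):
    print(t, resuspension_factor(t) * deposition)
```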

  18. Atmospheric resuspension of radionuclides. Model testing using Chernobyl data

    Energy Technology Data Exchange (ETDEWEB)

    Garger, E.; Lev, T.; Talerko, N. [Inst. of Radioecology UAAS, Kiev (Ukraine); Galeriu, D. [Institute of Atomic Physics, Bucharest (Romania); Garland, J. [Consultant (United Kingdom); Hoffman, O.; Nair, S.; Thiessen, K. [SENES, Oak Ridge, TN (United States); Miller, C. [Centre for Disease Control, Atlanta, GA (United States); Mueller, H. [GSF - Inst. fuer Strahlenschutz, Neuherberg (Germany); Kryshev, A. [Moscow State Univ. (Russian Federation)

    1996-10-01

    Resuspension can be an important secondary source of contamination after a release has stopped, as well as a source of contamination for people and areas not exposed to the original release. The inhalation of resuspended radionuclides contributes to the overall dose received by exposed individuals. Based on measurements collected after the Chernobyl accident, Scenario R was developed to provide an opportunity to test existing mathematical models of contamination resuspension. In particular, this scenario provided the opportunity to examine data and test models for atmospheric resuspension of radionuclides at several different locations from the release, to investigate resuspension processes on both local and regional scales, and to investigate the importance of seasonal variations of these processes. Participants in the test exercise were provided with information for three different types of locations: (1) within the 30-km zone, where local resuspension processes are expected to dominate; (2) a large urban location (Kiev) 120 km from the release site, where vehicular traffic is expected to be the dominant mechanism for resuspension; and (3) an agricultural area 40-60 km from the release site, where highly contaminated upwind 'hot spots' are expected to be important. Input information included characteristics of the ground contamination around specific sites, climatological data for the sites, characteristics of the terrain and topography, and locations of the sampling sites. Participants were requested to predict the average (quarterly and yearly) concentrations of 137Cs in air at specified locations due to resuspension of Chernobyl fallout; predictions for 90Sr and 239+240Pu were also requested for one location and time point. Predictions for specified resuspension factors and rates were also requested. Most participants used empirical models for the resuspension factor as a function of time K(t), as opposed to process-based models. While many of

  19. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions obtained from a single model therefore do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model, but all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a larger ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed on different periods: 2 weeks, 1 month, 3 months and 6 months, using a PIT histogram of the percentiles of the real observation volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been
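    The PIT-histogram verification described above can be sketched as follows: for each forecast, the observation's rank within the ensemble yields one PIT value, and a flat histogram indicates a well-dispersed ensemble. The data below are synthetic; a real check would use the observed volumes and the multi-model ensemble members.

```python
import numpy as np

def pit_values(ensembles, observations):
    """One PIT value per forecast: the fraction of ensemble members
    below the observation (tie randomization omitted for brevity)."""
    return np.array([np.mean(np.asarray(ens) < obs)
                     for ens, obs in zip(ensembles, observations)])

rng = np.random.default_rng(1)
n_fcst, n_members = 200, 20
ens = rng.normal(0.0, 1.0, size=(n_fcst, n_members))
obs = rng.normal(0.0, 1.0, size=n_fcst)   # same distribution -> roughly flat PIT
hist, _ = np.histogram(pit_values(ens, obs), bins=5, range=(0, 1))
print(hist)
```

    An under-dispersed ensemble would instead pile PIT values into the outer bins (a U-shaped histogram).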

  20. Vaginal birth after caesarean section prediction models: a UK comparative observational study.

    Science.gov (United States)

    Mone, Fionnuala; Harrity, Conor; Mackie, Adam; Segurado, Ricardo; Toner, Brenda; McCormick, Timothy R; Currie, Aoife; McAuliffe, Fionnuala M

    2015-10-01

    Primarily, to assess the performance of three statistical models in predicting successful vaginal birth in patients attempting a trial of labour after one previous lower segment caesarean section (TOLAC). The statistically most reliable models were subsequently subjected to validation testing in a local antenatal population. A retrospective observational study was performed with study data collected from the Northern Ireland Maternity Service Database (NIMATs). The study population included all women that underwent a TOLAC (n=385) from 2010 to 2012 in a regional UK obstetric unit. Area under the curve (AUC) and correlation analysis were performed. Of the three prediction models evaluated, AUC calculations for the Smith et al., Grobman et al. and Troyer and Parisi models were 0.74, 0.72 and 0.65, respectively. Using the Smith et al. model, 52% of women had a low risk of caesarean section (CS) (predicted VBAC >72%) and 20% had a high risk of CS (predicted VBAC <60%), of whom 20% and 63%, respectively, had delivery by CS. The fit between observed and predicted outcome in this study cohort was greatest using the Smith et al. and Grobman et al. models (Chi-square test, p=0.228 and 0.904), validating both within the population. The Smith et al. and Grobman et al. models could potentially be utilized within the UK to provide women with an informed choice when deciding on mode of delivery after a previous CS. Crown Copyright © 2015. Published by Elsevier Ireland Ltd. All rights reserved.
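    The AUC figures quoted above can be reproduced for any scored prediction model with the rank (Mann-Whitney) formulation of the area under the ROC curve; the probabilities and outcomes below are toy values, not study data.

```python
def auc(scores, labels):
    """AUC = P(score of a random positive > score of a random negative),
    with ties counted as 1/2 (Mann-Whitney U / (n_pos * n_neg))."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy example: predicted VBAC probabilities vs. vaginal delivery (1) or CS (0)
probs  = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2]
actual = [1,   1,   0,    1,   0,    0,   1,   0]
print(auc(probs, actual))  # prints 0.75
```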

  1. Model Predictive Control with Constraints of a Wind Turbine

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    2007-01-01

    Model predictive control of wind turbines offers a more systematic approach to constructing controllers that handle constraints while focusing on the main control objective. In this article several controllers are designed for different wind conditions, and appropriate switching conditions ensure efficient control of the wind turbine over the entire range of wind speeds. Both onshore and floating offshore wind turbines are tested with the controllers.

  2. Prediction of reflood behavior for tests with differing axial power shapes using WCOBRA/TRAC

    International Nuclear Information System (INIS)

    Bajorek, S.M.; Hochreiter, L.E.

    1991-01-01

    The reactor core power shape can vary over the fuel cycle due to load follow, control rod movement, burnup effects and xenon transients. A best-estimate thermal-hydraulic code must be able to accurately predict the reflooding behavior for different axial power shapes in order to determine the power shape effects on the loss-of-coolant peak cladding temperature. Several reflood heat transfer experiments have been performed at the same or similar PWR reflood conditions with different axial power shapes. These experiments used different rod diameters, were full length, 3.65 m (12 feet) in height, and had simple egg-crate grids. The WCOBRA/TRAC code has been used to model several tests from these three experiments to examine the code's capability to predict the reflood transient for different power shapes with a consistent model and noding scheme. This paper describes the experiments, their power shapes, and the test conditions. The WCOBRA/TRAC code is described, as well as the noding scheme, and the calculated results are compared in detail with the measured rod temperatures. An overall assessment of the code's predictions of these experiments is presented.

  3. A state-based probabilistic model for tumor respiratory motion prediction

    International Nuclear Information System (INIS)

    Kalet, Alan; Sandison, George; Schmitz, Ruth; Wu Huanmei

    2010-01-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more
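    The finite-state idea can be illustrated with a first-order Markov chain estimated from a labeled breathing-state sequence. This sketch deliberately omits the hidden-state machinery and the k-means clustering of observables used in the full HMM.

```python
import numpy as np

STATES = ["inhale", "exhale", "end_of_exhale"]

def transition_matrix(seq):
    """Row-normalized counts of observed state-to-state transitions."""
    idx = {s: i for i, s in enumerate(STATES)}
    counts = np.zeros((len(STATES), len(STATES)))
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def predict_next_state(seq):
    """Most likely next state given the current one."""
    P = transition_matrix(seq)
    row = P[STATES.index(seq[-1])]
    return STATES[int(np.argmax(row))]

# toy periodic breathing trace ending in "inhale"
seq = ["inhale", "exhale", "end_of_exhale"] * 10 + ["inhale"]
print(predict_next_state(seq))  # prints exhale
```

    In the full model, each predicted state would additionally map to a cluster of observables (e.g. a velocity), turning the state prediction into a tumor-position prediction.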

  4. A Vertically Flow-Following, Icosahedral Grid Model for Medium-Range and Seasonal Prediction. Part 1: Model Description

    Science.gov (United States)

    Bleck, Rainer; Bao, Jian-Wen; Benjamin, Stanley G.; Brown, John M.; Fiorino, Michael; Henderson, Thomas B.; Lee, Jin-Luen; MacDonald, Alexander E.; Madden, Paul; Middlecoff, Jacques

    2015-01-01

    A hydrostatic global weather prediction model based on an icosahedral horizontal grid and a hybrid terrain-following/isentropic vertical coordinate is described. The model is an extension to three spatial dimensions of a previously developed, icosahedral, shallow-water model featuring user-selectable horizontal resolution and employing indirect addressing techniques. The vertical grid is adaptive to maximize the portion of the atmosphere mapped into the isentropic coordinate subdomain. The model, best described as a stacked shallow-water model, is being tested extensively on real-time medium-range forecasts to ready it for possible inclusion in operational multimodel ensembles for medium-range to seasonal prediction.

  5. On Practical tuning of Model Uncertainty in Wind Turbine Model Predictive Control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Hovgaard, Tobias

    2015-01-01

    Model predictive control (MPC) has in previous works been applied on wind turbines with promising results. These results apply linear MPC, i.e., linear models linearized at different operational points depending on the wind speed. The linearized models are derived from a nonlinear first principles...... model of a wind turbine. In this paper, we investigate the impact of this approach on the performance of a wind turbine. In particular, we focus on the most non-linear operational ranges of a wind turbine. The MPC controller is designed for, tested, and evaluated at an industrial high fidelity wind...

  6. Preliminary results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Matsumoto, T.; Komine, K.; Arai, S.

    1997-01-01

    A high pressure test of a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11-12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and the preliminary comparison of test data with pretest analysis predictions are also presented

  7. A predictive model for the behavior of radionuclides in lake systems

    International Nuclear Information System (INIS)

    Monte, L.

    1993-01-01

    This paper describes a predictive model for the behavior of 137Cs in lacustrine systems. The model was tested by comparing its predictions to contamination data collected in various lakes in Europe and North America. The migration of 137Cs from catchment basin and from bottom sediments to lake water was discussed in detail; these two factors influence the time behavior of contamination in lake water. The contributions to the levels of radionuclide concentrations in water, due to the above factors, generally increase in the long run. The uncertainty of the model, used as a generic tool for prediction of the levels of contamination in lake water, was evaluated. Data sets of water contamination analyzed in the present work suggest that the model uncertainty, at a 68% confidence level, is a factor 1.9

  8. Westinghouse-GOTHIC modeling of NUPEC's hydrogen mixing and distribution test M-4-3

    International Nuclear Information System (INIS)

    Ofstun, R.P.; Woodcock, J.; Paulsen, D.L.

    1994-01-01

    NUPEC (NUclear Power Engineering Corporation) ran a series of hydrogen mixing and distribution tests which were completed in April 1992. These tests were performed in a 1/4 linearly scaled model containment and were specifically designed to be used for computer code validation. The results of test M-4-3, along with predictions from several computer codes, were presented to the participants of ISP-35 (a blind test comparison of code calculated results with data from NUPEC test M-7-1) at a meeting in March 1993. Test M-4-3, which was similar to test M-7-1, released a mixture of steam and helium into a steam generator compartment located on the lower level of containment. The majority of codes did well at predicting the global pressure and temperature trends, however, some typical lumped parameter modeling problems were identified at that time. In particular, the models had difficulty predicting the temperature and helium concentrations in the so called 'dead ended volumes' (pressurizer compartment and in-core chase region). Modeling of the dead-ended compartments using a single lumped parameter volume did not yield the appropriate temperature and helium response within that volume. The Westinghouse-GOTHIC (WGOTHIC) computer code is capable of modeling in one, two or three dimensions (or any combination thereof). This paper describes the WGOTHIC modeling of the dead-ended compartments for NUPEC test M-4-3 and gives comparisons to the test data. 1 ref., 1 tab., 14 figs

  9. Predicting the weathering of fuel and oil spills: A diffusion-limited evaporation model.

    Science.gov (United States)

    Kotzakoulakis, Konstantinos; George, Simon C

    2018-01-01

    The majority of the evaporation models currently available in the literature for the prediction of oil spill weathering do not take into account diffusion-limited mass transport and the formation of a concentration gradient in the oil phase. The altered surface concentration of the spill caused by diffusion-limited transport leads to a slower evaporation rate compared to the predictions of diffusion-agnostic evaporation models. The model presented in this study incorporates a diffusive layer in the oil phase and predicts the diffusion-limited evaporation rate. The information required is the composition of the fluid from gas chromatography or, alternatively, the distillation data. If the density or a single viscosity measurement is available, the accuracy of the predictions is higher. Environmental conditions such as water temperature, air pressure and wind velocity are taken into account. The model was tested with synthetic mixtures, petroleum fuels and crude oils with initial viscosities ranging from 2 to 13,000 cSt. The tested temperatures varied from 0 °C to 23.4 °C and wind velocities from 0.3 to 3.8 m/s. The average absolute deviation (AAD) of the diffusion-limited model ranged between 1.62% and 24.87%. In comparison, the AAD of a diffusion-agnostic model ranged between 2.34% and 136.62% against the same tested fluids. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Outcome Prediction in Mathematical Models of Immune Response to Infection.

    Directory of Open Access Journals (Sweden)

    Manuel Mai

    Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models for the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and to obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e., that the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
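    The one-versus-all logistic-regression step can be sketched in plain NumPy: one binary classifier per steady-state outcome, with the highest score deciding the class. The clustered toy features below stand in for the 'virtual patient' trajectories; all names and parameters here are illustrative.

```python
import numpy as np

def train_ova(X, y, n_classes, lr=0.5, epochs=500):
    """One weight vector per class, each trained as binary logistic regression."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    W = np.zeros((n_classes, Xb.shape[1]))
    for c in range(n_classes):
        t = (y == c).astype(float)              # 1 for class c, 0 otherwise
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-Xb @ W[c]))
            W[c] -= lr * Xb.T @ (p - t) / len(X)
    return W

def predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ W.T, axis=1)          # highest score wins

# three well-separated synthetic "outcome" clusters
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 0.3, size=(30, 2)) for m in (-2.0, 0.0, 2.0)])
y = np.repeat(np.arange(3), 30)
W = train_ova(X, y, n_classes=3)
print(np.mean(predict(W, X) == y))
```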

  11. Putting atomic diffusion theory of magnetic ApBp stars to the test: evaluation of the predictions of time-dependent diffusion models

    Science.gov (United States)

    Kochukhov, O.; Ryabchikova, T. A.

    2018-02-01

    A series of recent theoretical atomic diffusion studies has addressed the challenging problem of predicting inhomogeneous vertical and horizontal chemical element distributions in the atmospheres of magnetic ApBp stars. Here we critically assess the most sophisticated of these diffusion models - based on a time-dependent treatment of atomic diffusion in a magnetized stellar atmosphere - by direct comparison with observations, as well as by testing the widely used surface mapping tools with the spectral line profiles predicted by this theory. We show that the mean abundances of Fe and Cr are grossly underestimated by the time-dependent theoretical diffusion model, with discrepancies reaching a factor of 1000 for Cr. We also demonstrate that Doppler imaging inversion codes, based either on modelling of individual metal lines or on line-averaged profiles simulated according to the theoretical three-dimensional abundance distribution, are able to reconstruct correct horizontal chemical spot maps despite ignoring the vertical abundance variation. These numerical experiments justify a direct comparison of the empirical two-dimensional Doppler maps with theoretical diffusion calculations. This comparison is generally unfavourable for the current diffusion theory, as very few chemical elements are observed to form overabundance rings in the horizontal-field regions as predicted by the theory, and there are numerous examples of element accumulations in the vicinity of radial-field zones, which cannot be explained by the diffusion calculations.

  12. External validation of the Cairns Prediction Model (CPM) to predict conversion from laparoscopic to open cholecystectomy.

    Science.gov (United States)

    Hu, Alan Shiun Yew; O'Donohue, Peter; Gunnarsson, Ronny K; de Costa, Alan

    2018-03-14

    Valid and user-friendly prediction models for conversion to open cholecystectomy allow for proper planning prior to surgery. The Cairns Prediction Model (CPM) has been in clinical use at the original study site for the past three years, but has not been tested at other sites. A retrospective, single-centred study collected ultrasonic measurements and clinical variables, along with conversion status, from consecutive patients who underwent laparoscopic cholecystectomy from 2013 to 2016 in The Townsville Hospital, North Queensland, Australia. An area under the curve (AUC) was calculated to externally validate the CPM. Conversion was necessary in 43 (4.2%) of 1035 patients. External validation showed an area under the curve of 0.87 (95% CI 0.82-0.93, p = 1.1 × 10⁻¹⁴). In comparison with most previously published models, which have an AUC of approximately 0.80 or less, the CPM has the highest AUC of all published prediction models for both internal and external validation. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.

  13. Evolutionary modeling and prediction of non-coding RNAs in Drosophila.

    Directory of Open Access Journals (Sweden)

    Robert K Bradley

    2009-08-01

    Full Text Available We performed benchmarks of phylogenetic grammar-based ncRNA gene prediction, experimenting with eight different models of structural evolution and two different programs for genome alignment. We evaluated our models using alignments of twelve Drosophila genomes. We find that ncRNA prediction performance can vary greatly between different gene predictors and subfamilies of ncRNA genes. Our estimates of false positive rates are based on simulations which preserve local islands of conservation; using these simulations, we predict a higher rate of false positives than previous computational ncRNA screens have reported. Using one of the tested prediction grammars, we provide an updated set of ncRNA predictions for D. melanogaster and compare them to previously published predictions and experimental data. Many of our predictions show correlations with protein-coding genes. We found significant depletion of intergenic predictions near the 3' end of coding regions and, furthermore, depletion of predictions in the first intron of protein-coding genes. Some of our predictions are colocated with larger putative unannotated genes: for example, 17 of our predictions showing homology to the Rfam family snoR28 appear in a tandem array on the X chromosome; the 4.5 kbp spanned by the predicted tandem array is contained within a FlyBase-annotated cDNA.

  14. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Eric R. Edelman

    2017-06-01

    Full Text Available For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.

  15. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

    Science.gov (United States)

    Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

    2017-01-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
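
    The fixed ratio model described above is a one-line calculation, while the regression alternative fits coefficients to data. A minimal sketch of both, using invented toy records rather than the Dutch benchmarking variables:

```python
def tpt_fixed_ratio(esct):
    """Fixed ratio model: ACT is taken as 33% of SCT, so TPT ≈ eSCT * 1.33."""
    return 1.33 * esct

def fit_ols(x, y):
    """One-predictor least-squares fit, returning (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

# Invented records: eSCT (min) and observed TPT (min)
esct = [60.0, 90.0, 120.0, 150.0]
tpt = [85.0, 125.0, 163.0, 205.0]
a, b = fit_ols(esct, tpt)
print(tpt_fixed_ratio(100.0))  # 133.0
print(a + b * 100.0)           # regression prediction for eSCT = 100
```

In the paper the regression also takes categorical predictors (type of operation, ASA class, anesthesia type); the one-predictor fit here only illustrates the mechanism.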

  16. Babcock and Wilcox model for predicting in-reactor densification

    International Nuclear Information System (INIS)

    Buescher, B.J.; Pegram, J.W.

    1977-07-01

    The B and W densification model is based on a correlation between in-reactor densification and a thermal resintering test. The densification model has been found to predict in-reactor densification with a remarkable degree of accuracy for fuel pellets operated at heat rates above 5 kW/ft and with considerable conservatism for pellets operated at heat rates below 5 kW/ft.

  17. Prospective Tests on Biological Models of Acupuncture

    Directory of Open Access Journals (Sweden)

    Charles Shang

    2009-01-01

    Full Text Available The biological effects of acupuncture include the regulation of a variety of neurohumoral factors and growth control factors. In science, models or hypotheses with confirmed predictions are considered more convincing than models solely based on retrospective explanations. Literature review showed that two biological models of acupuncture have been prospectively tested with independently confirmed predictions: The neurophysiology model on the long-term effects of acupuncture emphasizes the trophic and anti-inflammatory effects of acupuncture. Its prediction on the peripheral effect of endorphin in acupuncture has been confirmed. The growth control model encompasses the neurophysiology model and suggests that a macroscopic growth control system originates from a network of organizers in embryogenesis. The activity of the growth control system is important in the formation, maintenance and regulation of all the physiological systems. Several phenomena of acupuncture such as the distribution of auricular acupuncture points, the long-term effects of acupuncture and the effect of multimodal non-specific stimulation at acupuncture points are consistent with the growth control model. The following predictions of the growth control model have been independently confirmed by research results in both acupuncture and conventional biomedical sciences: (i) Acupuncture has extensive growth control effects. (ii) Singular point and separatrix exist in morphogenesis. (iii) Organizers have high electric conductance, high current density and high density of gap junctions. (iv) A high density of gap junctions is distributed as separatrices or boundaries at the body surface after early embryogenesis. (v) Many acupuncture points are located at transition points or boundaries between different body domains or muscles, coinciding with the connective tissue planes. (vi) Some morphogens and organizers continue to function after embryogenesis.
Current acupuncture research suggests a

  18. Differential Prediction Generalization in College Admissions Testing

    Science.gov (United States)

    Aguinis, Herman; Culpepper, Steven A.; Pierce, Charles A.

    2016-01-01

    We introduce the concept of "differential prediction generalization" in the context of college admissions testing. Specifically, we assess the extent to which predicted first-year college grade point average (GPA) based on high-school grade point average (HSGPA) and SAT scores depends on a student's ethnicity and gender and whether this…

  19. Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach

    Science.gov (United States)

    Tsai, Bi-Huei; Chang, Chih-Huei

    2009-08-01

    Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator must shift according to economic prosperity, rather than remaining fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based upon a two-stage method. First, this study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, this paper further focuses on macroeconomic factors and applies the rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using the training sample from 1987 to 2004, and their levels of accuracy are compared on the test sample from 2005 to 2007. As for the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than that without them, suggesting that accuracy is not improved for one-stage models which pool the firm-specific and macroeconomic factors together. Regarding the two-stage models, the negative credit cycle index implies worse economic conditions during the test period, so the distressed cut-off point is adjusted upward based on this negative credit cycle index. After the two-stage models employ the adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error is lower than that of the one-stage models. The two-stage models presented in this paper thus have incremental usefulness in predicting financial distress.
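
    The two-stage idea, an estimated distress probability compared against a macro-adjusted cut-off, can be sketched as follows. The linear adjustment form and the coefficient k are hypothetical illustrations, not the paper's actual rating-transition calculation:

```python
def adjusted_cutoff(base_cutoff, credit_cycle_index, k=0.5):
    """Hypothetical second-stage rule: a negative credit cycle index
    (worse macro conditions) raises the distress cut-off point."""
    return base_cutoff - k * credit_cycle_index

def classify_distressed(prob, base_cutoff, credit_cycle_index, k=0.5):
    """First stage supplies prob (e.g. from a discrete-time hazard model);
    second stage compares it with the shifted cut-off."""
    return prob >= adjusted_cutoff(base_cutoff, credit_cycle_index, k)

# Same firm-level probability, different macro regimes
print(classify_distressed(0.6, 0.5, 0.0))   # True  (neutral cycle)
print(classify_distressed(0.6, 0.5, -0.4))  # False (downturn raises cut-off)
```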

  20. Predicting turns in proteins with a unified model.

    Directory of Open Access Journals (Sweden)

    Qi Song

    Full Text Available MOTIVATION: Turns are a critical element of protein structure; they play a crucial role in loops, folds, and interactions. Current prediction methods are well developed for individual turn types, including α-turns, β-turns, and γ-turns. However, for further protein structure and function prediction it is necessary to develop a unified model that can accurately predict all types of turns simultaneously. RESULTS: In this study, we present a novel approach, TurnP, which offers the ability to investigate all the turns in a protein based on a unified model. The main characteristics of TurnP are: (i) the use of newly exploited features of structural evolution information (secondary structure and shape string) of a protein based on structure homologies, (ii) consideration of all types of turns in a unified model, and (iii) the practical capability of accurately predicting all turns simultaneously for a query. TurnP utilizes predicted secondary structures and predicted shape strings, both obtained with high accuracy by methods previously developed by our group. Then, sequence and structural evolution features, namely the profiles of sequence, secondary structure, and shape strings, are generated by sequence and structure alignment. When TurnP was validated on a non-redundant dataset (4,107 entries) by five-fold cross-validation, we achieved an accuracy of 88.8% and a sensitivity of 71.8%, exceeding state-of-the-art predictors of individual turn types. Newly determined sequences and the EVA and CASP9 datasets were used as independent tests, and the results achieved were outstanding for turn prediction, confirming the good performance of TurnP for practical applications.

  1. Nonparametric predictive inference for combining diagnostic tests with parametric copula

    Science.gov (United States)

    Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.

    2017-09-01

    Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests. The area under the ROC curve (AUC) is often used as a measure of the overall performance of the diagnostic test. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results while taking the dependence structure into account using a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations; it uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula is a well-known statistical concept for modelling the dependence of random variables: a joint distribution function whose marginals are all uniformly distributed, which can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density using maximum likelihood estimation (MLE). We investigate the performance of the proposed method on data sets from the literature and discuss the results to show how our method performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
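
    To illustrate the copula idea, dependence specified separately from uniform marginals, here is a sketch that samples from a bivariate Gaussian copula, one common parametric family. The NPI machinery itself is not reproduced:

```python
import math
import random

def gaussian_copula_pair(rho, rng):
    """One draw (u, v) from a bivariate Gaussian copula with correlation
    rho: correlated standard normals pushed through the normal CDF, so
    each margin is uniform on (0, 1) while the dependence is preserved."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    ncdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return ncdf(z1), ncdf(z2)

rng = random.Random(1)
pairs = [gaussian_copula_pair(0.8, rng) for _ in range(5000)]
# Both margins are uniform, but u and v are strongly dependent for rho = 0.8
```

Arbitrary marginal distributions can then be obtained by applying their inverse CDFs to u and v, which is exactly the "dependence separate from marginals" property the abstract describes.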

  2. Swarm Intelligence-Based Hybrid Models for Short-Term Power Load Prediction

    Directory of Open Access Journals (Sweden)

    Jianzhou Wang

    2014-01-01

    Full Text Available Swarm intelligence (SI) is widely and successfully applied in the engineering field to solve practical optimization problems, and various hybrid models, which combine SI algorithms with statistical models, have been developed to further improve predictive ability. In this paper, hybrid intelligent forecasting models based on cuckoo search (CS) together with singular spectrum analysis (SSA), time series, and machine learning methods are proposed for short-term power load prediction. The forecasting performance of the proposed models is augmented by a rolling multistep strategy over the prediction horizon. The test results demonstrate the effectiveness of SSA and CS in tuning the seasonal autoregressive integrated moving average (SARIMA) and support vector regression (SVR) models for load forecasting, which indicates that both the SSA-based data denoising and the SI-based intelligent optimization strategy can effectively improve a model’s predictive performance. Additionally, the proposed CS-SSA-SARIMA and CS-SSA-SVR models provide very impressive forecasting results, demonstrating their strong robustness and universal forecasting capacity for short-term power load prediction 24 hours in advance.
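
    The rolling multistep strategy mentioned above can be sketched generically: each one-step-ahead forecast is fed back into the series before the next step is predicted. The one-step forecaster below is a trivial drift rule standing in for the tuned SARIMA/SVR models:

```python
def rolling_multistep(history, one_step, horizon):
    """Multistep forecasting by rolling a one-step-ahead model forward,
    appending each forecast to the working series before predicting
    the next step."""
    work = list(history)
    forecasts = []
    for _ in range(horizon):
        yhat = one_step(work)   # one-step-ahead forecast from current data
        forecasts.append(yhat)
        work.append(yhat)
    return forecasts

# Stand-in one-step model: last value plus last observed increment (drift)
drift = lambda s: s[-1] + (s[-1] - s[-2])
print(rolling_multistep([10.0, 12.0, 14.0], drift, 3))  # [16.0, 18.0, 20.0]
```

For 24-hour-ahead load prediction on hourly data, `horizon` would be 24 and `one_step` the fitted model's one-step forecast.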

  3. PREDICTIVE MODELS FOR SUPPORT OF INCIDENT MANAGEMENT PROCESS IN IT SERVICE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Martin SARNOVSKY

    2018-03-01

    Full Text Available ABSTRACT The work presented in this paper is focused on creating predictive models that help in the process of incident resolution and the implementation of IT infrastructure changes, to increase the overall support of IT management. Our main objective was to build the predictive models using machine learning algorithms and the CRISP-DM methodology. We used the incident and related-changes database obtained from the IT environment of the Rabobank Group, which contained information about the processing of incidents during the incident management process. We decided to investigate the dependencies between the incident observation on a particular infrastructure component and the actual source of the incident, as well as the dependency between incidents and related changes in the infrastructure. We used Random Forest and Gradient Boosting Machine classifiers in the process of identification of the incident source as well as in the prediction of the possible impact of an observed incident. Both types of models were tested on a testing set and evaluated using defined metrics.

  4. Target Soil Impact Verification: Experimental Testing and Kayenta Constitutive Modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Broome, Scott Thomas [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Flint, Gregory Mark [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Dewers, Thomas [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Newell, Pania [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    This report details experimental testing and constitutive modeling of sandy soil deformation under quasi-static conditions. This is driven by the need to understand the constitutive response of soil to target/component behavior upon impact. An experimental and constitutive modeling program was followed to determine elastic-plastic properties and a compressional failure envelope of dry soil. One hydrostatic, one unconfined compressive stress (UCS), nine axisymmetric compression (ACS), and one uniaxial strain (US) test were conducted at room temperature. Elastic moduli, assuming isotropy, are determined from unload/reload loops and final unloading for all tests pre-failure and increase monotonically with mean stress. Very little modulus degradation was discernible from elastic results, even when exposed to mean stresses above 200 MPa. The failure envelope and initial yield surface were determined from peak stresses and the observed onset of plastic yielding from all test results. Soil elasto-plastic behavior is described using the Brannon et al. (2009) Kayenta constitutive model. As a validation exercise, the ACS-parameterized Kayenta model is used to predict the response of the soil material under uniaxial strain loading. The resulting parameterized and validated Kayenta model is of high quality and suitable for modeling sandy soil deformation under a range of conditions, including that for impact prediction.
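
    As an illustration of the modulus determination, an elastic modulus can be estimated as the least-squares slope of stress versus strain over an unload/reload loop. The loop data below are invented for illustration, not taken from the report:

```python
def modulus_from_loop(strain, stress_mpa):
    """Least-squares slope of stress (MPa) vs. dimensionless strain over
    an unload/reload loop, i.e. an estimate of the elastic modulus."""
    n = len(strain)
    me = sum(strain) / n
    ms = sum(stress_mpa) / n
    num = sum((e - me) * (s - ms) for e, s in zip(strain, stress_mpa))
    den = sum((e - me) ** 2 for e in strain)
    return num / den

# Invented unload/reload points for a soil-like stiffness
strain = [0.000, 0.005, 0.010, 0.015]
stress = [0.0, 1.0, 2.0, 3.0]             # MPa
print(modulus_from_loop(strain, stress))  # ~200 MPa
```

Repeating this fit on loops taken at increasing mean stress would reproduce the report's observation of moduli increasing monotonically with mean stress.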

  5. Formability prediction for AHSS materials using damage models

    Science.gov (United States)

    Amaral, R.; Santos, Abel D.; José, César de Sá; Miranda, Sara

    2017-05-01

    Advanced high strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, these materials, characterized by a high strength-to-weight ratio, stiffness and high work hardening at early stages of plastic deformation, have imposed many challenges on the sheet metal industry, mainly their low formability and different behaviour when compared to traditional steels. This represents a challenging task, both in obtaining a successful component and when using numerical simulation to predict material behaviour and its fracture limits. Although numerical prediction of critical strains in sheet metal forming processes is still very often based on the classic forming limit diagrams, alternative approaches can use damage models, which are based on stress states to predict failure during the forming process and can be classified as empirical, physics-based and phenomenological models. In the present paper a comparative analysis of different ductile damage models is carried out, in order to numerically evaluate two isotropic coupled damage models, proposed by Johnson-Cook and by Gurson-Tvergaard-Needleman (GTN), corresponding to the first two of these groups. Finite element analysis is used considering these damage mechanics approaches, and the obtained results are compared with experimental Nakajima tests, making it possible to evaluate and validate their ability to predict damage and formability limits.

  6. Formability prediction for AHSS materials using damage models

    International Nuclear Information System (INIS)

    Amaral, R.; Miranda, Sara; Santos, Abel D.; José, César de Sá

    2017-01-01

    Advanced high strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, these materials, characterized by a high strength-to-weight ratio, stiffness and high work hardening at early stages of plastic deformation, have imposed many challenges on the sheet metal industry, mainly their low formability and different behaviour when compared to traditional steels. This represents a challenging task, both in obtaining a successful component and when using numerical simulation to predict material behaviour and its fracture limits. Although numerical prediction of critical strains in sheet metal forming processes is still very often based on the classic forming limit diagrams, alternative approaches can use damage models, which are based on stress states to predict failure during the forming process and can be classified as empirical, physics-based and phenomenological models. In the present paper a comparative analysis of different ductile damage models is carried out, in order to numerically evaluate two isotropic coupled damage models, proposed by Johnson-Cook and by Gurson-Tvergaard-Needleman (GTN), corresponding to the first two of these groups. Finite element analysis is used considering these damage mechanics approaches, and the obtained results are compared with experimental Nakajima tests, making it possible to evaluate and validate their ability to predict damage and formability limits. (paper)

  7. A Predictive Distribution Model for Cooperative Braking System of an Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Hongqiang Guo

    2014-01-01

    Full Text Available A predictive distribution model for a series cooperative braking system of an electric vehicle is proposed, which can solve the real-time problem of optimum braking force distribution. To obtain the predictive distribution model, three disciplines are first considered: the maximum regenerative energy recovery capability, the maximum generating efficiency and the optimum braking stability. An off-line process optimization stream is then designed, in which the optimal Latin hypercube design (Opt LHD) method and a radial basis function neural network (RBFNN) are utilized. In order to decouple the variables between different disciplines, a concurrent subspace design (CSD) algorithm is suggested. The established predictive distribution model is verified in a dynamic simulation. The off-line optimization results show that the proposed process optimization stream can improve the regenerative energy recovery efficiency and optimize braking stability simultaneously. Further simulation tests demonstrate that the predictive distribution model achieves high prediction accuracy and is very beneficial for the cooperative braking system.

  8. Predicting Freshman Grade Point Average From College Admissions Test Scores and State High School Test Scores

    Directory of Open Access Journals (Sweden)

    Daniel Koretz

    2016-09-01

    Full Text Available The current focus on assessing “college and career readiness” raises an empirical question: How do high school tests compare with college admissions tests in predicting performance in college? We explored this using data from the City University of New York and public colleges in Kentucky. These two systems differ in the choice of college admissions test, the stakes for students on the high school test, and demographics. We predicted freshman grade point average (FGPA) from high school GPA and both college admissions and high school tests in mathematics and English. In both systems, the choice of tests had only trivial effects on the aggregate prediction of FGPA. Adding either test to an equation that included the other had only trivial effects on prediction. Although the findings suggest that the choice of test might advantage or disadvantage different students, it had no substantial effect on the over- and underprediction of FGPA for students classified by race-ethnicity or poverty.

  9. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers’ training events, and they are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure, obtained by eliminating some of the predictors.
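
    Leave-one-out cross-validation, used above for model selection, is easy to sketch: fit on all but one observation, predict the held-out result, and average the squared errors. A plain one-predictor least-squares fit stands in here for the LASSO models:

```python
def fit_line(x, y):
    """One-predictor least-squares fit returning (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def loocv_mse(x, y):
    """Mean squared leave-one-out prediction error: each observation is
    held out once, the line is fit on the rest, and the held-out point
    is predicted."""
    err = 0.0
    for i in range(len(x)):
        a, b = fit_line(x[:i] + x[i + 1:], y[:i] + y[i + 1:])
        err += (a + b * x[i] - y[i]) ** 2
    return err / len(x)

# Perfectly linear toy data -> near-zero LOOCV error
print(loocv_mse([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0]))
```

Comparing `loocv_mse` across candidate model forms (linear vs. quadratic terms, with or without certain predictors) is exactly the selection procedure the abstract describes.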

  10. Liver function tests and risk prediction of incident type 2 diabetes : evaluation in two independent cohorts

    NARCIS (Netherlands)

    Abbasi, Ali; Bakker, Stephan J. L.; Corpeleijn, Eva; van der A, Daphne L.; Gansevoort, Ron T.; Gans, Rijk O. B.; Peelen, Linda M.; van der Schouw, Yvonne T.; Stolk, Ronald P.; Navis, Gerjan; Spijkerman, Annemieke M. W.; Beulens, Joline W. J.

    2012-01-01

    Background: Liver function tests might predict the risk of type 2 diabetes. An independent study evaluating utility of these markers compared with an existing prediction model is yet lacking. Methods and Findings: We performed a case-cohort study, including random subcohort (6.5%) from 38,379

  11. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to a lack of differentiation between the goals of predictive modeling and causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
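
    Of the three performance measures, the Brier score is the simplest to state: the mean squared gap between predicted probability and the 0/1 outcome. A minimal sketch:

```python
def brier_score(pred_probs, outcomes):
    """Mean squared difference between predicted probabilities and
    observed 0/1 outcomes. 0 is perfect; 0.25 is what a constant,
    uninformative prediction of 0.5 achieves."""
    return sum((p - o) ** 2
               for p, o in zip(pred_probs, outcomes)) / len(outcomes)

print(brier_score([0.9, 0.2, 0.8], [1, 0, 1]))  # small: well-calibrated
print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.25: uninformative
```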

  12. Conditional Monte Carlo randomization tests for regression models.

    Science.gov (United States)

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
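
    The Monte Carlo approach can be sketched for the simplest case, a difference in means under randomization with fixed group sizes; design-specific sequence generation (permuted blocks, biased coins) would replace the label shuffle below:

```python
import random

def mc_randomization_pvalue(outcomes, treat, n_mc=2000, seed=0):
    """Monte Carlo randomization test: regenerate the treatment labels
    n_mc times and compare |observed mean difference| with its
    re-randomization distribution. Shuffling the labels keeps group
    sizes fixed; it is a simple stand-in for the trial's actual
    randomization procedure."""
    def mean_diff(y, t):
        g1 = [yi for yi, ti in zip(y, t) if ti == 1]
        g0 = [yi for yi, ti in zip(y, t) if ti == 0]
        return sum(g1) / len(g1) - sum(g0) / len(g0)

    rng = random.Random(seed)
    observed = abs(mean_diff(outcomes, treat))
    hits = 0
    for _ in range(n_mc):
        labels = treat[:]
        rng.shuffle(labels)
        if abs(mean_diff(outcomes, labels)) >= observed:
            hits += 1
    return (hits + 1) / (n_mc + 1)  # add-one correction keeps p > 0

# Clear treatment effect -> small p-value
p = mc_randomization_pvalue([5.0, 6.0, 7.0, 0.0, 1.0, 2.0], [1, 1, 1, 0, 0, 0])
```

In the paper the test statistic would instead be built from model residuals (generalized linear, martingale, or mixed-model rate-of-change), but the Monte Carlo loop is the same.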

  13. Leaching of saltstone: Laboratory and field testing and mathematical modeling

    International Nuclear Information System (INIS)

    Grant, M.W.; Langton, C.A.; Oblath, S.B.; Pepper, D.W.; Wallace, R.M.; Wilhite, E.L.; Yau, W.W.F.

    1987-01-01

    A low-level alkaline salt solution will be a byproduct in the processing of high-level waste at the Savannah River Plant (SRP). This solution will be incorporated into a wasteform, saltstone, and disposed of in surface vaults. Laboratory and field leach testing and mathematical modeling have demonstrated the predictability of contaminant release from cement wasteforms. Saltstone disposal in surface vaults will meet the design objective, which is to meet drinking water standards in shallow groundwater at the disposal area boundary. Diffusion is the predominant mechanism for release of contaminants to the environment. Leach testing in unsaturated soil, at soil moisture levels above 1 wt %, has shown no difference in leach rate compared to leaching in distilled water. Field leach testing of three thirty-ton blocks of saltstone in lysimeters has been underway since January 1984. Mathematical models were applied to assess design features for saltstone disposal. One dimensional infinite-composite and semi-infinite analytical models were developed for assessing diffusion of nitrate from saltstone through a cement barrier. Numerical models, both finite element and finite difference, were validated by comparison of model predictions with the saltstone lysimeter results. Validated models were used to assess the long-term performance of the saltstone stored in surface vaults. The maximum concentrations of all contaminants released from saltstone to shallow groundwater are predicted to be below drinking water standards at the disposal area boundary. 5 refs., 11 figs., 5 tabs
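
    For the diffusion-dominated release described above, the classic semi-infinite-medium result gives a cumulative fraction leached that grows with the square root of time. A sketch of that textbook formula (not the report's analytical or finite-element models; the De and geometry values are invented):

```python
import math

def fraction_leached(de, t, s_over_v):
    """Cumulative fraction released by diffusion from a semi-infinite
    solid: f = (S/V) * 2 * sqrt(De * t / pi), with De the effective
    diffusivity (cm^2/s), t the time (s), and S/V the surface-to-volume
    ratio (1/cm). Valid only while f remains small."""
    return s_over_v * 2.0 * math.sqrt(de * t / math.pi)

# Invented values: De = 1e-8 cm^2/s, one year, S/V = 0.1 1/cm
print(fraction_leached(1e-8, 3.15e7, 0.1))
```

The square-root time dependence is the signature of diffusion-controlled release, which is why leach-test data plotted against sqrt(t) can be used to back out an effective diffusivity.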

  14. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

…in noise experiment was used for training and an ideal binary mask experiment was used for evaluation. All three models were able to capture the trends in the speech-in-noise training data well, but the proposed model provides a better prediction of the binary mask test data, particularly when the binary masks degenerate to a noise vocoder…

  16. The Prediction of Drought-Related Tree Mortality in Vegetation Models

    Science.gov (United States)

    Schwinning, S.; Jensen, J.; Lomas, M. R.; Schwartz, B.; Woodward, F. I.

    2013-12-01

Drought-related tree die-off events at regional scales have been reported from all wooded continents and it has been suggested that their frequency may be increasing. The prediction of these drought-related die-off events from regional to global scales has been recognized as a critical need for the conservation of forest resources and improving the prediction of climate-vegetation interactions. However, there is no conceptual consensus on how to best approach the quantitative prediction of tree mortality. Current models use a variety of mechanisms to represent demographic events. Mortality is modeled to represent a number of different processes, including death by fire, wind throw, extreme temperatures, and self-thinning, and vegetation models differ in the emphasis they place on specific mechanisms. Dynamic global vegetation models generally operate on the assumption of incremental vegetation shift due to changes in the carbon economy of plant functional types and proportional effects on recruitment, growth, competition and mortality, but this may not capture sudden and sweeping tree death caused by extreme weather conditions. We tested several different approaches to predicting tree mortality within the framework of the Sheffield Dynamic Global Vegetation Model. We applied the model to the state of Texas, USA, which in 2011 experienced extreme drought conditions, causing the death of an estimated 300 million trees statewide. We then compared predicted to actual mortality to determine which algorithms most accurately predicted geographical variation in tree mortality. We discuss implications regarding the ongoing debate on the causes of tree death.

  17. Model for prediction of strip temperature in hot strip steel mill

    International Nuclear Information System (INIS)

    Panjkovic, Vladimir

    2007-01-01

Proper functioning of set-up models in a hot strip steel mill requires reliable prediction of strip temperature. Temperature prediction is particularly important for accurate calculation of rolling force because of the strong dependence of yield stress and strip microstructure on temperature. A comprehensive model was developed to replace an obsolete model in the Western Port hot strip mill of BlueScope Steel. The new model predicts the strip temperature evolution from the roughing mill exit to the finishing mill exit. It takes into account the radiative and convective heat losses, forced flow boiling and film boiling of water at the strip surface, deformation heat in the roll gap, frictional sliding heat, heat of scale formation and the heat transfer between strip and work rolls through an oxide layer. The significance of phase transformation was also investigated. The model was tested against plant measurements and benchmarked against other models in the literature, and its performance was very good
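The radiative and convective loss terms the abstract mentions can be illustrated with a lumped-capacitance cooling sketch. All material properties and coefficients below are generic illustrative values for a hot steel strip, not parameters of the mill model itself.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def strip_cooling(t_start=1273.0, t_amb=298.0, thickness=0.03,
                  eps=0.8, h_conv=15.0, rho=7850.0, cp=650.0,
                  dt=0.1, duration=30.0):
    """Explicit-Euler cooling of a strip losing heat from both faces by
    radiation and convection (lumped capacitance; all values illustrative).

    Returns the temperature history in kelvin.
    """
    heat_capacity = rho * cp * thickness  # J m^-2 K^-1 (per unit strip area)
    temps, t_k = [t_start], t_start
    for _ in range(int(duration / dt)):
        q_rad = eps * SIGMA * (t_k**4 - t_amb**4)  # W m^-2 per face
        q_conv = h_conv * (t_k - t_amb)            # W m^-2 per face
        t_k -= 2.0 * (q_rad + q_conv) / heat_capacity * dt  # both faces
        temps.append(t_k)
    return temps
```

At rolling temperatures the fourth-power radiative term dominates the convective one, which is why radiative loss matters so much in interstand temperature prediction.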

  18. Model for prediction of strip temperature in hot strip steel mill

    Energy Technology Data Exchange (ETDEWEB)

    Panjkovic, Vladimir [BlueScope Steel, TEOB, 1 Bayview Road, Hastings Vic. 3915 (Australia)]. E-mail: Vladimir.Panjkovic@BlueScopeSteel.com

    2007-10-15

Proper functioning of set-up models in a hot strip steel mill requires reliable prediction of strip temperature. Temperature prediction is particularly important for accurate calculation of rolling force because of the strong dependence of yield stress and strip microstructure on temperature. A comprehensive model was developed to replace an obsolete model in the Western Port hot strip mill of BlueScope Steel. The new model predicts the strip temperature evolution from the roughing mill exit to the finishing mill exit. It takes into account the radiative and convective heat losses, forced flow boiling and film boiling of water at the strip surface, deformation heat in the roll gap, frictional sliding heat, heat of scale formation and the heat transfer between strip and work rolls through an oxide layer. The significance of phase transformation was also investigated. The model was tested against plant measurements and benchmarked against other models in the literature, and its performance was very good.

  19. Modeling Stationary Lithium-Ion Batteries for Optimization and Predictive Control: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Raszmann, Emma; Baker, Kyri; Shi, Ying; Christensen, Dane

    2017-02-22

    Accurately modeling stationary battery storage behavior is crucial to understand and predict its limitations in demand-side management scenarios. In this paper, a lithium-ion battery model was derived to estimate lifetime and state-of-charge for building-integrated use cases. The proposed battery model aims to balance speed and accuracy when modeling battery behavior for real-time predictive control and optimization. In order to achieve these goals, a mixed modeling approach was taken, which incorporates regression fits to experimental data and an equivalent circuit to model battery behavior. A comparison of the proposed battery model output to actual data from the manufacturer validates the modeling approach taken in the paper. Additionally, a dynamic test case demonstrates the effects of using regression models to represent internal resistance and capacity fading.
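The mixed modeling approach described, regression fits plus an equivalent circuit, can be sketched with a minimal Thevenin-style battery model. The open-circuit-voltage curve, the resistance fit, and all parameter values below are illustrative assumptions standing in for the paper's regression fits, not its actual parameters.

```python
def simulate_battery(current_profile, dt=1.0, capacity_ah=10.0, soc0=0.9):
    """Minimal equivalent-circuit battery sketch (no RC branch).

    current_profile: amps at each time step (positive = discharge).
    Returns a list of (state_of_charge, terminal_voltage) tuples.
    """
    def ocv(soc):
        # Linear open-circuit-voltage fit (assumed shape)
        return 3.2 + 0.9 * soc

    def r_int(soc):
        # Internal resistance rising at low state of charge (assumed shape)
        return 0.05 + 0.02 * (1.0 - soc)

    soc, out = soc0, []
    for i_amps in current_profile:
        soc -= i_amps * dt / 3600.0 / capacity_ah  # coulomb counting
        soc = min(max(soc, 0.0), 1.0)
        out.append((soc, ocv(soc) - i_amps * r_int(soc)))
    return out
```

The appeal of this structure for real-time predictive control is that each step is a handful of arithmetic operations, while the regression-fit sub-models (here the `ocv` and `r_int` placeholders) carry the experimentally calibrated behavior.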

  20. Risk Prediction Models in Psychiatry: Toward a New Frontier for the Prevention of Mental Illnesses.

    Science.gov (United States)

    Bernardini, Francesco; Attademo, Luigi; Cleary, Sean D; Luther, Charles; Shim, Ruth S; Quartesan, Roberto; Compton, Michael T

    2017-05-01

We conducted a systematic, qualitative review of risk prediction models designed and tested for depression, bipolar disorder, generalized anxiety disorder, posttraumatic stress disorder, and psychotic disorders. Our aim was to understand the current state of research on risk prediction models for these 5 disorders and thus identify future directions as our field moves toward embracing prediction and prevention. Systematic searches of the entire MEDLINE electronic database were conducted independently by 2 of the authors (from 1960 through 2013) in July 2014 using defined search criteria. Search terms included risk prediction, predictive model, or prediction model combined with depression, bipolar, manic depressive, generalized anxiety, posttraumatic, PTSD, schizophrenia, or psychosis. We identified 268 articles based on the search terms and 3 criteria: published in English, provided empirical data (as opposed to review articles), and presented results pertaining to developing or validating a risk prediction model in which the outcome was the diagnosis of 1 of the 5 aforementioned mental illnesses. We selected 43 original research reports as a final set of articles to be qualitatively reviewed. The 2 independent reviewers abstracted 3 types of data (sample characteristics, variables included in the model, and reported model statistics) and reached consensus regarding any discrepant abstracted information. Twelve reports described models developed for prediction of major depressive disorder, 1 for bipolar disorder, 2 for generalized anxiety disorder, 4 for posttraumatic stress disorder, and 24 for psychotic disorders. Most studies reported on sensitivity, specificity, positive predictive value, negative predictive value, and area under the (receiver operating characteristic) curve. Recent studies demonstrate the feasibility of developing risk prediction models for psychiatric disorders (especially psychotic disorders). The field must now advance by (1) conducting more large…

  1. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald; Liever, Peter; Nielsen, Tanner

    2015-01-01

The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, its subsequent rarefaction, and the propagation of the resulting disturbance from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test, conducted at Marshall Space Flight Center. The test data quantify the effectiveness of the SLS IOP suppression system and improve the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  2. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald K.; Liever, Peter A.

    2015-01-01

The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, its subsequent rarefaction, and the propagation of the resulting disturbance from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantify the effectiveness of the SLS IOP suppression system and improve the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  3. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.

  4. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.
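The Ogata-Banks solution adopted as the data model has a simple closed form that is cheap to evaluate, which is part of its appeal for real-time estimation. A minimal sketch follows; the velocity and dispersion values in the usage note are illustrative, not the EIT site's parameters.

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """1-D advective-dispersive concentration for steady flow (Ogata-Banks):

        C(x,t)/C0 = 0.5 * [ erfc((x - v t) / (2 sqrt(D t)))
                          + exp(v x / D) * erfc((x + v t) / (2 sqrt(D t))) ]

    x: distance from the source, t: time, v: pore velocity,
    D: dispersion coefficient, c0: source concentration.
    """
    denom = 2.0 * math.sqrt(D * t)
    a = (x - v * t) / denom
    b = (x + v * t) / denom
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))
```

For example, with v = 0.5 and D = 0.1, the concentration at a fixed point rises from ~0 before the plume front arrives toward c0 long after it passes, and at a fixed time the profile decreases monotonically with distance. (The `exp` term can overflow for large `v*x/D`; a production implementation would use a scaled erfc.)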

  5. A Kolmogorov-Smirnov Based Test for Comparing the Predictive Accuracy of Two Sets of Forecasts

    Directory of Open Access Journals (Sweden)

    Hossein Hassani

    2015-08-01

This paper introduces a complementary statistical test for distinguishing between the predictive accuracy of two sets of forecasts. We propose a non-parametric test founded upon the principles of the Kolmogorov-Smirnov (KS) test, referred to as the KS Predictive Accuracy (KSPA) test. The KSPA test serves two distinct purposes. First, it seeks to determine whether there exists a statistically significant difference between the distributions of forecast errors; second, it exploits the principles of stochastic dominance to determine whether the forecasts with the lower error also report a stochastically smaller error than forecasts from a competing model, thereby distinguishing between the predictive accuracy of the forecasts. We perform a simulation study of the size and power of the proposed test and report results for different noise distributions, sample sizes and forecasting horizons. The simulation results indicate that the KSPA test is correctly sized and robust to varying forecasting horizons and sample sizes, with significant accuracy gains reported especially in the case of small sample sizes. Real-world applications are also considered to illustrate the applicability of the proposed KSPA test in practice.
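The core ingredient of such a test, the two-sample KS statistic computed on absolute forecast errors, fits in a few lines. This is a simplified two-sided sketch (the actual KSPA test additionally uses one-sided comparisons for the stochastic-dominance step); the 1.36 coefficient is the standard large-sample 5% critical value for the two-sided KS test.

```python
import bisect

def ks_statistic(sample1, sample2):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical gap
    between the two empirical CDFs."""
    s1, s2 = sorted(sample1), sorted(sample2)

    def ecdf(sorted_sample, x):
        # Proportion of observations <= x
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    return max(abs(ecdf(s1, x) - ecdf(s2, x)) for x in s1 + s2)

def kspa_two_sided(errors_a, errors_b, alpha_coeff=1.36):
    """Compare |forecast errors| of two models; reject equality of the error
    distributions when D exceeds the large-sample 5% critical value."""
    abs_a = [abs(e) for e in errors_a]
    abs_b = [abs(e) for e in errors_b]
    d = ks_statistic(abs_a, abs_b)
    n, m = len(abs_a), len(abs_b)
    critical = alpha_coeff * ((n + m) / (n * m)) ** 0.5
    return d, d > critical
```

Working on absolute errors is what lets the test speak to accuracy rather than bias: two unbiased forecasters with different error spreads produce clearly separated |error| distributions.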

  6. Acoustic results of the Boeing model 360 whirl tower test

    Science.gov (United States)

    Watts, Michael E.; Jordan, David

    1990-09-01

An evaluation is presented of whirl-tower test results for the Model 360 helicopter's advanced, high-performance four-bladed composite rotor system, intended to facilitate flight at over 200 knots. During these performance measurements, acoustic data were acquired by seven microphones. A comparison of whirl-tower tests with theory indicates that theoretical prediction accuracies vary with both microphone position and the inclusion of ground reflection. Prediction errors varied from 0 to 40 percent of the measured signal-to-peak amplitude.

  7. Visuospatial Aptitude Testing Differentially Predicts Simulated Surgical Skill.

    Science.gov (United States)

    Hinchcliff, Emily; Green, Isabel; Destephano, Christopher; Cox, Mary; Smink, Douglas; Kumar, Amanika; Hokenstad, Erik; Bengtson, Joan; Cohen, Sarah

    2018-02-05

To determine whether visuospatial perception (VSP) testing correlates with simulated or intraoperative surgical performance as rated by the Accreditation Council for Graduate Medical Education (ACGME) milestones. Classification: II-2. Setting: two academic training institutions. Participants: 41 residents (19 from Brigham and Women's Hospital and 22 from the Mayo Clinic) from three specialties (OBGYN, general surgery, urology). Participants underwent three different tests: VSP testing, the Fundamentals of Laparoscopic Surgery (FLS®) peg transfer, and the DaVinci robotic simulation peg transfer. Surgical grading from the ACGME milestones tool was obtained for each participant. Demographic and background information was also collected, including specialty, year of training, prior experience with simulated skills, and surgical interest. Standard statistical analyses using Student's t tests were performed, and correlations were determined using adjusted linear regression models. In univariate analysis, the BWH and Mayo training programs differed in both times and overall scores for the FLS® peg transfer and the DaVinci robotic simulation peg transfer (p < 0.05 for all). Additionally, type of residency training affected time and overall score on the robotic peg transfer. Familiarity with the tasks correlated with higher scores and faster task completion (p = 0.05 for all except VSP score). There was no difference in VSP scores by program, specialty, or year of training. In adjusted linear regression modeling, VSP testing was correlated only with robotic peg transfer skills (average time p = 0.006, overall score p = 0.001). Milestones did not correlate with either VSP or surgical simulation testing. VSP score was correlated with robotic simulation skills but not with FLS skills or ACGME milestones. This suggests that the ability of VSP score to predict competence differs between tasks. Therefore, further investigation is required into aptitude testing, especially prior…

  8. A consensus approach for estimating the predictive accuracy of dynamic models in biology.

    Science.gov (United States)

    Villaverde, Alejandro F; Bongard, Sophia; Mauch, Klaus; Müller, Dirk; Balsa-Canto, Eva; Schmid, Joachim; Banga, Julio R

    2015-04-01

    Mathematical models that predict the complex dynamic behaviour of cellular networks are fundamental in systems biology, and provide an important basis for biomedical and biotechnological applications. However, obtaining reliable predictions from large-scale dynamic models is commonly a challenging task due to lack of identifiability. The present work addresses this challenge by presenting a methodology for obtaining high-confidence predictions from dynamic models using time-series data. First, to preserve the complex behaviour of the network while reducing the number of estimated parameters, model parameters are combined in sets of meta-parameters, which are obtained from correlations between biochemical reaction rates and between concentrations of the chemical species. Next, an ensemble of models with different parameterizations is constructed and calibrated. Finally, the ensemble is used for assessing the reliability of model predictions by defining a measure of convergence of model outputs (consensus) that is used as an indicator of confidence. We report results of computational tests carried out on a metabolic model of Chinese Hamster Ovary (CHO) cells, which are used for recombinant protein production. Using noisy simulated data, we find that the aggregated ensemble predictions are on average more accurate than the predictions of individual ensemble models. Furthermore, ensemble predictions with high consensus are statistically more accurate than ensemble predictions with large variance. The procedure provides quantitative estimates of the confidence in model predictions and enables the analysis of sufficiently complex networks as required for practical applications. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
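The consensus idea, using agreement across an ensemble of differently parameterized models as a confidence indicator, can be illustrated with a toy aggregator. The squashing function and its scale below are assumptions for illustration, not the paper's definition of consensus.

```python
import statistics

def ensemble_consensus(predictions):
    """Aggregate an ensemble of model output trajectories.

    predictions: list of equal-length trajectories, one per ensemble member.
    Returns (per-point ensemble means, per-point consensus scores in (0, 1]),
    where a score of 1 means all members agree exactly (zero spread).
    """
    n_points = len(predictions[0])
    means, consensus = [], []
    for i in range(n_points):
        vals = [traj[i] for traj in predictions]
        means.append(statistics.fmean(vals))
        spread = statistics.pstdev(vals)
        # Squash the spread into (0, 1]; the scale here is an arbitrary choice
        consensus.append(1.0 / (1.0 + spread))
    return means, consensus
```

High-consensus points are those where model uncertainty (e.g., from non-identifiable parameters) barely affects the output, which is exactly where the abstract reports the ensemble predictions to be statistically more accurate.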

  9. Prediction of moisture variation during composting process: A comparison of mathematical models.

    Science.gov (United States)

    Wang, Yongjiang; Ai, Ping; Cao, Hongliang; Liu, Zhigang

    2015-10-01

This study was carried out to develop and compare three models for simulating the moisture content during composting. Model 1 described changes in water content using mass balance, while Model 2 introduced a liquid-gas transferred water term. Model 3 predicted changes in moisture content without complex degradation kinetics. Average deviations for Models 1-3 were 8.909, 7.422 and 5.374 kg m(-3), while standard deviations were 10.299, 8.374 and 6.095, respectively. The results showed that Model 1 is complex and involves more state variables, but can be used to reveal the effect of humidity on moisture content. Model 2 tested the hypothesis of liquid-gas transfer and was shown to be capable of predicting moisture content during composting. Model 3 could predict water content well without considering degradation kinetics. Copyright © 2015 Elsevier Ltd. All rights reserved.
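A moisture mass balance in the spirit of these models (water generated by organic-matter degradation minus evaporative loss) can be stepped with explicit Euler integration. Every coefficient below is an illustrative placeholder, not a fitted value from the paper.

```python
def simulate_moisture(m0=650.0, hours=240, dt=1.0,
                      k_evap=0.004, m_eq=400.0, deg_rate=0.5, water_yield=0.6):
    """Toy composting moisture balance (kg m^-3 of water; values illustrative).

    dM/dt = water_yield * deg_rate          (water released by degradation)
          - k_evap * (M - m_eq)             (evaporation toward equilibrium)
    """
    m, series, t = m0, [m0], 0.0
    while t < hours:
        dm = water_yield * deg_rate - k_evap * (m - m_eq)
        m += dm * dt
        series.append(m)
        t += dt
    return series
```

With these placeholder values the pile starts wet and dries toward a steady state where degradation water exactly balances evaporation (here m_eq + water_yield * deg_rate / k_evap = 475 kg m^-3).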

  10. Predictive modeling of coral disease distribution within a reef system.

    Directory of Open Access Journals (Sweden)

    Gareth J Williams

    2010-02-01

Diseases often display complex and distinct associations with their environment due to differences in etiology, modes of transmission between hosts, and the shifting balance between pathogen virulence and host resistance. Statistical modeling has been underutilized in coral disease research to explore the spatial patterns that result from this triad of interactions. We tested the hypotheses that: (1) coral diseases show distinct associations with multiple environmental factors; (2) incorporating interactions (synergistic collinearities) among environmental variables is important when predicting coral disease spatial patterns; and (3) modeling overall coral disease prevalence (the prevalence of multiple diseases as a single proportion value) will increase predictive error relative to modeling the same diseases independently. Four coral diseases: Porites growth anomalies (PorGA), Porites tissue loss (PorTL), Porites trematodiasis (PorTrem), and Montipora white syndrome (MWS), and their interactions with 17 predictor variables were modeled using boosted regression trees (BRT) within a reef system in Hawaii. Each disease showed distinct associations with the predictors. The environmental predictors showing the strongest overall associations with the coral diseases were both biotic and abiotic. PorGA was optimally predicted by a negative association with turbidity, PorTL and MWS by declines in butterflyfish and juvenile parrotfish abundance respectively, and PorTrem by a modal relationship with Porites host cover. Incorporating interactions among predictor variables contributed to the predictive power of our models, particularly for PorTrem. Combining diseases (using overall disease prevalence as the model response) led to an average six-fold increase in cross-validation predictive deviance over modeling the diseases individually. We therefore recommend that coral diseases be modeled separately, unless known to have etiologies that respond in a similar manner to…

  11. Collaborative testing of turbulence models

    Science.gov (United States)

    Bradshaw, P.

    1992-12-01

This project, funded by AFOSR, ARO, NASA, and ONR, was run by the writer with Profs. Brian E. Launder, University of Manchester, England, and John L. Lumley, Cornell University. Statistical data on turbulent flows, from laboratory experiments and simulations, were circulated to modelers throughout the world. This is the first large-scale project of its kind to use simulation data. The modelers returned their predictions to Stanford, for distribution to all modelers and to additional participants ('experimenters'), over 100 in all. The object was to obtain a consensus on the capabilities of present-day turbulence models and to identify which types most deserve future support. This was not completely achieved, mainly because not enough modelers could produce results for enough test cases within the duration of the project. However, a clear picture of the capabilities of various modeling groups has emerged, and the interaction has been helpful to the modelers. The results support the view that Reynolds-stress transport models are the most accurate.

  12. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

…analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a "model supplement term" when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapon's response. The validation analysis indicates that the model tends to "exaggerate" the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model

  13. Conducting field studies for testing pesticide leaching models

    Science.gov (United States)

    Smith, Charles N.; Parrish, Rudolph S.; Brown, David S.

    1990-01-01

A variety of predictive models are being applied to evaluate the transport and transformation of pesticides in the environment. These include well known models such as the Pesticide Root Zone Model (PRZM), the Risk of Unsaturated-Saturated Transport and Transformation Interactions for Chemical Concentrations Model (RUSTIC) and the Groundwater Loading Effects of Agricultural Management Systems Model (GLEAMS). The potentially large impacts of using these models as tools for developing pesticide management strategies and regulatory decisions necessitate development of sound model validation protocols. This paper offers guidance on many of the theoretical and practical problems encountered in the design and implementation of field-scale model validation studies. Recommendations are provided for site selection and characterization, test compound selection, data needs, measurement techniques, statistical design considerations and sampling techniques. A strategy is provided for quantitatively testing models using field measurements.

  14. Confronting species distribution model predictions with species functional traits.

    Science.gov (United States)

    Wittmann, Marion E; Barnes, Matthew A; Jerde, Christopher L; Jones, Lisa A; Lodge, David M

    2016-02-01

    Species distribution models are valuable tools in studies of biogeography, ecology, and climate change and have been used to inform conservation and ecosystem management. However, species distribution models typically incorporate only climatic variables and species presence data. Model development or validation rarely considers functional components of species traits or other types of biological data. We implemented a species distribution model (Maxent) to predict global climate habitat suitability for Grass Carp (Ctenopharyngodon idella). We then tested the relationship between the degree of climate habitat suitability predicted by Maxent and the individual growth rates of both wild (N = 17) and stocked (N = 51) Grass Carp populations using correlation analysis. The Grass Carp Maxent model accurately reflected the global occurrence data (AUC = 0.904). Observations of Grass Carp growth rate covered six continents and ranged from 0.19 to 20.1 g day(-1). Species distribution model predictions were correlated (r = 0.5, 95% CI (0.03, 0.79)) with observed growth rates for wild Grass Carp populations but were not correlated (r = -0.26, 95% CI (-0.5, 0.012)) with stocked populations. Further, a review of the literature indicates that the few studies for other species that have previously assessed the relationship between the degree of predicted climate habitat suitability and species functional traits have also discovered significant relationships. Thus, species distribution models may provide inferences beyond just where a species may occur, providing a useful tool to understand the linkage between species distributions and underlying biological mechanisms.
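The reported interval for the wild populations (r = 0.5, 95% CI (0.03, 0.79), n = 17) is reproduced by a Fisher z-transform confidence interval. The transform is an assumption about the method used, but the numbers match the abstract's interval to two decimal places.

```python
import math

def pearson_ci(r, n, z_crit=1.96):
    """95% confidence interval for a Pearson correlation via the Fisher
    z-transform: z = atanh(r), SE = 1/sqrt(n - 3)."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = pearson_ci(0.5, 17)
# lo ≈ 0.03, hi ≈ 0.79 — matching the interval reported for wild populations
```

The wide interval illustrates why the abstract's correlation, while significant, rests on only 17 wild populations: the lower bound barely clears zero.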

  15. A prediction model for spontaneous regression of cervical intraepithelial neoplasia grade 2, based on simple clinical parameters.

    Science.gov (United States)

    Koeneman, Margot M; van Lint, Freyja H M; van Kuijk, Sander M J; Smits, Luc J M; Kooreman, Loes F S; Kruitwagen, Roy F P M; Kruse, Arnold J

    2017-01-01

    This study aims to develop a prediction model for spontaneous regression of cervical intraepithelial neoplasia grade 2 (CIN 2) lesions based on simple clinicopathological parameters. The study was conducted at Maastricht University Medical Center, the Netherlands. The prediction model was developed in a retrospective cohort of 129 women with a histologic diagnosis of CIN 2 who were managed by watchful waiting for 6 to 24 months. Five potential predictors for spontaneous regression were selected based on the literature and expert opinion and were analyzed in a multivariable logistic regression model, followed by backward stepwise deletion based on the Wald test. The prediction model was internally validated by the bootstrapping method. Discriminative capacity and accuracy were tested by assessing the area under the receiver operating characteristic curve (AUC) and a calibration plot. Disease regression within 24 months was seen in 91 (71%) of 129 patients. A prediction model was developed including the following variables: smoking, Papanicolaou test outcome before the CIN 2 diagnosis, concomitant CIN 1 diagnosis in the same biopsy, and more than 1 biopsy containing CIN 2. Not smoking and a favorable Papanicolaou test outcome were among the factors predictive of disease regression. The AUC was 69.2% (95% confidence interval, 58.5%-79.9%), indicating a moderate discriminative ability of the model. The calibration plot indicated good calibration of the predicted probabilities. This prediction model for spontaneous regression of CIN 2 may aid physicians in the personalized management of these lesions. Copyright © 2016 Elsevier Inc. All rights reserved.
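
    The internal-validation workflow described (apparent AUC plus bootstrap resampling) can be sketched in pure Python. The cohort below is invented for illustration; the study's data and model differ:

```python
import random

def auc(labels, scores):
    """AUC via the Mann-Whitney formulation: P(score_pos > score_neg)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=500, seed=7):
    """Percentile bootstrap interval for the apparent AUC."""
    rng = random.Random(seed)
    n, stats = len(labels), []
    while len(stats) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < n:               # resample must contain both classes
            stats.append(auc(ys, [scores[i] for i in idx]))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot) - 1]

# Hypothetical regression outcomes (1 = regression) and predicted probabilities
y = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
p = [0.9, 0.8, 0.7, 0.6, 0.75, 0.3, 0.85, 0.65, 0.4, 0.7,
     0.55, 0.6, 0.9, 0.2, 0.8, 0.45]
print(auc(y, p), bootstrap_auc_ci(y, p))
```

    A full internal validation would also refit the model inside each bootstrap resample to estimate optimism; the sketch above only resamples the apparent AUC.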

  16. Life Prediction on a T700 Carbon Fiber Reinforced Cylinder with Limited Accelerated Life Testing Data

    Directory of Open Access Journals (Sweden)

    Ma Xiaobing

    2015-01-01

    Full Text Available An accelerated life testing investigation was conducted on a composite cylinder consisting of an aluminum alloy liner overwrapped with T700 carbon fiber. Ultimate failure stress predictions for the cylinders were obtained by the mixing rule and verified by the blasting static pressure method. Based on the stress prediction for the cylinder under working conditions, a constant-stress accelerated life test of the cylinder was designed. However, sufficient failure data could not be obtained from the accelerated life test because of time limitations: most of the data were heavily censored at the high stress levels, with only zero-failure data at the low stress level. Rupture life predictions made with the traditional method therefore had low confidence. In this study, the consistency of the failure mechanism between the carbon fiber and the cylinder was analyzed first. Based on this analysis, the statistical test information for the carbon fiber could be used to construct the acceleration model. A rupture life prediction method for the cylinder was then proposed based on the accelerated life test data together with the carbon fiber test data. In this way, the life prediction accuracy for the cylinder could be improved markedly, and the results showed that the accuracy of the method increased by 35%.

  17. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
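
    The model comparison can be illustrated on a synthetic catalog. The numbers below are invented: intervals and slips vary around fixed means and are statistically independent, so the fixed-recurrence prediction beats a time-predictable one that scales the next interval by the preceding slip:

```python
from statistics import mean

# Hypothetical repeating-earthquake catalog (not the study's data):
# slips[i] is the slip (m) of event i; intervals[i] is the interval (yr)
# that follows event i.
slips     = [1.0, 1.2, 0.9, 1.1, 0.95, 1.05, 1.15, 0.85]
intervals = [10.1, 9.8, 10.2, 9.9, 10.0, 10.1, 9.9, 10.0]

# Fixed-recurrence model: every interval is predicted as the catalog mean.
pred_fixed = [mean(intervals)] * len(intervals)
# Time-predictable model: interval after event i is proportional to slip i.
k = mean(intervals) / mean(slips)
pred_tp = [k * s for s in slips]

rms = lambda errs: (sum(e * e for e in errs) / len(errs)) ** 0.5
rms_fixed = rms([o - p for o, p in zip(intervals, pred_fixed)])
rms_tp    = rms([o - p for o, p in zip(intervals, pred_tp)])
print(rms_fixed, rms_tp)   # fixed recurrence wins when slip carries no signal
```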

  18. Composite control for raymond mill based on model predictive control and disturbance observer

    Directory of Open Access Journals (Sweden)

    Dan Niu

    2016-03-01

    Full Text Available In the Raymond mill grinding process, precise control of the operating load is vital for high product quality. However, strong external disturbances, such as variations in ore size and ore hardness, usually cause great performance degradation, making it difficult to hold the mill current constant. Several control strategies have been proposed; however, most of them (such as proportional-integral-derivative and model predictive control) reject disturbances only through feedback regulation, which may lead to poor control performance in the presence of strong disturbances. To improve disturbance rejection, a control method based on model predictive control and a disturbance observer is put forward in this article. The scheme employs the disturbance observer as feedforward compensation and the model predictive control controller as feedback regulation. The test results illustrate that, compared with the model predictive control method alone, the proposed disturbance observer-model predictive control method achieves significantly better disturbance rejection, such as shorter settling time and smaller peak overshoot under strong disturbances.
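
    A horizon-one, unconstrained sketch of the composite idea on a scalar plant (all numbers hypothetical; a real Raymond-mill model would be higher order and constrained): the observer attributes the mismatch between the model prediction and the measured state to an unknown load disturbance, and the controller cancels the estimate by feedforward.

```python
# Scalar plant x[k+1] = a*x[k] + b*(u[k] + d), with d an unknown constant
# load disturbance. Hypothetical parameters, not identified from a real mill.
a, b, d = 0.9, 0.5, 2.0
r = 1.0                      # setpoint (stand-in for a mill-current target)
x, d_hat = 0.0, 0.0
for k in range(50):
    # Horizon-1 "MPC": choose u so the one-step model prediction hits the
    # setpoint, with the estimated disturbance cancelled by feedforward.
    u = (r - a * x) / b - d_hat
    x_pred = a * x + b * (u + d_hat)   # model prediction (uses the estimate)
    x = a * x + b * (u + d)            # true plant update
    d_hat += 0.5 * (x - x_pred) / b    # observer: assign mismatch to d
print(round(x, 3), round(d_hat, 3))
```

    The estimation error halves each step here, so the state settles on the setpoint despite the unmeasured disturbance; pure feedback would carry a persistent transient instead.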

  19. Improved prediction of genetic predisposition to psychiatric disorders using genomic feature best linear unbiased prediction models

    DEFF Research Database (Denmark)

    Rohde, Palle Duun; Demontis, Ditte; Børglum, Anders

    is enriched for causal variants. Here we apply the GFBLUP model to a small schizophrenia case-control study to test the promise of this model on psychiatric disorders, and hypothesize that the performance will be increased when applying the model to a larger ADHD case-control study if the genomic feature...... contains the causal variants. Materials and Methods: The schizophrenia study consisted of 882 controls and 888 schizophrenia cases genotyped for 520,000 SNPs. The ADHD study contained 25,954 controls and 16,663 ADHD cases with 8.4 million imputed genotypes. Results: The predictive ability for schizophrenia.......6% for the null model). Conclusion: The improvement in predictive ability for schizophrenia was marginal; however, greater improvement is expected for the larger ADHD data....

  20. Predictive models reduce talent development costs in female gymnastics.

    Science.gov (United States)

    Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle

    2017-04-01

    This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures, and at the same time reduce talent development costs, in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years after talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based upon the different statistical analyses results in a 33.3% reduction in costs, because the pool of selected athletes can be reduced to 92 instead of 138 gymnasts (as selected by the coaches). This reduction allows the limited resources to be fully invested in the high-potential athletes.

  1. A polynomial based model for cell fate prediction in human diseases.

    Science.gov (United States)

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from the Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e., correlation-based and apoptosis-pathway-based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, within both of the considered gene selection methods, the prediction accuracies of polynomials of different degrees differ little. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than the others. When comparing the linear polynomials based on the two gene selection methods, the accuracy of the linear polynomial based on correlation analysis is slightly higher (86.62%), but the one based on genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is a preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical study of cell development related diseases.
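
    The degree-comparison step can be sketched with ordinary least squares and leave-one-out cross-validation on invented data (real use would involve the selected gene-expression features, not these placeholder values):

```python
def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via normal equations (tiny Gaussian elim.)."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    v = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                          # forward elimination
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            A[row] = [a - f * c for a, c in zip(A[row], A[col])]
            v[row] -= f * v[col]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):                # back substitution
        coef[i] = (v[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, n))) / A[i][i]
    return coef                                   # coef[i] multiplies x**i

def predict(coef, x):
    return sum(c * x ** i for i, c in enumerate(coef))

def loocv_mse(xs, ys, deg):
    """Leave-one-out cross-validated mean squared error for a given degree."""
    errs = []
    for i in range(len(xs)):
        c = polyfit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:], deg)
        errs.append((ys[i] - predict(c, xs[i])) ** 2)
    return sum(errs) / len(errs)

# Hypothetical feature-vs-fate data with a linear trend plus noise
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
ys = [1.1, 1.9, 3.2, 3.8, 5.1, 5.9, 7.2, 7.8, 9.1, 9.9]
print(loocv_mse(xs, ys, 1), loocv_mse(xs, ys, 3))
```

    Comparing the cross-validated errors of the two degrees, rather than their in-sample fits, is what guards against the higher-degree polynomial's extra flexibility.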

  2. Field-based tests of geochemical modeling codes: New Zealand hydrothermal systems

    International Nuclear Information System (INIS)

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1993-12-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal fields suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions.

  3. The prediction of creep damage in Type 347 weld metal: part II creep fatigue tests

    International Nuclear Information System (INIS)

    Spindler, M.W.

    2005-01-01

    Calculations of creep damage under conditions of strain control are often carried out using either a time fraction approach or a ductility exhaustion approach. In part I of this paper the rupture strength and creep ductility data for a Type 347 weld metal were fitted to provide the material properties that are used to calculate creep damage. Part II of this paper examines whether the time fraction approach or the ductility exhaustion approach gives the better predictions of creep damage in creep-fatigue tests on the same Type 347 weld metal. In addition, a new creep damage model, which was developed by removing some of the simplifying assumptions that are made in the ductility exhaustion approach, was used. This new creep damage model is a function of the strain rate, stress and temperature and was derived from creep and constant strain rate test data using a reverse modelling technique (see part I of this paper). It is shown that the new creep damage model gives better predictions of creep damage in the creep-fatigue tests than the time fraction and the ductility exhaustion approaches.

  4. Predicting hyperketonemia by logistic and linear regression using test-day milk and performance variables in early-lactation Holstein and Jersey cows.

    Science.gov (United States)

    Chandler, T L; Pralle, R S; Dórea, J R R; Poock, S E; Oetzel, G R; Fourdraine, R H; White, H M

    2018-03-01

    Although cowside testing strategies for diagnosing hyperketonemia (HYK) are available, many are labor intensive and costly, and some lack sufficient accuracy. Predicting milk ketone bodies by Fourier transform infrared spectrometry during routine milk sampling may offer a more practical monitoring strategy. The objectives of this study were to (1) develop linear and logistic regression models using all available test-day milk and performance variables for predicting HYK and (2) compare prediction methods (Fourier transform infrared milk ketone bodies, linear regression models, and logistic regression models) to determine which is the most predictive of HYK. Given the data available, a secondary objective was to evaluate differences in test-day milk and performance variables (continuous measurements) between Holsteins and Jerseys and between cows with or without HYK within breed. Blood samples were collected on the same day as milk sampling from 658 Holstein and 468 Jersey cows between 5 and 20 d in milk (DIM). Diagnosis of HYK was at a serum β-hydroxybutyrate (BHB) concentration ≥1.2 mmol/L. Concentrations of milk BHB and acetone were predicted by Fourier transform infrared spectrometry (Foss Analytical, Hillerød, Denmark). Thresholds of milk BHB and acetone were tested for diagnostic accuracy, and logistic models were built from continuous variables to predict HYK in primiparous and multiparous cows within breed. Linear models were constructed from continuous variables for primiparous and multiparous cows within breed that were 5 to 11 DIM or 12 to 20 DIM. Milk ketone body thresholds diagnosed HYK with 64.0 to 92.9% accuracy in Holsteins and 59.1 to 86.6% accuracy in Jerseys. Logistic models predicted HYK with 82.6 to 97.3% accuracy. Internally cross-validated multiple linear regression models diagnosed HYK of Holstein cows with 97.8% accuracy for primiparous and 83.3% accuracy for multiparous cows. Accuracy of Jersey models was 81.3% in primiparous and 83
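
    The threshold-accuracy evaluation described above reduces to confusion-matrix counts. A sketch with invented milk BHB values and serum-confirmed status (the study's cutoffs and data differ):

```python
def diagnostic_metrics(test_vals, truth, cutoff):
    """Sensitivity, specificity and accuracy of a 'value >= cutoff' test."""
    tp = sum(1 for v, t in zip(test_vals, truth) if v >= cutoff and t)
    fp = sum(1 for v, t in zip(test_vals, truth) if v >= cutoff and not t)
    fn = sum(1 for v, t in zip(test_vals, truth) if v < cutoff and t)
    tn = sum(1 for v, t in zip(test_vals, truth) if v < cutoff and not t)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / len(truth)
    return sens, spec, acc

# Hypothetical milk BHB (mmol/L) and serum-confirmed HYK status (True/False);
# the 0.15 cutoff below is illustrative, not the study's threshold.
milk_bhb = [0.05, 0.08, 0.10, 0.15, 0.22, 0.06, 0.18, 0.25, 0.09, 0.30]
hyk      = [False, False, False, True, True, False, False, True, False, True]
sens, spec, acc = diagnostic_metrics(milk_bhb, hyk, cutoff=0.15)
print(sens, spec, acc)
```

    Sweeping the cutoff and recording these metrics at each value is how a diagnostic threshold like those reported for milk BHB and acetone is chosen.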

  5. Predicting carcinogenicity of diverse chemicals using probabilistic neural network modeling approaches

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Kunwar P., E-mail: kpsingh_52@yahoo.com [Academy of Scientific and Innovative Research, Council of Scientific and Industrial Research, New Delhi (India); Environmental Chemistry Division, CSIR-Indian Institute of Toxicology Research, Post Box 80, Mahatma Gandhi Marg, Lucknow 226 001 (India); Gupta, Shikha; Rai, Premanjali [Academy of Scientific and Innovative Research, Council of Scientific and Industrial Research, New Delhi (India); Environmental Chemistry Division, CSIR-Indian Institute of Toxicology Research, Post Box 80, Mahatma Gandhi Marg, Lucknow 226 001 (India)

    2013-10-15

    abilities of the interspecies GRNN model to predict the carcinogenic potency of diverse chemicals. - Highlights: • Global robust models constructed for carcinogenicity prediction of diverse chemicals. • Tanimoto/BDS test revealed structural diversity of chemicals and nonlinearity in data. • PNN/GRNN successfully predicted carcinogenicity/carcinogenic potency of chemicals. • Developed interspecies PNN/GRNN models for carcinogenicity prediction. • Proposed models can be used as tool to predict carcinogenicity of new chemicals.

  6. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. In recent years, however, improvements to prediction methods have been modest, and the traditional statistical prediction methods suffer from low precision and poor interpretability: they can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industries that produce large cargo volumes and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that can affect regional logistics requirements, this study established a logistics requirements potential model based on spatial economic principles, expanding logistics requirements prediction from purely statistical principles into the new area of spatial and regional economics.

  7. Computer models versus reality: how well do in silico models currently predict the sensitization potential of a substance.

    Science.gov (United States)

    Teubner, Wera; Mehling, Anette; Schuster, Paul Xaver; Guth, Katharina; Worth, Andrew; Burton, Julien; van Ravenzwaay, Bennard; Landsiedel, Robert

    2013-12-01

    National legislations for the assessment of the skin sensitization potential of chemicals are increasingly based on the globally harmonized system (GHS). In this study, experimental data on 55 non-sensitizing and 45 sensitizing chemicals were evaluated according to GHS criteria and used to test the performance of computer (in silico) models for the prediction of skin sensitization. Statistical models (Vega, Case Ultra, TOPKAT), mechanistic models (Toxtree, OECD (Q)SAR toolbox, DEREK) and a hybrid model (TIMES-SS) were evaluated. Between three and nine of the substances evaluated were found in the individual training sets of the various models. Mechanism-based models performed better than statistical models, with predictivity depending on the stringency of the domain definition. Best performance was achieved by TIMES-SS, with perfect prediction, although only 16% of the substances were within its reliability domain. Some models offer modules for potency; however, predictions did not correlate well with the GHS sensitization subcategory derived from the experimental data. In conclusion, although mechanistic models can be used to a certain degree under well-defined conditions, at present the in silico models are not sufficiently accurate for broad application to predict skin sensitization potentials. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Not just the norm: exemplar-based models also predict face aftereffects.

    Science.gov (United States)

    Ross, David A; Deroche, Mickael; Palmeri, Thomas J

    2014-02-01

    The face recognition literature has considered two competing accounts of how faces are represented within the visual system: Exemplar-based models assume that faces are represented via their similarity to exemplars of previously experienced faces, while norm-based models assume that faces are represented with respect to their deviation from an average face, or norm. Face identity aftereffects have been taken as compelling evidence in favor of a norm-based account over an exemplar-based account. After a relatively brief period of adaptation to an adaptor face, the perceived identity of a test face is shifted toward a face with attributes opposite to those of the adaptor, suggesting an explicit psychological representation of the norm. Surprisingly, despite near universal recognition that face identity aftereffects imply norm-based coding, there have been no published attempts to simulate the predictions of norm- and exemplar-based models in face adaptation paradigms. Here, we implemented and tested variations of norm and exemplar models. Contrary to common claims, our simulations revealed that both an exemplar-based model and a version of a two-pool norm-based model, but not a traditional norm-based model, predict face identity aftereffects following face adaptation.

  9. Artificial neural network models for prediction of intestinal permeability of oligopeptides

    Directory of Open Access Journals (Sweden)

    Kim Min-Kook

    2007-07-01

    Full Text Available Abstract Background Oral delivery is a highly desirable property for candidate drugs under development. Computational modeling could provide a quick and inexpensive way to assess the intestinal permeability of a molecule. Although there have been several studies aimed at predicting the intestinal absorption of chemical compounds, there have been no attempts to predict intestinal permeability on the basis of peptide sequence information. To develop models for predicting the intestinal permeability of peptides, we adopted an artificial neural network as a machine-learning algorithm. The positive control data consisted of intestinal barrier-permeable peptides obtained by the peroral phage display technique, and the negative control data were prepared from random sequences. Results The capacity of our models to make appropriate predictions was validated by statistical indicators including sensitivity, specificity, enrichment curve, and the area under the receiver operating characteristic (ROC curve (the ROC score. The training and test set statistics indicated that our models were of strikingly good quality and could discriminate between permeable and random sequences with a high level of confidence. Conclusion We developed artificial neural network models to predict the intestinal permeabilities of oligopeptides on the basis of peptide sequence information. Both binary and VHSE (principal components score Vectors of Hydrophobic, Steric and Electronic properties descriptors produced statistically significant training models; the models with simple neural network architectures showed slightly greater predictive power than those with complex ones. We anticipate that our models will be applicable to the selection of intestinal barrier-permeable peptides for generating peptide drugs or peptidomimetics.

  10. Using predictive uncertainty analysis to optimise tracer test design and data acquisition

    Science.gov (United States)

    Wallis, Ilka; Moore, Catherine; Post, Vincent; Wolf, Leif; Martens, Evelien; Prommer, Henning

    2014-07-01

    Tracer injection tests are regularly-used tools to identify and characterise flow and transport mechanisms in aquifers. Examples of practical applications are manifold and include, among others, managed aquifer recharge schemes, aquifer thermal energy storage systems and, increasingly important, the disposal of produced water from oil and shale gas wells. The hydrogeological and geochemical data collected during the injection tests are often employed to assess the potential impacts of injection on receptors such as drinking water wells and regularly serve as a basis for the development of conceptual and numerical models that underpin the prediction of potential impacts. As all field tracer injection tests impose substantial logistical and financial efforts, it is crucial to develop a solid a priori understanding of the value of the various monitoring data to select monitoring strategies which provide the greatest return on investment. In this study, we demonstrate the ability of linear predictive uncertainty analysis (i.e. “data worth analysis”) to quantify the usefulness of different tracer types (bromide, temperature, methane and chloride as examples) and head measurements in the context of a field-scale aquifer injection trial of coal seam gas (CSG) co-produced water. Data worth was evaluated in terms of tracer type, in terms of tracer test design (e.g., injection rate, duration of test and the applied measurement frequency) and monitoring disposition to increase the reliability of injection impact assessments. This was followed by an uncertainty targeted Pareto analysis, which allowed the interdependencies of cost and predictive reliability for alternative monitoring campaigns to be compared directly. For the evaluated injection test, the data worth analysis assessed bromide as superior to head data and all other tracers during early sampling times. However, with time, chloride became a more suitable tracer to constrain simulations of physical transport

  11. Long-term orbit prediction for Tiangong-1 spacecraft using the mean atmosphere model

    Science.gov (United States)

    Tang, Jingshi; Liu, Lin; Cheng, Haowen; Hu, Songjie; Duan, Jianfeng

    2015-03-01

    China is planning to complete its first space station by 2020. For long-term management and maintenance, the orbit of the space station needs to be predicted over a long period of time. Since the space station is expected to work in a low-Earth orbit, the error in the a priori atmosphere model contributes significantly to the rapid growth of the predicted orbit error. When the orbit is predicted for 20 days, the error in the a priori atmosphere model, if not properly corrected, could induce a semi-major axis error of up to a few kilometers and an overall position error of several thousand kilometers. In this work, we use a mean atmosphere model averaged from NRLMSISE00. The a priori reference mean density can be corrected during the orbit determination. For the long-term orbit prediction, we use a sufficiently long period of observations and obtain a series of diurnal mean densities. This series contains the recent variation of the atmosphere density and can be analyzed for various periodic components. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. Here we carry out the test with China's Tiangong-1 spacecraft at an altitude of about 340 km and we show that this method is simple and flexible. The densities predicted with this approach can serve in the long-term orbit prediction. In several 20-day prediction tests, most predicted orbits show semi-major axis errors smaller than 700 m and overall position errors smaller than 400 km.
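
    The fit-and-extrapolate step for the mean density series can be sketched as a least-squares fit of a mean plus one harmonic. The noise-free synthetic series below, with an assumed 27-day solar-rotation period, stands in for real NRLMSISE00-derived diurnal mean densities:

```python
import math

def fit_mean_plus_harmonic(ts, ys, period):
    """Least-squares fit of y ~ c0 + c1*sin(w*t) + c2*cos(w*t), linear in c."""
    w = 2 * math.pi / period
    rows = [[1.0, math.sin(w * t), math.cos(w * t)] for t in ts]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    v = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    for col in range(3):                     # tiny Gaussian elimination
        for row in range(col + 1, 3):
            f = A[row][col] / A[col][col]
            A[row] = [a - f * c for a, c in zip(A[row], A[col])]
            v[row] -= f * v[col]
    c = [0.0] * 3
    for i in range(2, -1, -1):
        c[i] = (v[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c, w

# Synthetic diurnal-mean density series (arbitrary units) over 60 days with
# a 27-day modulation; the coefficients 3.0, 0.4, 0.1 are invented.
ts = list(range(60))
ys = [3.0 + 0.4 * math.sin(2 * math.pi * t / 27.0)
      + 0.1 * math.cos(2 * math.pi * t / 27.0) for t in ts]
c, w = fit_mean_plus_harmonic(ts, ys, 27.0)
forecast = c[0] + c[1] * math.sin(w * 80) + c[2] * math.cos(w * 80)  # day 80
```

    A real density series would carry several periodic components plus noise, so the fit would include more harmonics and the extrapolation horizon would be limited; the linear-least-squares structure is the same.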

  12. An Empirical Test of a Model of Resistance to Persuasion.

    Science.gov (United States)

    Burgoon, Michael; And Others

    1978-01-01

    Tests a model of resistance to persuasion based upon variables not considered by earlier congruity and inoculation models. Supports the prediction that the kind of critical response set induced and the target of the criticism are mediators of resistance to persuasion. (JMF)

  13. Prediction of persistent hemodynamic depression after carotid angioplasty and stenting using artificial neural network model.

    Science.gov (United States)

    Jeon, Jin Pyeong; Kim, Chulho; Oh, Byoung-Doo; Kim, Sun Jeong; Kim, Yu-Seop

    2018-01-01

    To assess and compare predictive factors for persistent hemodynamic depression (PHD) after carotid artery angioplasty and stenting (CAS) using artificial neural network (ANN), multiple logistic regression (MLR), and support vector machine (SVM) models. A retrospective data set of patients (n=76) who underwent CAS from 2007 to 2014 was used as input (training cohort) to a back-propagation ANN using the TensorFlow platform. PHD was defined as systolic blood pressure less than 90 mmHg or heart rate less than 50 beats/min lasting for more than one hour. The resulting ANN was prospectively tested in 33 patients (test cohort) and compared with the MLR and SVM models according to accuracy and receiver operating characteristic (ROC) curve analysis. No significant difference in baseline characteristics between the training cohort and the test cohort was observed. PHD was observed in 21 (27.6%) patients in the training cohort and 10 (30.3%) patients in the test cohort. In the training cohort, the accuracy of the ANN for the prediction of PHD was 98.7% and the area under the ROC curve (AUROC) was 0.961. In the test cohort, the number of correctly classified instances was 32 (97.0%) using the ANN model. In contrast, the accuracy rate of the MLR and SVM models was 75.8% in both cases. The ANN (AUROC: 0.950; 95% CI [confidence interval]: 0.813-0.996) showed superior predictive performance compared to the MLR model (AUROC: 0.796; 95% CI: 0.620-0.915, p<0.001) and the SVM model (AUROC: 0.885; 95% CI: 0.725-0.969, p<0.001). The ANN model seems to have more powerful prediction capabilities than the MLR or SVM model for persistent hemodynamic depression after CAS. External validation with a large cohort is needed to confirm our results. Copyright © 2017. Published by Elsevier B.V.

  14. The prediction of intelligence in preschool children using alternative models to regression.

    Science.gov (United States)

    Finch, W Holmes; Chang, Mei; Davis, Andrew S; Holden, Jocelyn E; Rothlisberg, Barbara A; McIntosh, David E

    2011-12-01

    Statistical prediction of an outcome variable using multiple independent variables is a common practice in the social and behavioral sciences. For example, neuropsychologists are sometimes called upon to provide predictions of preinjury cognitive functioning for individuals who have suffered a traumatic brain injury. Typically, these predictions are made using standard multiple linear regression models with several demographic variables (e.g., gender, ethnicity, education level) as predictors. Prior research has shown conflicting evidence regarding the ability of such models to provide accurate predictions of outcome variables such as full-scale intelligence (FSIQ) test scores. The present study had two goals: (1) to demonstrate the utility of a set of alternative prediction methods that have been applied extensively in the natural sciences and business but have not been frequently explored in the social sciences and (2) to develop models that can be used to predict premorbid cognitive functioning in preschool children. Predictions of Stanford-Binet 5 FSIQ scores for preschool-aged children are used to compare the performance of a multiple regression model with several of these alternative methods. Results demonstrate that classification and regression trees provided more accurate predictions of FSIQ scores than does the more traditional regression approach. Implications of these results are discussed.
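
    A depth-1 regression tree (a "stump") illustrates why trees can beat a straight line when the outcome steps rather than trends. The predictor and outcome values below are invented placeholders, not the study's data:

```python
def stump_fit(xs, ys):
    """Depth-1 regression tree: pick the split minimizing total squared error."""
    pairs = sorted(zip(xs, ys))
    best = None
    for i in range(1, len(pairs)):
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2   # midpoint threshold
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    return best    # (sse, threshold, left-branch mean, right-branch mean)

# Hypothetical predictor (e.g. an education index) vs an FSIQ-like outcome
# that jumps rather than rises linearly: a split captures this exactly,
# while a single fitted line smears the jump across the whole range.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [10, 10, 11, 10, 20, 21, 20, 20]
sse, thr, ml, mr = stump_fit(xs, ys)
print(thr, ml, mr, sse)
```

    Classification and regression trees (CART) grow such splits recursively and prune by cross-validation; the stump is the single-split core of that procedure.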

  15. Applying model predictive control to power system frequency control

    OpenAIRE

    Ersdal, AM; Imsland, L; Cecilio, IM; Fabozzi, D; Thornhill, NF

    2013-01-01

    Model predictive control (MPC) is investigated as a control method which may offer advantages over the control methods applied today for frequency control of power systems, especially in the presence of increased renewable energy penetration. The MPC includes constraints on both generation amount and generation rate of change, and it is tested on a one-area system. The proposed MPC is tested against a conventional proportional-integral (PI) cont...
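
    A hedged sketch of the receding-horizon idea with constraints on both the input amount and its rate of change, as the record describes; the one-area model, numbers, and brute-force optimizer below are invented for illustration and are far simpler than the paper's MPC:

```python
import itertools
import numpy as np

# Toy one-area model (illustrative numbers, not from the paper):
# x = frequency deviation, u = generation change command.
a, b = 0.9, 0.5
u_max, du_max = 1.0, 0.4        # limits on amount and rate of change
horizon = 3
u_grid = np.linspace(-u_max, u_max, 9)

def mpc_step(x, u_prev):
    """Receding-horizon control by exhaustive search over a small input
    grid, enforcing both input and input-rate constraints."""
    best_cost, best_u = None, u_prev
    for seq in itertools.product(u_grid, repeat=horizon):
        prev, xs, cost, ok = u_prev, x, 0.0, True
        for u in seq:
            if abs(u - prev) > du_max + 1e-9:   # rate-of-change limit
                ok = False
                break
            xs = a * xs + b * u
            cost += xs**2 + 0.01 * u**2
            prev = u
        if ok and (best_cost is None or cost < best_cost):
            best_cost, best_u = cost, seq[0]    # apply first move only
    return best_u

x, u = 2.0, 0.0
for _ in range(10):
    u = mpc_step(x, u)
    x = a * x + b * u
print(round(float(x), 3))  # deviation regulated toward zero
```

    A practical MPC would solve a quadratic program instead of enumerating a grid, but the structure (plan over a horizon, apply the first move, re-plan) is the same.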

  16. Accuracy of some simple models for predicting particulate interception and retention in agricultural systems

    International Nuclear Information System (INIS)

    Pinder, J.E. III; McLeod, K.W.; Adriano, D.C.

    1989-01-01

    The accuracy of three radionuclide transfer models for predicting the interception and retention of airborne particles by agricultural crops was tested using Pu-bearing aerosols released to the atmosphere from nuclear fuel facilities on the U.S. Department of Energy's Savannah River Plant, near Aiken, SC. The models evaluated were: (1) NRC, the model defined in U.S. Nuclear Regulatory Commission Regulatory Guide 1.109; (2) FOOD, a model similar to the NRC model that also predicts concentrations in grains; and (3) AGNS, a model developed from the NRC model for the southeastern United States. Plutonium concentrations in vegetation and grain were predicted from measured deposition rates and compared to concentrations observed in the field. Crops included wheat, soybeans, corn and cabbage. Although the predictions of the three models differed by less than a factor of 4, they showed different abilities to predict the concentrations observed in the field. The NRC and FOOD models consistently underpredicted the observed Pu concentrations for vegetation, whereas the AGNS model was a more accurate predictor of Pu concentrations for vegetation. Both the FOOD and AGNS models accurately predicted the Pu concentrations for grains.
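
    The factor-of-N agreement criterion used in such model evaluations can be written as a small screening check; the function name and the numbers below are invented for illustration, not the study's data:

```python
def within_factor(predicted, observed, factor=4.0):
    """Fraction of predicted/observed concentration ratios falling within
    a given multiplicative factor - a common screening-model benchmark."""
    ratios = [p / o for p, o in zip(predicted, observed)]
    return sum(1 / factor <= r <= factor for r in ratios) / len(ratios)

# Hypothetical predicted and observed concentrations, same units:
print(within_factor([1.2, 0.3, 8.0, 2.0], [1.0, 1.0, 1.0, 1.0]))  # 0.75
```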

  17. Model tests and elasto-plastic finite element analysis on multicavity type PCRV

    International Nuclear Information System (INIS)

    Nojiri, Y.; Yamazaki, M.; Kotani, K.; Matsuzaki, Y.

    1978-01-01

    Multicavity type PCRV models were tested to investigate elastic stress distributions, cracking and failure mode of the models, and to determine the adequacy and relative accuracy of finite element structural analyses. The behavior of the models under pressure was investigated, and it was found that the predictions of the analyses showed a good agreement with the test results

  18. Parameters Determination of Yoshida Uemori Model Through Optimization Process of Cyclic Tension-Compression Test and V-Bending Springback

    Directory of Open Access Journals (Sweden)

    Serkan Toros

    Full Text Available Abstract In recent years, studies on enhancing the prediction capability of sheet metal forming simulations have increased remarkably. Among the models used in finite element simulations, the yield criteria and hardening models are of great importance for predicting formability and springback. The required model parameters are determined using several test results, i.e. tensile, compression, biaxial stretching (bulge) and cyclic (tension-compression) tests. In this study, the Yoshida-Uemori (combined isotropic and kinematic hardening) model is used to assess the performance of springback prediction. The model parameters are determined by optimization of the cyclic test through finite element simulations. In addition to the cyclic tests, however, the model parameters are also evaluated by an optimization process covering both the cyclic and V-die bending simulations. The springback angle predictions with the model parameters obtained by optimizing both the cyclic and V-die bending simulations are found to match the experimental results better than those obtained from the cyclic tests alone. Even so, the cyclic simulation results are close enough to the experimental results.

  19. Modeling a Predictive Energy Equation Specific for Maintenance Hemodialysis.

    Science.gov (United States)

    Byham-Gray, Laura D; Parrott, J Scott; Peters, Emily N; Fogerite, Susan Gould; Hand, Rosa K; Ahrens, Sean; Marcus, Andrea Fleisch; Fiutem, Justin J

    2017-03-01

    Hypermetabolism is theorized in patients diagnosed with chronic kidney disease who are receiving maintenance hemodialysis (MHD). We aimed to distinguish key disease-specific determinants of resting energy expenditure to create a predictive energy equation that more precisely establishes energy needs with the intent of preventing protein-energy wasting. For this 3-year multisite cross-sectional study (N = 116), eligible participants were diagnosed with chronic kidney disease and were receiving MHD for at least 3 months. Predictors for the model included weight, sex, age, C-reactive protein (CRP), glycosylated hemoglobin, and serum creatinine. The outcome variable was measured resting energy expenditure (mREE). Regression modeling was used to generate predictive formulas and Bland-Altman analyses to evaluate accuracy. The majority were male (60.3%), black (81.0%), and non-Hispanic (76.7%), and 23% were ≥65 years old. After screening for multicollinearity, the best predictive model of mREE (R2 = 0.67) included weight, age, sex, and CRP. Two alternative models with acceptable predictability (R2 = 0.66) were derived with glycosylated hemoglobin or serum creatinine. Based on Bland-Altman analyses, the maintenance hemodialysis equation that included CRP had the best precision, with the highest proportion of participants' predicted energy expenditure classified as accurate (61.2%) and with the lowest number of individuals with underestimation or overestimation. This study confirms disease-specific factors as key determinants of mREE in patients on MHD and provides a preliminary predictive energy equation. Further prospective research is necessary to test the reliability and validity of this equation across diverse populations of patients who are receiving MHD.
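
    The workflow of fitting a predictive equation by regression and checking agreement with Bland-Altman limits can be sketched on synthetic data; the predictors mimic the record's (weight, age, sex, CRP), but every number below is invented, not the study's data or coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 116
# Synthetic stand-ins for the predictors -- illustrative only.
X = np.column_stack([
    rng.normal(80, 15, n),        # weight, kg
    rng.normal(55, 12, n),        # age, years
    rng.integers(0, 2, n),        # sex (1 = male)
    rng.gamma(2.0, 3.0, n),       # CRP, mg/L
])
true_beta = np.array([12.0, -3.0, 120.0, 5.0])
mree = 600 + X @ true_beta + rng.normal(0, 40, n)  # "measured" REE, kcal/d

# Fit the predictive equation by ordinary least squares.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, mree, rcond=None)
pred = A @ coef

# Bland-Altman agreement: bias and 95% limits of agreement.
diff = pred - mree
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(round(float(bias), 4), loa[0] < 0 < loa[1])
```

    On the training data the OLS bias is zero by construction; the Bland-Altman limits quantify how far an individual prediction may stray, which is what the study's accuracy classification rests on.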

  20. Testing Predictions of a Landscape Evolution Model Using the Dragon’s Back Pressure Ridge as a Natural Experiment

    Science.gov (United States)

    Perignon, M. C.; Tucker, G. E.; Hilley, G. E.; Arrowsmith, R.

    2009-12-01

    Landscape evolution models use mass transport rules to simulate the temporal development of topography over timescales too long for humans to observe. As such, these models are difficult to test using the decadal time-scale observations of topographic change that can be directly measured. In contrast, natural systems in which driving forces, boundary conditions, and timing of landscape evolution over millennial time-scales can be well constrained may be used to test the ability of mathematical models to reproduce various attributes of the observed topography. The Dragon’s Back pressure ridge, a 4 km x 0.5 km x 100 m high area of elevated topography elongated parallel to the south-central San Andreas fault (SAF) in California, serves as a natural laboratory for studying how the timing and spatial distribution of uplift affects patterns of erosion and topography. Geologic mapping and geophysical studies show that, at this location, the Pacific plate is forced over a relatively stationary shallow discontinuity in the SAF, resulting in local uplift. Continued right-lateral motion along the fault results in the movement of material through the uplift zone at the SAF slip rate of 35 mm/yr. This allows for the substitution of space for time when observing topographic change, and can be used to constrain the tectonic conditions to which the surface processes responded and developed the resulting landscape. We used the CHILD model of landscape evolution to recreate the Dragon’s Back pressure ridge system in order to test the reliability of the model predictions and determine the necessary and sufficient conditions to explain the observed topography. To do this, we first ran a Monte Carlo simulation in which we varied the model inputs within a range of plausible values. We then compared the model results with LiDAR topography from the Dragon’s Back pressure ridge to determine which combinations of input parameters best reproduced the observed topography and how well it

  1. Predicting Calcium Values for Gastrointestinal Bleeding Patients in Intensive Care Unit Using Clinical Variables and Fuzzy Modeling

    Directory of Open Access Journals (Sweden)

    G Khalili-Zadeh-Mahani

    2016-07-01

    Full Text Available Introduction: Reducing unnecessary laboratory tests is an essential issue in the Intensive Care Unit. One solution is to predict the value of a laboratory test in order to decide whether ordering it is necessary. The aim of this paper was to propose a clinical decision support system for predicting laboratory test values. Calcium laboratory tests of three categories of patients, including upper and lower gastrointestinal bleeding and unspecified hemorrhage of the gastrointestinal tract, were selected as the case studies for this research. Method: The data were collected from the MIMIC-II database. For predicting calcium laboratory values, a Takagi-Sugeno fuzzy model is used, and the input variables of the model are heart rate and the previous value of the calcium laboratory test. Results: The results showed that the calcium laboratory values for the patients under study were predictable with acceptable accuracy. On average, the mean absolute errors of the system for the three categories of patients were 0.27, 0.29, and 0.28, respectively. Conclusion: In this research, using fuzzy modeling and the two variables of heart rate and previous calcium laboratory value, a clinical decision support system was proposed for predicting laboratory values of three categories of patients with gastrointestinal bleeding. Using these two clinical values as input variables, the obtained results were acceptable and showed the capability of the proposed system in predicting calcium laboratory values. For achieving better results, the impact of more input variables should be studied. Since the proposed system predicts the laboratory values themselves rather than merely the necessity of the tests, it is more general than previous studies and lets specialists make the decision depending on the condition of each patient.
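
    A minimal sketch of first-order Takagi-Sugeno inference over the two inputs the record names (heart rate and previous calcium value); the rule centres, widths, and linear consequents below are made up for illustration, not fitted to MIMIC-II:

```python
import math

def gauss(x, c, s):
    """Gaussian membership value of x for a fuzzy set centred at c."""
    return math.exp(-((x - c) / s) ** 2)

# Two illustrative rules over (heart rate, previous calcium):
# (hr centre, ca centre, consequent coefficients a0 + a1*hr + a2*ca_prev)
rules = [
    (70.0, 8.5,  (0.5, 0.000, 0.95)),
    (110.0, 9.5, (1.0, -0.002, 0.90)),
]

def ts_predict(hr, ca_prev, s_hr=20.0, s_ca=0.8):
    """First-order Takagi-Sugeno inference: Gaussian firing strengths
    weight each rule's linear output; defuzzify by weighted average."""
    w = [gauss(hr, c_hr, s_hr) * gauss(ca_prev, c_ca, s_ca)
         for c_hr, c_ca, _ in rules]
    z = [a0 + a1 * hr + a2 * ca_prev for _, _, (a0, a1, a2) in rules]
    return sum(wi * zi for wi, zi in zip(w, z)) / sum(w)

print(round(ts_predict(70.0, 8.5), 3))
```

    In a real system the memberships and consequents would be identified from data (e.g. by clustering plus least squares), but the inference step is exactly this weighted average of local linear models.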

  2. Progress in Developing Finite Element Models Replicating Flexural Graphite Testing

    International Nuclear Information System (INIS)

    Bratton, Robert

    2010-01-01

    This report documents the status of flexural strength evaluations from current ASTM procedures and of developing finite element models predicting the probability of failure. This work is covered under QLD REC-00030. Flexural testing procedures of the American Society for Testing and Materials (ASTM) assume a linear elastic material that has the same moduli for tension and compression. Contrary to this assumption, graphite is known to have different moduli in tension and compression. A finite element model that accounts for this difference in moduli was developed and demonstrated. Brittle materials such as graphite exhibit significant scatter in tensile strength, so probabilistic design approaches must be used when designing components fabricated from brittle materials. ASTM procedures for predicting probability of failure in ceramics were compared to methods from the current version of the ASME graphite core components rules. Using the ASTM procedures yields failure curves at lower applied forces than the ASME rules. A journal paper exploring statistical models of fracture in graphite was published in the Journal of Nuclear Engineering and Design.

  3. Computerized heat balance models to predict performance of operating nuclear power plants

    International Nuclear Information System (INIS)

    Breeding, C.L.; Carter, J.C.; Schaefer, R.C.

    1983-01-01

    The use of computerized heat balance models has greatly enhanced the decision making ability of TVA's Division of Nuclear Power. These models are utilized to predict the effects of various operating modes and to analyze changes in plant performance resulting from turbine cycle equipment modifications with greater speed and accuracy than was possible before. Computer models have been successfully used to optimize plant output by predicting the effects of abnormal condenser circulating water conditions. They were utilized to predict the degradation in performance resulting from installation of a baffle plate assembly to replace damaged low-pressure blading, thereby providing timely information allowing an optimal economic judgement as to when to replace the blading. Future use will be for routine performance test analysis. This paper presents the benefits of utility use of computerized heat balance models

  4. [Prediction of 137Cs behaviour in the soil-plant system in the territory of Semipalatinsk test site].

    Science.gov (United States)

    Spiridonov, S I; Mukusheva, M K; Gontarenko, I A; Fesenko, S V; Baranov, S A

    2005-01-01

    A mathematical model of 137Cs behaviour in the soil-plant system is presented. The model has been parameterized for the area adjacent to the testing area Ground Zero of the Semipalatinsk Test Site. The model describes the main processes responsible for the changes in 137Cs content in the soil solution and, thereby, the dynamics of the radionuclide uptake by vegetation. Results are presented from predictive and retrospective calculations that reflect the dynamics of 137Cs distribution by species in soil after nuclear explosions. The importance of factors governing 137Cs accumulation in plants within the STS area is assessed. The analysis of sensitivity of the output model variable to changes in its parameters revealed that the key soil properties significantly influence the prediction of 137Cs content in plants.

  5. The IEA Annex 20 Two-Dimensional Benchmark Test for CFD Predictions

    DEFF Research Database (Denmark)

    Nielsen, Peter V.; Rong, Li; Cortes, Ines Olmedo

    2010-01-01

    predictions both for isothermal flow and for nonisothermal flow. The benchmark is defined on a web page, which also shows about 50 different benchmark tests with studies of e.g. grid dependence, numerical schemes, different source codes, different turbulence models, RANS or LES, different turbulence levels...... in a supply opening, study of local emission and study of airborne chemical reactions. Therefore the web page is also a collection of information which describes the importance of the different elements of a CFD procedure. The benchmark is originally developed for test of two-dimensional flow, but the paper...

  6. Using ISOS consensus test protocols for development of quantitative life test models in ageing of organic solar cells

    DEFF Research Database (Denmark)

    Kettle, J.; Stoichkov, V.; Kumar, D.

    2017-01-01

    As Organic Photovoltaic (OPV) development matures, the demand grows for rapid characterisation of degradation and application of Quantitative Accelerated Life Tests (QALT) models to predict and improve reliability. To date, most accelerated testing on OPVs has been conducted using ISOS consensus...

  7. Recent advances using rodent models for predicting human allergenicity

    International Nuclear Information System (INIS)

    Knippels, Leon M.J.; Penninks, Andre H.

    2005-01-01

    The potential allergenicity of newly introduced proteins in genetically engineered foods has become an important safety evaluation issue. However, there are still no widely accepted and reliable test systems for evaluating the potential allergenicity and potency of new proteins in our food. The best-known allergy assessment proposal for foods derived from genetically engineered plants was the careful stepwise process presented in the so-called ILSI/IFBC decision tree. A revision of this decision tree strategy was proposed by a FAO/WHO expert consultation. As prediction of the sensitizing potential of the newly introduced protein based on animal testing was considered very important, animal models were introduced as one of the new test items, despite the fact that none of the currently studied models has yet been widely accepted and validated. In this paper, recent results from promising models developed in the rat and mouse are summarized.

  8. Computerized mathematical model for prediction of resin/fiber composite properties

    International Nuclear Information System (INIS)

    Lowe, K.A.

    1985-01-01

    A mathematical model has been developed for the design and optimization of resin formulations. The behavior of a fiber-reinforced cured resin matrix can be predicted from constituent properties of the formulation and fiber when component interaction is taken into account. A computer implementation of the mathematical model has been coded to simulate resin/fiber response and generate expected values for any definable properties of the composite. The algorithm is based on multistage regression techniques and the manipulation of n-order matrices. Excellent correlation between actual test values and predicted values has been observed for physical, mechanical, and qualitative properties of resin/fiber composites. Both experimental and commercial resin systems with various fiber reinforcements have been successfully characterized by the model. 6 references, 3 figures, 2 tables

  9. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.
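
    One standard way to attach uncertainty to a distribution-model prediction, as the review advocates, is to bootstrap the whole fitting procedure; the sketch below uses an invented covariate and a deliberately crude linear-probability fit (a real SDM would use a GLM or better):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic occurrence data: probability of presence rises with
# temperature (illustrative stand-in for an SDM covariate).
temp = rng.uniform(0, 10, 300)
presence = (rng.random(300) < 1 / (1 + np.exp(-(temp - 5)))).astype(float)

def fit_and_predict(t, y, t_new):
    # Crude linear-probability fit, clipped to [0, 1].
    slope, intercept = np.polyfit(t, y, 1)
    return np.clip(slope * t_new + intercept, 0, 1)

# Bootstrap the entire fit to get a confidence band, not just a point.
preds = []
for _ in range(500):
    idx = rng.integers(0, len(temp), len(temp))
    preds.append(fit_and_predict(temp[idx], presence[idx], 7.0))
lo, hi = np.percentile(preds, [2.5, 97.5])
point = fit_and_predict(temp, presence, 7.0)
print(lo <= point <= hi, hi > lo)
```

    Reporting the interval (lo, hi) alongside the point estimate is precisely the "realistic measure of confidence around predictions" the review argues is usually missing.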

  10. A Comparison Between Measured and Predicted Hydrodynamic Damping for a Jack-Up Rig Model

    DEFF Research Database (Denmark)

    Laursen, Thomas; Rohbock, Lars; Jensen, Jørgen Juncher

    1996-01-01

    An extensive set of measurements funded by the EU project Large Scale Facilities Program has been carried out on a model of a jack-up rig at the Danish Hydraulic Institute. The test series were conducted by MSC and include determination of base shears and overturning moments in both regular...... methods. In the comparison between the model test results and the theoretical predictions, the hydrodynamic damping proves to be the most important uncertain parameter. It is shown that a relatively large hydrodynamic damping must be assumed in the theoretical calculations in order to predict the measured

  11. Predictive multiscale computational model of shoe-floor coefficient of friction.

    Science.gov (United States)

    Moghaddam, Seyed Reza M; Acharya, Arjun; Redfern, Mark S; Beschorner, Kurt E

    2018-01-03

    Understanding the frictional interactions between the shoe and floor during walking is critical to prevention of slips and falls, particularly when contaminants are present. A multiscale finite element model of shoe-floor-contaminant friction was developed that takes into account the surface and material characteristics of the shoe and flooring in microscopic and macroscopic scales. The model calculates shoe-floor coefficient of friction (COF) in the boundary lubrication regime, where effects of adhesion friction and hydrodynamic pressures are negligible. The validity of model outputs was assessed by comparing model predictions to the experimental results from mechanical COF testing. The multiscale model estimates were linearly related to the experimental results (p < 0.0001). The model predicted 73% of the variability in experimentally-measured shoe-floor-contaminant COF. The results demonstrate the potential of multiscale finite element modeling in aiding slip-resistant shoe and flooring design and reducing slip and fall injuries. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  12. Predicting photosynthesis and transpiration responses to ozone: decoupling modeled photosynthesis and stomatal conductance

    Directory of Open Access Journals (Sweden)

    D. Lombardozzi

    2012-08-01

    Full Text Available Plants exchange the greenhouse gases carbon dioxide and water with the atmosphere through the processes of photosynthesis and transpiration, making them essential in climate regulation. Carbon dioxide and water exchange are typically coupled through the control of stomatal conductance, and the parameterizations in many models predict conductance based on photosynthesis values. Some environmental conditions, like exposure to high ozone (O3) concentrations, alter photosynthesis independently of stomatal conductance, so models that couple these processes cannot accurately predict both. The goals of this study were to test direct and indirect photosynthesis and stomatal conductance modifications based on O3 damage to tulip poplar (Liriodendron tulipifera) in a coupled Farquhar/Ball-Berry model. The same modifications were then tested in the Community Land Model (CLM) to determine the impacts on gross primary productivity (GPP) and transpiration at a constant O3 concentration of 100 parts per billion (ppb). Modifying the Vcmax parameter and directly modifying stomatal conductance best predicts photosynthesis and stomatal conductance responses to chronic O3 over a range of environmental conditions. On a global scale, directly modifying conductance reduces the effect of O3 on both transpiration and GPP compared to indirectly modifying conductance, particularly in the tropics. The results of this study suggest that independently modifying stomatal conductance can improve the ability of models to predict hydrologic cycling, and therefore improve future climate predictions.
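
    The coupling the record discusses can be made concrete with the Ball-Berry relation gs = m*A*hs/cs + b0, contrasting O3 damage applied through photosynthesis (which drags conductance down with it) against damage applied directly to conductance; all parameter values below are illustrative, not CLM's:

```python
def ball_berry(A, hs, cs, m=9.0, b0=0.01):
    """Ball-Berry stomatal conductance gs = m*A*hs/cs + b0, where A is
    net assimilation, hs relative humidity, cs CO2 at the leaf surface.
    Parameter values here are illustrative only."""
    return m * A * hs / cs + b0

A, hs, cs = 12.0, 0.7, 380.0     # made-up leaf-level conditions
o3_factor = 0.8                  # hypothetical 20% chronic O3 damage

# Coupled damage: scaling A propagates into conductance via Ball-Berry.
gs_coupled = ball_berry(o3_factor * A, hs, cs)
# Decoupled damage: conductance scaled independently of photosynthesis.
gs_decoupled = o3_factor * ball_berry(A, hs, cs)
print(round(gs_coupled, 4), round(gs_decoupled, 4))
```

    The two formulations give different conductances for the same nominal damage, which is why the study's choice between direct and indirect modification changes the simulated transpiration and GPP.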

  13. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  14. Predictions of titanium alloy properties using thermodynamic modeling tools

    Science.gov (United States)

    Zhang, F.; Xie, F.-Y.; Chen, S.-L.; Chang, Y. A.; Furrer, D.; Venkatesh, V.

    2005-12-01

    Thermodynamic modeling tools have become essential in understanding the effect of alloy chemistry on the final microstructure of a material. Implementation of such tools to improve titanium processing via parameter optimization has resulted in significant cost savings through the elimination of shop/laboratory trials and tests. In this study, a thermodynamic modeling tool developed at CompuTherm, LLC, is being used to predict β transus, phase proportions, phase chemistries, partitioning coefficients, and phase boundaries of multicomponent titanium alloys. This modeling tool includes Pandat, software for multicomponent phase equilibrium calculations, and PanTitanium, a thermodynamic database for titanium alloys. Model predictions are compared with experimental results for one α-β alloy (Ti-64) and two near-β alloys (Ti-17 and Ti-10-2-3). The alloying elements, especially the interstitial elements O, N, H, and C, have been shown to have a significant effect on the β transus temperature, and are discussed in more detail herein.

  15. Comprehensive fluence model for absolute portal dose image prediction

    International Nuclear Information System (INIS)

    Chytyk, K.; McCurdy, B. M. C.

    2009-01-01

    Amorphous silicon (a-Si) electronic portal imaging devices (EPIDs) continue to be investigated as treatment verification tools, with a particular focus on intensity modulated radiation therapy (IMRT). This verification could be accomplished through a comparison of measured portal images to predicted portal dose images. A general fluence determination tailored to portal dose image prediction would be a great asset in order to model the complex modulation of IMRT. A proposed physics-based parameter fluence model was commissioned by matching predicted EPID images to corresponding measured EPID images of multileaf collimator (MLC) defined fields. The two-source fluence model was composed of a focal Gaussian and an extrafocal Gaussian-like source. Specific aspects of the MLC and secondary collimators were also modeled (e.g., jaw and MLC transmission factors, MLC rounded leaf tips, tongue and groove effect, interleaf leakage, and leaf offsets). Several unique aspects of the model were developed based on the results of detailed Monte Carlo simulations of the linear accelerator including (1) use of a non-Gaussian extrafocal fluence source function, (2) separate energy spectra used for focal and extrafocal fluence, and (3) different off-axis energy spectra softening used for focal and extrafocal fluences. The predicted energy fluence was then convolved with Monte Carlo generated, EPID-specific dose kernels to convert incident fluence to dose delivered to the EPID. Measured EPID data were obtained with an a-Si EPID for various MLC-defined fields (from 1x1 to 20x20 cm2) over a range of source-to-detector distances. These measured profiles were used to determine the fluence model parameters in a process analogous to the commissioning of a treatment planning system. The resulting model was tested on 20 clinical IMRT plans, including ten prostate and ten oropharyngeal cases. 
The model predicted the open-field profiles within 2%, 2 mm, while a mean of 96.6% of pixels over all
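
    A toy one-dimensional sketch of the two-source fluence idea (sharp focal Gaussian plus broad extrafocal component) convolved with a dose-spread kernel; the weights, widths, and kernel below are invented stand-ins for the commissioned model and its Monte Carlo EPID kernels:

```python
import numpy as np

x = np.linspace(-10, 10, 401)  # off-axis distance, cm (toy 1-D geometry)

def gaussian(x, sigma):
    """Discrete Gaussian profile normalised to unit sum."""
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

# Two-source fluence: a sharp focal Gaussian plus a broad extrafocal
# tail (weights and widths are invented for illustration).
fluence = 0.9 * gaussian(x, 0.3) + 0.1 * gaussian(x, 3.0)

# Convert incident fluence to detector dose by convolving with a
# dose-spread kernel, standing in for the EPID-specific kernels.
kernel = gaussian(x, 0.5)
dose = np.convolve(fluence, kernel, mode="same")
print(round(float(dose.sum()), 4), float(x[np.argmax(dose)]))
```

    Because both sources and the kernel are normalised, the convolution approximately conserves total signal while the extrafocal term broadens the profile tails, which is the behaviour the commissioning process tunes against measured EPID profiles.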

  16. Novel Predictive Model of the Debonding Strength for Masonry Members Retrofitted with FRP

    Directory of Open Access Journals (Sweden)

    Iman Mansouri

    2016-11-01

    Full Text Available Strengthening of masonry members using externally bonded (EB) fiber-reinforced polymer (FRP) composites has become a popular structural strengthening method over the past decade due to the advantages of FRP composites, including their high strength-to-weight ratio and excellent corrosion resistance. In this study, gene expression programming (GEP), as a novel tool, has been used to predict the debonding strength of retrofitted masonry members. The predictions of the new debonding resistance model, as well as those of several other models, are evaluated by comparing their estimates with experimental results from a large test database. The results indicate that the new model is the most efficient among the models examined and represents an improvement over the other models. The root mean square errors (RMSE) of the best empirical model (Kashyap) on training and test data were reduced by 51.7% and 41.3%, respectively, by the GEP model in estimating debonding strength.

  17. Protein-Based Urine Test Predicts Kidney Transplant Outcomes

    Science.gov (United States)

    ... News Release, Thursday, August 22, 2013: Protein-based urine test predicts kidney transplant outcomes. NIH- ... supporting development of noninvasive tests. Levels of a protein in the urine of kidney transplant recipients can ...

  18. Predicting nucleic acid binding interfaces from structural models of proteins.

    Science.gov (United States)

    Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael

    2012-02-01

    The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from surfaces of protein structural models, obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined patch approach, we show that patches extracted from an ensemble of models predict the real nucleic acid binding interfaces better than patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable for predicting other functional surfaces of proteins with unknown structure. Copyright © 2011 Wiley Periodicals, Inc.

  19. Prediction of phenanthrene uptake by plants with a partition-limited model

    International Nuclear Information System (INIS)

    Zhu, Lizhong; Gao, Yanzheng

    2004-01-01

    The performance of a partition-limited model for predicting phenanthrene uptake by a wide variety of plant species was evaluated using a greenhouse study. The model predictions of root or shoot concentrations for the tested plant species were all within an order of magnitude of the observed values. Modeled root concentrations appeared to be more accurate than modeled shoot concentrations. The differences between simulated and measured concentrations of phenanthrene in roots and shoots of three representative plant species, including ryegrass, flowering Chinese cabbage, and three-colored amaranth, were less than 81% for roots and 103% for shoots. Results are promising in that the α_pt values of the partition-limited model for root uptake of phenanthrene correlate well with root lipid contents. Additionally, a significantly positive correlation is also observed between root concentration factors (RCFs, defined as the ratio of contaminant concentrations in root and in soil on a dry weight basis) of phenanthrene and root lipid contents. Results from this study suggest that the partition-limited model may have potential applications for predicting plant PAH concentrations in contaminated sites.

  20. Field tests and machine learning approaches for refining algorithms and correlations of driver's model parameters.

    Science.gov (United States)

    Tango, Fabio; Minin, Luca; Tesauri, Francesco; Montanari, Roberto

    2010-03-01

    This paper describes field tests on a driving simulator carried out to validate the algorithms and correlations of dynamic parameters, specifically driving task demand and driver distraction, that are able to predict drivers' intentions. These parameters belong to the driver's model developed by AIDE (Adaptive Integrated Driver-vehicle InterfacE), a European Integrated Project. Drivers' behavioural data were collected from the simulator tests to model and validate these parameters using machine learning techniques, specifically adaptive neuro-fuzzy inference systems (ANFIS) and artificial neural networks (ANN). Two models of task demand and distraction have been developed, one for each adopted technique. The paper provides an overview of the driver's model, a description of the task demand and distraction modelling, and the tests conducted for the validation of these parameters. A test comparing predicted and expected outcomes of the modelled parameters for each machine learning technique has been carried out: for distraction, in particular, promising results (low prediction errors) were obtained by adopting an artificial neural network.

  1. Model testing for the remediation assessment of a radium contaminated site in Olen, Belgium

    International Nuclear Information System (INIS)

    Sweeck, Lieve; Kanyar, Bela; Krajewski, Pawel; Kryshev, Alexander; Lietava, Peter; Nenyei, Arpad; Sazykina, Tatiana; Yu, Charley; Zeevaert, Theo

    2005-01-01

    Environmental assessment models are used as decision-aiding tools in the selection of remediation options for radioactively contaminated sites. In most cases, the effectiveness of the remedial actions in terms of dose savings cannot be demonstrated directly, but can be established with the help of environmental assessment models, through the assessment of future radiological impacts. It should be emphasized that, given the complexity of the processes involved and our current understanding of how they operate, these models are simplified descriptions of the behaviour of radionuclides in the environment and therefore imperfect. One way of testing and improving the reliability of the models is to compare their predictions with real data and/or the predictions of other models. Within the framework of the Remediation Assessment Working Group (RAWG) of the BIOMASS (BIOsphere Modelling and ASSessment) programme coordinated by the IAEA, two scenarios were constructed and applied to test the reliability of environmental assessment models when remedial actions are involved. As a test site, an area of approximately 100 ha contaminated by the discharges of an old radium extraction plant in Olen (Belgium) was considered. In the first scenario, a real situation was evaluated and model predictions were compared with measured data. In the second scenario, the model predictions for specific hypothetical but realistic situations were compared. Most of the biosphere models were not developed to assess the performance of remedial actions and had to be modified for this purpose. It was demonstrated clearly that the modeller's experience and familiarity with the mathematical model, the site and the scenario play a very important role in the outcome of the model calculations. More model testing studies, preferably for real situations, are needed in order to improve the models and modelling methods and to expand the areas in which the models are applicable.

  2. Fast integration-based prediction bands for ordinary differential equation models.

    Science.gov (United States)

    Hass, Helge; Kreutz, Clemens; Timmer, Jens; Kaschek, Daniel

    2016-04-15

    To gain a deeper understanding of biological processes and their relevance in disease, mathematical models are built upon experimental data. Uncertainty in the data leads to uncertainties in the model's parameters and, in turn, to uncertainties in predictions. Mechanistic dynamic models of biochemical networks are frequently based on nonlinear differential equation systems and feature a large number of parameters, sparse observations of the model components and a lack of information in the available data. Due to the curse of dimensionality, classical and sampling approaches propagating parameter uncertainties to predictions are hardly feasible and insufficient. However, for experimental design and to discriminate between competing models, prediction and confidence bands are essential. To circumvent the hurdles of the former methods, an approach to calculate a profile likelihood on arbitrary observations for a specific time point has been introduced, which provides accurate confidence and prediction intervals for nonlinear models and is computationally feasible for high-dimensional models. In this article, reliable and smooth point-wise prediction and confidence bands to assess the model's uncertainty over the whole time-course are achieved via explicit integration with elaborate correction mechanisms. The corresponding system of ordinary differential equations is derived and tested on three established models for cellular signalling. An efficiency analysis is performed to illustrate the computational benefit compared with repeated profile likelihood calculations at multiple time points. The integration framework and the examples used in this article are provided with the software package Data2Dynamics, which is based on MATLAB and freely available at http://www.data2dynamics.org (contact: helge.hass@fdm.uni-freiburg.de). Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved.
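The role of point-wise prediction bands can be illustrated with a deliberately simple stand-in. The sketch below is not the paper's integration-based method (which is designed precisely to avoid sampling); it propagates an assumed Gaussian uncertainty in a single rate constant through a one-state ODE by Monte Carlo and reads off point-wise 95% bands. All numbers are hypothetical.

```python
import random

def simulate(k, x0=10.0, dt=0.1, steps=50):
    """Euler integration of dx/dt = -k*x, a one-state stand-in for a network model."""
    xs, x = [], x0
    for _ in range(steps):
        xs.append(x)
        x += dt * (-k * x)
    return xs

random.seed(0)
# Assume the fitted rate constant is k = 0.5 with standard error 0.05 (made-up numbers).
draws = [simulate(random.gauss(0.5, 0.05)) for _ in range(500)]

# Point-wise 95% prediction band: 2.5th / 97.5th percentile at each time point.
lower = [sorted(col)[int(0.025 * len(col))] for col in zip(*draws)]
upper = [sorted(col)[int(0.975 * len(col))] for col in zip(*draws)]
nominal = simulate(0.5)  # trajectory at the point estimate
```

For high-dimensional models the number of draws needed for stable band edges grows quickly, which is the motivation for the integration-based alternative described in the abstract.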

  3. Model-predictive control based on Takagi-Sugeno fuzzy model for electrical vehicles delayed model

    DEFF Research Database (Denmark)

    Khooban, Mohammad-Hassan; Vafamand, Navid; Niknam, Taher

    2017-01-01

    Electric vehicles (EVs) play a significant role in different applications, such as commuter vehicles and short-distance transport applications. This study presents a new structure of model-predictive control based on the Takagi-Sugeno fuzzy model, linear matrix inequalities, and a non-quadratic Lyapunov function for the speed control of EVs including time-delay states and parameter uncertainty. Experimental data, using the Federal Test Procedure (FTP-75), is applied to test the performance and robustness of the suggested controller in the presence of time-varying parameters. Besides, a comparison is made between the results of the suggested robust strategy and those obtained from some of the most recent studies on the same topic, to assess the efficiency of the suggested controller. Finally, the experimental results based on a TMS320F28335 DSP are performed on a direct current motor. Simulation...

  4. New models of droplet deposition and entrainment for prediction of CHF in cylindrical rod bundles

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Haibin, E-mail: hb-zhang@xjtu.edu.cn [School of Chemical Engineering and Technology, Xi’an Jiaotong University, Xi’an 710049 (China); Department of Chemical Engineering, Imperial College, London SW7 2BY (United Kingdom); Hewitt, G.F. [Department of Chemical Engineering, Imperial College, London SW7 2BY (United Kingdom)

    2016-08-15

    Highlights: • New models of droplet deposition and entrainment in rod bundles are developed. • A new phenomenological model to predict the CHF in rod bundles is described. • The present model is well able to predict CHF in rod bundles. - Abstract: In this paper, we present a new set of models of droplet deposition and entrainment in cylindrical rod bundles based on the previously proposed model for annuli (effectively a “one-rod” bundle) (2016a). These models make it possible to evaluate the differences in the rates of droplet deposition and entrainment for the respective rods and for the outer tube by taking into account the geometrical characteristics of the rod bundles. Using these models, a phenomenological model to predict the CHF (critical heat flux) for upward annular flow in vertical rod bundles is described. The performance of the model is tested against the experimental data of Becker et al. (1964) for CHF in 3-rod and 7-rod bundles. These data include tests in which only the rods were heated and data for simultaneous uniform and non-uniform heating of the rods and the outer tube. It was shown that the CHFs predicted by the present model agree well with the experimental data and with the experimental observation that dryout occurred first on the outer rods in 7-rod bundles. It is expected that the methodology used will be generally applicable to the prediction of CHF in rod bundles.

  5. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both MJO amplitude and phase errors, the latter becoming more important with forecast lead time. Consistent with previous studies, MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with a smaller bias in horizontal moisture gradient and longwave cloud-radiation feedbacks show higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
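The bivariate correlation used as the skill threshold here is the standard verification metric for MJO forecasts expressed in the two real-time multivariate MJO (RMM) indices. A minimal sketch, with made-up index values:

```python
import math

def bivariate_correlation(obs, fcst):
    """Bivariate correlation of (RMM1, RMM2) pairs over a verification period."""
    num = sum(o1 * f1 + o2 * f2 for (o1, o2), (f1, f2) in zip(obs, fcst))
    den = (math.sqrt(sum(o1 ** 2 + o2 ** 2 for o1, o2 in obs))
           * math.sqrt(sum(f1 ** 2 + f2 ** 2 for f1, f2 in fcst)))
    return num / den

# Skill is commonly declared lost once this drops below 0.5 with lead time.
obs = [(1.0, 0.2), (0.8, 0.6), (0.1, 1.1)]   # made-up observed RMM values
fcst = [(0.9, 0.3), (0.7, 0.7), (0.2, 1.0)]  # made-up forecast values
skill = bivariate_correlation(obs, fcst)
```

In practice the metric is computed per forecast lead time over many start dates, and the prediction-skill horizon is the lead at which it crosses 0.5.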

  6. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify the data types that best reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). We then conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit times; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model (`virtual reality') is then developed based on that conceptual model
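A minimal illustration of how Monte-Carlo simulation turns parameter uncertainty into predictive uncertainty. The toy transit-time relation and the lognormal conductivity below are assumptions for the sketch, not the study's actual model:

```python
import math
import random
import statistics

def transit_time(conductivity):
    """Toy relation: hyporheic transit time inversely proportional to conductivity."""
    return 10.0 / conductivity

random.seed(1)
# Assume a lognormal hydraulic conductivity (an illustrative uncertainty model).
samples = [transit_time(math.exp(random.gauss(0.0, 0.5))) for _ in range(2000)]

mean_t = statistics.fmean(samples)      # predictive mean
var_t = statistics.pvariance(samples)   # predictive variance
p05, p95 = sorted(samples)[100], sorted(samples)[1900]  # ~90% predictive interval
```

In the study this role is played by ensembles of subsurface realizations run through HydroGeoSphere; the principle of summarizing an ensemble of forward runs is the same.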

  7. Incidence of atrial fibrillation and its risk prediction model based on a prospective urban Han Chinese cohort.

    Science.gov (United States)

    Ding, L; Li, J; Wang, C; Li, X; Su, Q; Zhang, G; Xue, F

    2017-09-01

    Prediction models for atrial fibrillation (AF) have been developed; however, no AF prediction model has been validated in a Chinese population. Therefore, we aimed to investigate the incidence of AF in an urban Han Chinese health check-up population, and to develop AF prediction models using behavioral, anthropometric, biochemical and electrocardiogram (ECG) markers, as well as visit-to-visit variability (VVV) in blood pressures, all available in the routine health check-up. A total of 33 186 participants aged 45-85 years and free of AF at baseline were included in this cohort and followed up for incident AF with an annual routine health check-up. Cox regression models were used to develop the AF prediction models, and 10-fold cross-validation was used to test the discriminatory accuracy of each prediction model. We developed three prediction models to estimate risk of incident AF: a simple model with age, sex, history of coronary heart disease (CHD) and hypertension as predictors; an ECG model with left high-amplitude waves and premature beats added; and a VVV model with age, sex, history of CHD and VVV in systolic and diastolic blood pressures as predictors. The calibration of our models ranged from 1.001 to 1.004 (P for Hosmer-Lemeshow test >0.05). The areas under the receiver operating characteristic curve were 78%, 80% and 82%, respectively, for predicting risk of AF. In conclusion, we have identified predictors of incident AF and developed prediction models for AF with variables readily available in a routine health check-up.
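As a sketch of how a Cox-type model produces an individual risk estimate (risk = 1 - S0(t)^exp(linear predictor)): the coefficients and baseline survival below are invented for illustration and are not the values fitted in this cohort.

```python
import math

# Hypothetical illustration only -- NOT the coefficients fitted in this study.
COEF = {"age_per_10y": 0.50, "male": 0.40, "chd": 0.60, "hypertension": 0.45}
BASELINE_SURVIVAL = 0.99  # assumed AF-free probability at the reference profile

def predicted_af_risk(age, male, chd, hypertension, ref_age=60):
    """Cox-model style absolute risk: 1 - S0 ** exp(linear predictor)."""
    lp = (COEF["age_per_10y"] * (age - ref_age) / 10.0
          + COEF["male"] * male
          + COEF["chd"] * chd
          + COEF["hypertension"] * hypertension)
    return 1.0 - BASELINE_SURVIVAL ** math.exp(lp)
```

The ECG and VVV variants described in the abstract would simply add further terms to the linear predictor.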

  8. External validation of structure-biodegradation relationship (SBR) models for predicting the biodegradability of xenobiotics.

    Science.gov (United States)

    Devillers, J; Pandard, P; Richard, B

    2013-01-01

    Biodegradation is an important mechanism for eliminating xenobiotics by biotransforming them into simple organic and inorganic products. Faced with the ever-growing number of chemicals available on the market, structure-biodegradation relationship (SBR) and quantitative structure-biodegradation relationship (QSBR) models are increasingly used as surrogates for biodegradation tests. Such models have great potential for a quick and cheap estimation of the biodegradation potential of chemicals. The Estimation Programs Interface (EPI) Suite™ includes different models for predicting the potential aerobic biodegradability of organic substances. They are based on different endpoints, methodologies and/or statistical approaches. Among them, Biowin 5 and 6 appeared the most robust, being derived from the largest biodegradation database with results obtained only from the Ministry of International Trade and Industry (MITI) test. The aim of this study was to assess the predictive performance of these two models on a set of 356 chemicals extracted from notification dossiers including compatible biodegradation data. Another set of molecules with no more than four carbon atoms and substituted by various heteroatoms and/or functional groups was also included in the validation exercise. Comparisons were made with the predictions obtained with START (Structural Alerts for Reactivity in Toxtree). Biowin 5 and Biowin 6 gave satisfactory prediction results except for the prediction of readily degradable chemicals; a consensus model built with Biowin 1 reduced this tendency.

  9. From GenBank to GBIF: Phylogeny-Based Predictive Niche Modeling Tests Accuracy of Taxonomic Identifications in Large Occurrence Data Repositories.

    Science.gov (United States)

    Smith, B Eugene; Johnston, Mark K; Lücking, Robert

    2016-01-01

    Accuracy of taxonomic identifications is crucial to data quality in online repositories of species occurrence data, such as the Global Biodiversity Information Facility (GBIF), which have accumulated several hundred million records over the past 15 years. These data serve as a basis for large-scale analyses of macroecological and biogeographic patterns and to document environmental changes over time. However, taxonomic identifications are often unreliable, especially for non-vascular plants and fungi including lichens, which may lack critical revisions of voucher specimens. Due to the scale of the problem, restudy of millions of collections is unrealistic and other strategies are needed. Here we propose to use verified, georeferenced occurrence data of a given species to apply predictive niche modeling that can then be used to evaluate unverified occurrences of that species. Selecting the charismatic lichen fungus, Usnea longissima, as a case study, we used georeferenced occurrence records based on sequenced specimens to model its predicted niche. Our results suggest that the target species is largely restricted to a narrow range of boreal and temperate forest in the Northern Hemisphere and that occurrence records in GBIF from tropical regions and the Southern Hemisphere do not represent this taxon, a prediction tested by comparison with taxonomic revisions of Usnea for these regions. As a novel approach, we employed Principal Component Analysis on the environmental grid data used for predictive modeling to visualize potential ecogeographical barriers for the target species; we found that tropical regions form a strong barrier, explaining why potential niches in the Southern Hemisphere were not colonized by Usnea longissima and instead by morphologically similar species. This approach is an example of how data from two of the most important biodiversity repositories, GenBank and GBIF, can be effectively combined to remotely address the problem of inaccuracy of

  10. Comparison of Model Predictions and Laboratory Observations of Transgene Frequencies in Continuously-Breeding Mosquito Populations

    Science.gov (United States)

    Valerio, Laura; North, Ace; Collins, C. Matilda; Mumford, John D.; Facchinelli, Luca; Spaccapelo, Roberta; Benedict, Mark Q.

    2016-01-01

    The persistence of transgenes in the environment is a consideration in risk assessments of transgenic organisms. Combining mathematical models that predict the frequency of transgenes and experimental demonstrations can validate the model predictions, or can detect significant biological deviations that were neither apparent nor included as model parameters. In order to assess the correlation between predictions and observations, models were constructed to estimate the frequency of a transgene causing male sexual sterility in simulated populations of a malaria mosquito Anopheles gambiae that were seeded with transgenic females at various proportions. Concurrently, overlapping-generation laboratory populations similar to those being modeled were initialized with various starting transgene proportions, and the subsequent proportions of transgenic individuals in populations were determined weekly until the transgene disappeared. The specific transgene being tested contained a homing endonuclease gene expressed in testes, I-PpoI, that cleaves the ribosomal DNA and results in complete male sexual sterility with no effect on female fertility. The transgene was observed to disappear more rapidly than the model predicted in all cases. The period before ovipositions that contained no transgenic progeny ranged from as little as three weeks after cage initiation to as long as 11 weeks. PMID:27669312
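Under the simplest possible reading of the genetics (discrete generations, hemizygous carriers, complete male sterility, no fitness effect in females), the expected carrier frequency halves each generation, since the construct is transmitted only through carrier mothers to half of their offspring. A sketch under those stated assumptions, which simplifies away the overlapping generations of the cage populations:

```python
def transgene_frequency(p0, generations):
    """Deterministic decline of a dominant male-sterility transgene.

    Assumptions (a simplification of the experiment): discrete generations,
    hemizygous carriers, carrier males sire no offspring, carrier females
    pass the construct to half of theirs. The carrier fraction then halves
    every generation.
    """
    freqs = [p0]
    for _ in range(generations):
        freqs.append(freqs[-1] / 2.0)
    return freqs

print(transgene_frequency(0.5, 4))  # [0.5, 0.25, 0.125, 0.0625, 0.03125]
```

The abstract's observation that the transgene disappeared faster than modeled corresponds to the real populations falling below this idealized geometric decline.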

  11. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  12. Degradation model and application in life prediction of rotating-mechanism

    International Nuclear Information System (INIS)

    Zhou Yuhui

    2009-01-01

    Degradation data can provide additional information beyond that provided by failure observations, so both sets of observations need to be considered when inferring the statistical parameters of product and system lifetime distributions. With a degradation model describing wear-out failure, the predicted mechanism life is more accurate. Strength is one of the important capabilities of the rotating mechanism. In this paper, the degradation of strength is described by a stochastic process model. Accelerated tests expose the products to greater environmental stress levels so that lifetime and degradation measurements can be obtained in a more timely fashion. Using the Best Linear Unbiased Estimation (BLUE) method, the parameters of the degradation path were estimated from accelerated life test (ALT) data of the rotating mechanism. After resolving the singularity of the degradation equation, the reliability at any time is estimated using stress-strength interference theory, and the life of the rotating mechanism can thus be predicted. (authors)
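The stress-strength interference step can be sketched for the common special case of independent normally distributed strength and stress (an assumption for illustration; in the paper the strength mean degrades with time along the estimated path):

```python
import math

def stress_strength_reliability(mu_s, sigma_s, mu_l, sigma_l):
    """R = P(strength > stress) for independent normal strength and stress.

    R = Phi(z) with z = (mu_s - mu_l) / sqrt(sigma_s^2 + sigma_l^2),
    evaluated via the error function.
    """
    z = (mu_s - mu_l) / math.sqrt(sigma_s ** 2 + sigma_l ** 2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

Substituting a time-dependent strength mean mu_s(t) from the degradation model turns this point reliability into a reliability-versus-time curve, from which a life prediction follows.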

  13. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  14. Prediction model for carbonation depth of concrete subjected to freezing-thawing cycles

    Science.gov (United States)

    Xiao, Qian Hui; Li, Qiang; Guan, Xiao; Xian Zou, Ying

    2018-03-01

    Through an indoor simulation test of concrete durability under the coupled effects of freezing-thawing and carbonation, the variation regularity of concrete neutralization depth under freezing-thawing and carbonation was obtained. Based on the concrete carbonation mechanism, the relationship between the air diffusion coefficient and porosity in concrete was analyzed, and the calculation method of porosity in Portland cement concrete and fly ash cement concrete was investigated, considering the influence of freezing-thawing damage on the concrete diffusion coefficient. Finally, a prediction model of the carbonation depth of concrete under freezing-thawing conditions was established. The results obtained using this prediction model agreed well with the experimental test results, providing a theoretical reference and basis for concrete durability analysis under multi-factor environments.
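Carbonation-depth models of this kind typically build on the square-root-of-time diffusion law. A minimal sketch, where the freeze-thaw multiplier is a hypothetical stand-in for the damage-increased diffusion coefficient rather than the paper's fitted relation:

```python
import math

def carbonation_depth(k, t_years, freeze_thaw_factor=1.0):
    """Square-root-of-time carbonation law, x = k * sqrt(t).

    k is the carbonation coefficient (e.g. mm / sqrt(year)).
    freeze_thaw_factor (>= 1) is a hypothetical multiplier representing the
    increased effective diffusion coefficient of freeze-thaw damaged concrete.
    """
    return k * freeze_thaw_factor * math.sqrt(t_years)
```

The point of the paper's model is precisely to make the freeze-thaw correction physically grounded, via the porosity / air-diffusion relationship, instead of a bare multiplier as here.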

  15. Predicting Student Grade Point Average at a Community College from Scholastic Aptitude Tests and from Measures Representing Three Constructs in Vroom's Expectancy Theory Model of Motivation.

    Science.gov (United States)

    Malloch, Douglas C.; Michael, William B.

    1981-01-01

    This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectancy theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…

  16. Clinical Prediction Model and Tool for Assessing Risk of Persistent Pain After Breast Cancer Surgery

    DEFF Research Database (Denmark)

    Meretoja, Tuomo J; Andersen, Kenneth Geving; Bruce, Julie

    2017-01-01

    …are missing. The aim was to develop a clinically applicable risk prediction tool. Methods The prediction models were developed and tested using three prospective data sets from Finland (n = 860), Denmark (n = 453), and Scotland (n = 231). Prediction models for persistent pain of moderate to severe intensity…), high body mass index ( P = .039), axillary lymph node dissection ( P = .008), and more severe acute postoperative pain intensity at the seventh postoperative day ( P = .003) predicted persistent pain in the final prediction model, which performed well in the Danish (ROC-AUC, 0.739) and Scottish (ROC-AUC, 0.740) cohorts. At the 20% risk level, the model had 32.8% and 47.4% sensitivity and 94.4% and 82.4% specificity in the Danish and Scottish cohorts, respectively. Conclusion Our validated prediction models and an online risk calculator provide clinicians and researchers with a simple tool to screen…

  17. Flexible non-linear predictive models for large-scale wind turbine diagnostics

    DEFF Research Database (Denmark)

    Bach-Andersen, Martin; Rømer-Odgaard, Bo; Winther, Ole

    2017-01-01

    We demonstrate how flexible non-linear models can provide accurate and robust predictions on turbine component temperature sensor data using data-driven principles and only a minimum of system modeling. The merits of different model architectures are evaluated using data from a large set of turbines operating under diverse conditions. We then go on to test the predictive models in a diagnostic setting, where the output of the models is used to detect mechanical faults in rotor bearings. Using retrospective data from 22 actual rotor bearing failures, the fault detection performance of the models is quantified using a structured framework that provides the metrics required for evaluating the performance in a fleet-wide monitoring setup. It is demonstrated that faults are identified with high accuracy up to 45 days before a warning from the hard-threshold warning system.

  18. Hand Posture Prediction Using Neural Networks within a Biomechanical Model

    Directory of Open Access Journals (Sweden)

    Marta C. Mora

    2012-10-01

    This paper proposes the use of artificial neural networks (ANNs) in the framework of a biomechanical hand model for grasping. ANNs enhance the model's capabilities as they substitute estimated data for the experimental inputs required by the grasping algorithm used. These inputs are the tentative grasping posture and the most open posture during grasping. As a consequence, more realistic grasping postures are predicted by the grasping algorithm, along with the contact information required by the dynamic biomechanical model (contact points and normals). Several neural network architectures are tested and compared in terms of prediction errors, leading to encouraging results. The performance of the overall proposal is also shown through simulation, where a grasping experiment is replicated and compared to the real grasping data collected by a data glove device.

  19. Offset-Free Model Predictive Control of Open Water Channel Based on Moving Horizon Estimation

    Science.gov (United States)

    Ekin Aydin, Boran; Rutten, Martine

    2016-04-01

    Model predictive control (MPC) is a powerful control option which is increasingly used by operational water managers for managing water systems. The explicit consideration of constraints and multi-objective management are important features of MPC. However, due to water loss in open water systems by seepage, leakage and evaporation, a mismatch between the model and the real system is created. This mismatch affects the performance of MPC and creates an offset from the reference set point of the water level. We present model predictive control based on moving horizon estimation (MHE-MPC) to achieve offset-free control of water level for open water canals. MHE-MPC uses the past predictions of the model and the past measurements of the system to estimate unknown disturbances, and the offset in the controlled water level is systematically removed. We numerically tested MHE-MPC on an accurate hydrodynamic model of the laboratory canal UPC-PAC located in Barcelona. In addition, we also used the well-known disturbance-modeling offset-free control scheme for the same test case. Simulation experiments on a single canal reach show that MHE-MPC outperforms the disturbance-modeling offset-free control scheme.
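The offset-removal idea can be sketched on a one-dimensional water-level model: estimate the unknown seepage from the model's past prediction error and feed the estimate back into the controller. This is a heavily simplified stand-in for MHE-MPC (one state, a proportional controller instead of a predictive optimizer, illustrative numbers only):

```python
def simulate(steps=200, estimate_disturbance=True):
    """Water level x[k+1] = x[k] + u[k] - d, with unknown constant seepage d.

    A proportional controller tracks the setpoint; an estimator infers d
    from the one-step prediction error. Feeding the estimate back into the
    control removes the steady-state offset, mimicking offset-free MHE-MPC.
    """
    d_true, setpoint = 0.05, 1.0
    x, d_hat = 0.0, 0.0
    for _ in range(steps):
        u = 0.5 * (setpoint - x) + (d_hat if estimate_disturbance else 0.0)
        x_pred = x + u - d_hat        # model's one-step prediction
        x = x + u - d_true            # true plant, with the unknown seepage
        d_hat += 0.2 * (x_pred - x)   # update the estimate from the error
    return x
```

Without the estimate the level settles below the setpoint (a persistent offset); with it, the level converges to the setpoint even though the seepage is never measured directly.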

  20. Quantitative structure-activity relationship (QSAR) for insecticides: development of predictive in vivo insecticide activity models.

    Science.gov (United States)

    Naik, P K; Singh, T; Singh, H

    2009-07-01

    Quantitative structure-activity relationship (QSAR) analyses were performed independently on data sets belonging to two groups of insecticides, namely the organophosphates and carbamates. Several types of descriptors, including topological, spatial, thermodynamic, information content, lead-likeness and E-state indices, were used to derive quantitative relationships between insecticide activities and structural properties of the chemicals. A systematic search approach based on missing value, zero value, simple correlation and multi-collinearity tests, as well as the use of a genetic algorithm, allowed the optimal selection of the descriptors used to generate the models. The QSAR models developed for both the organophosphate and carbamate groups revealed good predictability with r(2) values of 0.949 and 0.838 as well as [image omitted] values of 0.890 and 0.765, respectively. In addition, a linear correlation was observed between the predicted and experimental LD(50) values for the test set data with r(2) of 0.871 and 0.788 for the organophosphate and carbamate groups, respectively, indicating that the prediction accuracy of the QSAR models was acceptable. The models were also tested successfully against external validation criteria. QSAR models developed in this study should help further design of novel potent insecticides.
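The core QSAR step, fitting a least-squares relation between descriptors and activity and reporting r(2), can be sketched for a single descriptor (the descriptor and activity values below are toy numbers, not the insecticide data sets used in the study):

```python
def fit_line(x, y):
    """Ordinary least squares of activity y on one descriptor x.

    Returns (slope, intercept, r2), where r2 = 1 - SS_res / SS_tot is the
    coefficient of determination reported in QSAR studies.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical single descriptor (e.g. a logP-like value) vs. activity.
slope, intercept, r2 = fit_line([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 7.8])
```

Real QSAR models extend this to many descriptors (multiple linear regression) and add the descriptor-selection machinery, such as the genetic algorithm, described in the abstract.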

  1. Analysis of Phenix End-of-Life asymmetry test with multi-dimensional pool modeling of MARS-LMR code

    International Nuclear Information System (INIS)

    Jeong, H.-Y.; Ha, K.-S.; Choi, C.-W.; Park, M.-G.

    2015-01-01

    Highlights: • Pool behaviors under asymmetrical conditions in an SFR were evaluated with MARS-LMR. • The Phenix asymmetry test was analyzed one-dimensionally and multi-dimensionally. • One-dimensional modeling has limitations in predicting the cold pool temperature. • Multi-dimensional modeling shows improved prediction of stratification and mixing. - Abstract: The understanding of complicated pool behaviors and their modeling is essential for the design and safety analysis of a pool-type Sodium-cooled Fast Reactor. One of the remarkable recent efforts in the study of pool thermal–hydraulic behaviors is the asymmetrical test performed as a part of the Phenix End-of-Life tests by the CEA. To evaluate the performance of the MARS-LMR code, a key system analysis tool for the design of an SFR in Korea, in predicting thermal–hydraulic behaviors under an asymmetrical condition, the Phenix asymmetry test is analyzed with MARS-LMR in the present study. Pool regions are modeled with two different approaches, one-dimensional and multi-dimensional, and the prediction results are analyzed to identify the appropriateness of each modeling method. The prediction with one-dimensional pool modeling shows a large deviation from the measured data at the early stage of the test, which suggests limitations in describing the complicated thermal–hydraulic phenomena. When the pool regions are modeled multi-dimensionally, the prediction gives considerably improved results. This improvement is explained by the enhanced modeling of pool mixing with the multi-dimensional approach. On the basis of the results from the present study, it is concluded that accurate modeling of pool thermal–hydraulics is a prerequisite for the evaluation of design performance and safety margin quantification in future SFR developments.

  2. In silico modeling predicts drug sensitivity of patient-derived cancer cells.

    Science.gov (United States)

    Pingle, Sandeep C; Sultana, Zeba; Pastorino, Sandra; Jiang, Pengfei; Mukthavaram, Rajesh; Chao, Ying; Bharati, Ila Sri; Nomura, Natsuko; Makale, Milan; Abbasi, Taher; Kapoor, Shweta; Kumar, Ansu; Usmani, Shahabuddin; Agrawal, Ashish; Vali, Shireen; Kesari, Santosh

    2014-05-21

    Glioblastoma (GBM) is an aggressive disease associated with poor survival. It is essential to account for the complexity of GBM biology to improve diagnostic and therapeutic strategies. This complexity is best represented by the increasing amounts of profiling ("omics") data available due to advances in biotechnology. The challenge of integrating these vast genomic and proteomic data can be addressed by a comprehensive systems modeling approach. Here, we present an in silico model, where we simulate GBM tumor cells using genomic profiling data. We use this in silico tumor model to predict responses of cancer cells to targeted drugs. Initially, we probed the results from a recent hypothesis-independent, empirical study by Garnett and co-workers that analyzed the sensitivity of hundreds of profiled cancer cell lines to 130 different anticancer agents. We then used the tumor model to predict sensitivity of patient-derived GBM cell lines to different targeted therapeutic agents. Among the drug-mutation associations reported in the Garnett study, our in silico model accurately predicted ~85% of the associations. While testing the model in a prospective manner using simulations of patient-derived GBM cell lines, we compared our simulation predictions with experimental data using the same cells in vitro. This analysis yielded a ~75% agreement of in silico drug sensitivity with in vitro experimental findings. These results demonstrate a strong predictability of our simulation approach using the in silico tumor model presented here. Our ultimate goal is to use this model to stratify patients for clinical trials. By accurately predicting responses of cancer cells to targeted agents a priori, this in silico tumor model provides an innovative approach to personalizing therapy and promises to improve clinical management of cancer.

  3. Predicting failure response of spot welded joints using recent extensions to the Gurson model

    DEFF Research Database (Denmark)

    Nielsen, Kim Lau

    2010-01-01

    The plug failure modes of resistance spot welded shear-lap and cross-tension test specimens are studied using recent extensions to the Gurson model. A comparison of the predicted mechanical response is presented when using either: (i) the Gurson-Tvergaard-Needleman model (GTN-model), or (ii) … The models are applied to predict failure of specimens containing a fully intact weld nugget as well as a partly removed weld nugget, to address the problems of shrinkage voids or larger weld defects. All analyses are carried out by full 3D finite element modelling.

  4. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies of prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent System Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays, and that smaller customers have larger variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of historical data suffice.
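The simple averaging baseline the record above credits with superior accuracy can be sketched as follows. This is an illustrative reconstruction with hypothetical data and function names, not the paper's implementation: each 15-min slot of the day is predicted as the mean of past observations in that slot.

```python
from collections import defaultdict

def averaging_baseline(history, horizon_slots):
    """Predict consumption per 15-min slot-of-day as the mean of past
    observations in that slot -- the kind of simple averaging model the
    study found hard to beat for D2R."""
    by_slot = defaultdict(list)
    for slot, kwh in history:
        by_slot[slot].append(kwh)
    return [sum(by_slot[s]) / len(by_slot[s]) for s in horizon_slots]

# Two days of (slot-of-day, kWh) observations for slots 0 and 1 (hypothetical).
history = [(0, 10), (0, 12), (1, 20), (1, 24)]
print(averaging_baseline(history, [0, 1]))  # → [11.0, 22.0]
```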

  5. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
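The ensemble meta-model idea above — combining a whole-genome predictor with a GWAMA risk score — can be illustrated with a least-squares blend of two predictors. This is a hedged sketch with made-up data; the authors' actual meta-model estimator may differ.

```python
def blend_weights(p1, p2, y):
    """Least-squares weights (a, b) for the meta-model y ≈ a*p1 + b*p2,
    solved from the 2x2 normal equations."""
    s11 = sum(x * x for x in p1)
    s22 = sum(x * x for x in p2)
    s12 = sum(u * v for u, v in zip(p1, p2))
    t1 = sum(u * v for u, v in zip(p1, y))
    t2 = sum(u * v for u, v in zip(p2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det

# Hypothetical: genomic predictor p1, GWAMA risk score p2, observed trait y.
p1, p2 = [1.0, 2.0, 3.0, 4.0], [1.0, 0.0, 2.0, 1.0]
y = [0.7 * a + 0.3 * b for a, b in zip(p1, p2)]
a, b = blend_weights(p1, p2, y)   # recovers (0.7, 0.3)
```

In practice the blend weights would be learned on held-out validation data so the meta-model does not overfit to either component.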

  6. Anatomical Cystocele Recurrence: Development and Internal Validation of a Prediction Model.

    Science.gov (United States)

    Vergeldt, Tineke F M; van Kuijk, Sander M J; Notten, Kim J B; Kluivers, Kirsten B; Weemhoff, Mirjam

    2016-02-01

    To develop a prediction model that estimates the risk of anatomical cystocele recurrence after surgery. The databases of two multicenter prospective cohort studies were combined, and we performed a retrospective secondary analysis of these data. Women undergoing an anterior colporrhaphy without mesh materials and without previous pelvic organ prolapse (POP) surgery completed a questionnaire, underwent translabial three-dimensional ultrasonography, and underwent staging of POP preoperatively and postoperatively. We developed a prediction model using multivariable logistic regression and internally validated it using standard bootstrapping techniques. The performance of the prediction model was assessed by computing indices of overall performance, discriminative ability, and calibration, and its clinical utility by computing test characteristics. Of 287 included women, 149 (51.9%) had anatomical cystocele recurrence. Factors included in the prediction model were assisted delivery, preoperative cystocele stage, number of compartments involved, major levator ani muscle defects, and levator hiatal area during Valsalva. Potential predictors that were excluded after backward elimination because of high P values were age, body mass index, number of vaginal deliveries, and family history of POP. The shrinkage factor resulting from the bootstrap procedure was 0.91. After correction for optimism, Nagelkerke's R² and the Brier score were 0.15 and 0.22, respectively, indicating satisfactory model fit. The area under the receiver operating characteristic curve of the prediction model was 71.6% (95% confidence interval 65.7-77.5); after correction for optimism it was 69.7%. This prediction model, including history of assisted delivery, preoperative stage, number of compartments, levator defects, and levator hiatus, estimates the risk of anatomical cystocele recurrence.
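The internal-validation step above — bootstrap estimation of optimism, then correction of the apparent performance — can be sketched in miniature. This is a generic Harrell-style illustration in which the "model fit" is simply selecting the best single predictor; it is not the authors' multivariable logistic model, and the data are invented.

```python
import random

def auc(scores, labels):
    """Area under the ROC curve by pairwise comparison (ties score 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def optimism_corrected_auc(predictors, y, n_boot=200, seed=1):
    """Refit (here: reselect the best predictor) on each bootstrap sample,
    average the optimism (bootstrap AUC minus original-data AUC of the
    refitted model), and subtract it from the apparent AUC."""
    rng = random.Random(seed)
    best = lambda cols, yy: max(range(len(cols)), key=lambda j: auc(cols[j], yy))
    apparent = auc(predictors[best(predictors, y)], y)
    n, optimism, used = len(y), 0.0, 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        y_b = [y[i] for i in idx]
        if len(set(y_b)) < 2:            # skip degenerate resamples
            continue
        cols_b = [[c[i] for i in idx] for c in predictors]
        j = best(cols_b, y_b)
        optimism += auc(cols_b[j], y_b) - auc(predictors[j], y)
        used += 1
    return apparent - optimism / used
```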

  7. Improved time series prediction with a new method for selection of model parameters

    International Nuclear Information System (INIS)

    Jade, A M; Jayaraman, V K; Kulkarni, B D

    2006-01-01

    A new method for model selection in the prediction of time series is proposed. Apart from the conventional criterion of minimizing RMS error, the method also minimizes the error on the distribution of singularities, evaluated through local Hölder estimates and their probability density spectrum. Predictions of two simulated and one real time series have been made using kernel principal component regression (KPCR), and the model parameters of KPCR have been selected employing the proposed as well as the conventional method. The results obtained demonstrate that the proposed method takes into account the sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of the unseen test data. (letter to the editor)

  8. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

    Full Text Available Soft computing techniques such as Artificial Neural Networks (ANN) have improved predictive capability and have found application in geotechnical engineering. The aim of this research is to utilize soft computing techniques and Multiple Regression Models (MLR) for forecasting the California Bearing Ratio (CBR) of soil from its index properties. The CBR of a soil can be predicted from various soil-characterizing parameters with the aid of MLR and ANN methods. The database was compiled by conducting laboratory tests on 86 soil samples gathered from different projects in Basrah districts. Data obtained from the experimental results were used in the regression models and in the soft computing technique using artificial neural networks. The liquid limit, plasticity index, modified compaction test and the CBR test have been determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the developed models were examined in terms of the regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was observed that all the proposed ANN models perform better than the MLR model. In particular, the ANN model with all input parameters yields better results than the other ANN models.
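The three fit indices used in the record above can be computed directly. A minimal sketch with hypothetical numbers; the definitions follow the usual conventions, which the paper may vary slightly:

```python
def mse(y, yhat):
    """Mean square error."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

def relative_error_pct(y, yhat):
    """Mean absolute relative error, in percent."""
    return 100.0 * sum(abs(a - b) / abs(a) for a, b in zip(y, yhat)) / len(y)

# Hypothetical measured vs predicted CBR values:
y, yhat = [2.0, 4.0, 6.0], [3.0, 4.0, 5.0]
```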

  9. Prediction of the semiscale blowdown heat transfer test S-02-8 (NRC Standard Problem Five)

    International Nuclear Information System (INIS)

    Fujita, N.; Irani, A.A.; Mecham, D.C.; Sawtelle, G.R.; Moore, K.V.

    1976-10-01

    Standard Problem Five was the prediction of test S-02-8 in the Semiscale Mod-1 experimental program. The Semiscale System is an electrically heated experiment designed to produce data on system performance typical of PWR thermal-hydraulic behavior. The RELAP4 program used for these analyses is a digital computer program developed to predict the thermal-hydraulic behavior of experimental systems and water-cooled nuclear reactors subjected to postulated transients. The RELAP4 predictions of Standard Problem Five were in good overall agreement with the measured hydraulic data. Sufficient experience has been gained with the Semiscale break configuration and the critical flow models in RELAP4 to accurately predict the break flow and, hence, the overall system depressurization. Generally, the hydraulic predictions are quite good in regions where homogeneity existed. Where separation effects occurred, the predictions are not as good, and the data oscillations and error bands are larger. A large discrepancy existed among the measured heater rod temperature data, as well as between these data and the predicted values. Several potential causes for these differences were considered, and several post-test analyses were performed in order to evaluate the discrepancies

  10. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four different models, each considering a different parameter combination, were developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9% respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones
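Of the weighting schemes named above, rank sum is the simplest to illustrate. A sketch with hypothetical ranks and ratings; the paper's actual parameter ranks and rating scales are not reproduced here:

```python
def rank_sum_weights(ranks):
    """Rank-sum weights: rank 1 = most important; the raw weight
    (n - r_j + 1) is normalised so the weights sum to one."""
    n = len(ranks)
    raw = [n - r + 1 for r in ranks]
    total = sum(raw)
    return [x / total for x in raw]

def hazard_index(ratings, weights):
    """Weighted overlay: sum of parameter rating times parameter weight."""
    return sum(r * w for r, w in zip(ratings, weights))

# Three parameters ranked 1..3, with hypothetical hazard ratings per cell:
w = rank_sum_weights([1, 2, 3])       # [0.5, 1/3, 1/6]
score = hazard_index([3, 2, 1], w)    # higher score = higher hazard
```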

  11. A new constitutive model for prediction of impact rates response of polypropylene

    Directory of Open Access Journals (Sweden)

    Buckley C.P.

    2012-08-01

    Full Text Available This paper proposes a new constitutive model for predicting the impact-rates response of polypropylene. Impact rates, as used here, refer to strain rates greater than 1000 1/s. The model is a physically based, three-dimensional constitutive model which incorporates the contributions of the amorphous, crystalline, pseudo-amorphous and entanglement networks to the constitutive response of polypropylene. The model mathematics is based on the well-known Glass-Rubber model originally developed for glassy polymers, but the arguments have herein been extended to semi-crystalline polymers. In order to predict the impact-rates behaviour of polypropylene, the model exploits the well-known framework of multiple-process yielding of polymers. This work argues that two dominant viscoelastic relaxation processes – the alpha- and beta-processes – can be associated with the yield responses of polypropylene observed in the low-rate-dominant and impact-rates-dominant loading regimes. Compression test data on polypropylene have been used to validate the model. The study found that the model predicts the experimentally observed nonlinear rate-dependent impact response of polypropylene quite well.

  12. Testing the atmospheric dispersion model of CSA N288.1 with site-specific data

    International Nuclear Information System (INIS)

    Chouhan, S.L.; Davis, P.A.

    2001-01-01

    The atmospheric dispersion component of CSA Standard N288.1, which provides guidelines for calculating derived release limits, has been tested. Long-term average concentrations of tritium in air were predicted using site-specific release rates and meteorological data and compared with measured concentrations at 43 monitoring sites at all CANDU stations in Canada. The predictions correlate well with the observations but were found to be conservative, overestimating by about 50% on average. The model overpredicted 84% of the time, with the highest prediction lying a factor of 5.5 above the corresponding observation. The model underpredicted the remaining 16% of the time, with the lowest prediction about one-half of the corresponding measurement. Possible explanations for this bias are discussed but no single reason appears capable of accounting for the discrepancy. Rather, the tendency to overprediction seems to result from the cumulative effects of a number of small conservatisms in the model. The model predictions were slightly better when site-specific meteorological data were used in the calculations in place of the default data of N288.1. Some large discrepancies between predictions and observations at specific monitoring sites suggest that it is the measurements rather than the model that are at fault. The testing has therefore provided a check on the observations as well as on the model. Recommendations on model use and data collection are made to improve the level of agreement between predictions and observations in the future. (author)
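The kind of predicted-to-observed comparison reported above (average overprediction, fraction of sites overpredicted, extreme ratios) can be sketched as follows, with made-up concentrations; the standard's own comparison procedure is not reproduced here:

```python
import math

def predicted_to_observed_stats(pred, obs):
    """Summaries of model bias from paired predictions and observations:
    geometric-mean bias, fraction of overpredictions, extreme P/O ratios."""
    ratios = [p / o for p, o in zip(pred, obs)]
    frac_over = sum(1 for r in ratios if r > 1) / len(ratios)
    gm_bias = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
    return gm_bias, frac_over, max(ratios), min(ratios)

# Hypothetical tritium-in-air concentrations (predicted, observed) at 3 sites:
gm, fo, hi, lo = predicted_to_observed_stats([2.0, 2.0, 1.0], [1.0, 2.0, 2.0])
```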

  13. Validated predictive modelling of the environmental resistome.

    Science.gov (United States)

    Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome.

  14. Hydrological-niche models predict water plant functional group distributions in diverse wetland types.

    Science.gov (United States)

    Deane, David C; Nicol, Jason M; Gehrig, Susan L; Harding, Claire; Aldridge, Kane T; Goodman, Abigail M; Brookes, Justin D

    2017-06-01

    Human use of water resources threatens environmental water supplies. If resource managers are to develop policies that avoid unacceptable ecological impacts, some means to predict ecosystem response to changes in water availability is necessary. This is difficult to achieve at spatial scales relevant for water resource management because of the high natural variability in ecosystem hydrology and ecology. Water plant functional groups classify species with similar hydrological niche preferences together, allowing a qualitative means to generalize community responses to changes in hydrology. We tested the potential of functional groups for making quantitative predictions of water plant distributions across diverse wetland types over a large geographical extent. We sampled wetlands covering a broad range of hydrogeomorphic and salinity conditions in South Australia, collecting both hydrological and floristic data from 687 quadrats across 28 wetland hydrological gradients. We built hydrological-niche models for eight water plant functional groups using a range of candidate models combining different surface inundation metrics. We then tested the predictive performance of top-ranked individual and averaged models for each functional group. Cross-validation showed that the models achieved acceptable predictive performance, with correct classification rates in the range 0.68-0.95. Model predictions can be made at any spatial scale for which hydrological data are available and could be implemented in a geographical information system. We show that the response of water plant functional groups to inundation is consistent enough across diverse wetland types to quantify the probability of hydrological impacts over regional spatial scales. © 2017 by the Ecological Society of America.
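Averaging over a ranked candidate model set, as described above, is commonly done with Akaike weights. A generic sketch; the paper's exact averaging rule and information criterion may differ:

```python
import math

def akaike_weights(aics):
    """Akaike weights: relative support for each candidate model,
    exp(-delta_i / 2) normalised over the candidate set."""
    deltas = [a - min(aics) for a in aics]
    raw = [math.exp(-0.5 * d) for d in deltas]
    total = sum(raw)
    return [r / total for r in raw]

def averaged_prediction(predictions, aics):
    """Model-averaged prediction: AIC-weighted mean of per-model outputs."""
    return sum(w * p for w, p in zip(akaike_weights(aics), predictions))

# Hypothetical: two candidate inundation models with AICs 100 and 102.
w = akaike_weights([100.0, 102.0])   # first model carries more weight
```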

  15. Field-based tests of geochemical modeling codes using New Zealand hydrothermal systems

    International Nuclear Information System (INIS)

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1994-06-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal fields suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions

  16. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks component-wise using a Poisson mixture regression model.
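The mixture idea can be illustrated with an intercept-only version: an EM fit of a two-component Poisson mixture. The paper's models additionally include regression covariates (and concomitant variables), which this hedged sketch with synthetic counts omits:

```python
import math

def pois_logpmf(k, lam):
    """Log of the Poisson pmf for count k and rate lam."""
    return -lam + k * math.log(lam) - math.lgamma(k + 1)

def em_two_poisson(data, n_iter=200):
    """EM for a two-component Poisson mixture (no covariates)."""
    lam = [min(data) + 0.5, max(data) + 0.5]   # crude initialisation
    pi = 0.5
    for _ in range(n_iter):
        # E-step: responsibility of component 0 for each count.
        resp = []
        for k in data:
            a = pi * math.exp(pois_logpmf(k, lam[0]))
            b = (1 - pi) * math.exp(pois_logpmf(k, lam[1]))
            resp.append(a / (a + b))
        # M-step: update mixing weight and component rates.
        n0 = sum(resp)
        pi = n0 / len(data)
        lam[0] = sum(r * k for r, k in zip(resp, data)) / n0
        lam[1] = sum((1 - r) * k for r, k in zip(resp, data)) / (len(data) - n0)
    return pi, lam

# Well-separated synthetic counts drawn around rates 2 and 15:
pi_hat, (lam_lo, lam_hi) = em_two_poisson(
    [1, 2, 2, 3, 1, 2, 3, 2, 14, 15, 16, 15, 14, 16])
```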

  18. Evaluating an Automated Number Series Item Generator Using Linear Logistic Test Models

    Directory of Open Access Journals (Sweden)

    Bao Sheng Loe

    2018-04-01

    Full Text Available This study investigates the item properties of a newly developed Automatic Number Series Item Generator (ANSIG). The foundation of the ANSIG is based on five hypothesised cognitive operators. Thirteen item models were developed using the numGen R package and eleven were evaluated in this study. The 16-item ICAR (International Cognitive Ability Resource) short-form ability test was used to evaluate construct validity. The Rasch Model and two Linear Logistic Test Models (LLTM) were employed to estimate and predict the item parameters. Results indicate that a single factor determines performance on tests composed of items generated by the ANSIG. Under the LLTM approach, all the cognitive operators were significant predictors of item difficulty. Moderate to high correlations were evident between the number series items and the ICAR test scores, with a high correlation found for the ICAR Letter-Numeric-Series items, suggesting adequate nomothetic span. Extended cognitive research is, nevertheless, essential for the automatic generation of an item pool with predictable psychometric properties.

  19. Long-term orbit prediction for China's Tiangong-1 spacecraft based on mean atmosphere model

    Science.gov (United States)

    Tang, Jingshi; Liu, Lin; Miao, Manqian

    Tiangong-1 is China's test module for a future space station. It went through three successful rendezvous and dockings with Shenzhou spacecraft from 2011 to 2013. For long-term management and maintenance, the orbit sometimes needs to be predicted over a long period of time. As Tiangong-1 works in a low-Earth orbit with an altitude of about 300-400 km, the error in the a priori atmosphere model contributes significantly to the rapid growth of the predicted orbit error. When the orbit is predicted for 10-20 days, the error in the a priori atmosphere model, if not properly corrected, can induce a semi-major axis error and an overall position error of up to a few kilometers and several thousand kilometers, respectively. In this work, we use a mean atmosphere model averaged from NRLMSIS00. The a priori reference mean density can be corrected during precise orbit determination (POD). For applications in long-term orbit prediction, the observations are first accumulated. With a sufficiently long period of observations, we are able to obtain a series of diurnal mean densities. This series captures the recent variation of the atmospheric density and can be analyzed for various periods. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. We show that the densities predicted with this approach serve to increase the accuracy of the predicted orbit. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 600 km.
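The fit-and-extrapolate step for the diurnal mean densities can be sketched with a plain linear trend. The actual work analyzes and fits periodic components as well, so this is a simplified illustration on hypothetical values:

```python
def fit_linear_trend(t, x):
    """Closed-form least-squares fit of the line x ≈ a + b*t."""
    n = len(t)
    tbar, xbar = sum(t) / n, sum(x) / n
    b = (sum((ti - tbar) * (xi - xbar) for ti, xi in zip(t, x))
         / sum((ti - tbar) ** 2 for ti in t))
    return xbar - b * tbar, b

# Hypothetical diurnal mean densities (arbitrary units) over 4 days:
a, b = fit_linear_trend([0.0, 1.0, 2.0, 3.0], [10.0, 12.0, 14.0, 16.0])
density_day10 = a + b * 10.0   # extrapolated mean density for day 10
```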

  20. Calibration plots for risk prediction models in the presence of competing risks.

    Science.gov (United States)

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-08-15

    A predicted risk of 17% can be called reliable if the event can be expected to occur in about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three frequently encountered practical problems when the aim is to display or test whether a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with all three problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values, combined with a nearest neighborhood smoother and a cross-validation approach. Copyright © 2014 John Wiley & Sons, Ltd.
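Jackknife pseudo-values are central to the proposal above. For an uncensored binary outcome and the sample mean they reduce exactly to the individual outcomes, which allows a small sanity check; with right censoring one would replace the mean by, for example, the Aalen-Johansen estimator, which this sketch omits:

```python
def jackknife_pseudo_values(estimator, sample):
    """Pseudo-value i: n * theta_hat - (n - 1) * theta_hat_without_i."""
    n = len(sample)
    theta = estimator(sample)
    return [n * theta - (n - 1) * estimator(sample[:i] + sample[i + 1:])
            for i in range(n)]

mean = lambda xs: sum(xs) / len(xs)
events = [1, 0, 1, 1, 0]                     # uncensored event indicators
pv = jackknife_pseudo_values(mean, events)   # ≈ [1.0, 0.0, 1.0, 1.0, 0.0]
```

The pseudo-values can then be regressed or smoothed against the predicted risks to draw a calibration curve.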

  1. Quality by control: Towards model predictive control of mammalian cell culture bioprocesses.

    Science.gov (United States)

    Sommeregger, Wolfgang; Sissolak, Bernhard; Kandra, Kulwant; von Stosch, Moritz; Mayer, Martin; Striedner, Gerald

    2017-07-01

    The industrial production of complex biopharmaceuticals using recombinant mammalian cell lines is still mainly built on a quality by testing approach, which is represented by fixed process conditions and extensive testing of the end-product. In 2004 the FDA launched the process analytical technology initiative, aiming to guide the industry towards advanced process monitoring and better understanding of how critical process parameters affect the critical quality attributes. Implementation of process analytical technology into the bio-production process enables moving from the quality b