WorldWideScience

Sample records for model predictions measurements

  1. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimating mean radon exposure in the Swiss population: model-based predictions at the individual level and measurement-based predictions based on measurements aggregated at the municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows prediction of radon levels at specific sites, which is needed in an epidemiological study, and its results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVMs) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters have been set manually, which cannot guarantee the model's performance. In this paper, an SVM method based on an improved particle swarm optimization algorithm (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to escape local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and the glowworm swarm optimization algorithm (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performance. The experimental results show that among the three tested algorithms the NAPSO-SVM method achieves better prediction precision and smaller prediction errors, and that it is an effective method for predicting the dynamic measurement errors of sensors.
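
A minimal sketch of the idea behind NAPSO-SVM, not the authors' code: a small PSO loop with simulated-annealing acceptance and natural-selection restarts tunes an SVR's C and gamma against held-out RMSE. The bounds, swarm settings, cooling schedule, and lagged-error features are all illustrative assumptions.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)

    def rmse_of(params, Xtr, ytr, Xval, yval):
        """Validation RMSE of an SVR with candidate hyper-parameters (log10 scale)."""
        logC, loggamma = params
        model = SVR(C=10.0**logC, gamma=10.0**loggamma).fit(Xtr, ytr)
        return float(np.sqrt(mean_squared_error(yval, model.predict(Xval))))

    def napso_svr(Xtr, ytr, Xval, yval, n_particles=16, n_iter=25):
        lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])  # assumed log10 bounds
        pos = rng.uniform(lo, hi, (n_particles, 2))
        vel = np.zeros_like(pos)
        cost = np.array([rmse_of(p, Xtr, ytr, Xval, yval) for p in pos])
        pbest, pbest_cost = pos.copy(), cost.copy()
        g = pbest[pbest_cost.argmin()].copy()
        temperature = 1.0
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, 1))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
            pos = np.clip(pos + vel, lo, hi)
            cost = np.array([rmse_of(p, Xtr, ytr, Xval, yval) for p in pos])
            # Simulated-annealing acceptance: occasionally adopt a worse personal
            # best so particles can escape local optima.
            accept = (cost < pbest_cost) | (
                rng.random(n_particles) < np.exp(-(cost - pbest_cost) / temperature))
            pbest[accept], pbest_cost[accept] = pos[accept], cost[accept]
            # Natural selection: the worst quarter restarts near the global best.
            worst = np.argsort(pbest_cost)[-n_particles // 4:]
            pos[worst] = np.clip(g + 0.1 * rng.standard_normal((len(worst), 2)), lo, hi)
            g = pbest[pbest_cost.argmin()].copy()
            temperature *= 0.9                    # cooling schedule
        logC, loggamma = g
        return SVR(C=10.0**logC, gamma=10.0**loggamma).fit(Xtr, ytr)

    # Example with a synthetic dynamic-error sequence and lagged-error features:
    t = np.linspace(0, 10, 300)
    e = np.sin(t) + 0.1 * rng.standard_normal(t.size)     # "measured" sensor error
    X = np.column_stack([e[:-3], e[1:-2], e[2:-1]])
    y = e[3:]
    best_model = napso_svr(X[:200], y[:200], X[200:], y[200:])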

  3. Predicted and measured velocity distribution in a model heat exchanger

    International Nuclear Information System (INIS)

    Rhodes, D.B.; Carlucci, L.N.

    1984-01-01

    This paper presents a comparison between numerical predictions, using the porous media concept, and measurements of the two-dimensional isothermal shell-side velocity distributions in a model heat exchanger. Computations and measurements were done with and without tubes present in the model. The effect of tube-to-baffle leakage was also investigated. The comparison was made to validate certain porous media concepts used in a computer code being developed to predict the detailed shell-side flow in a wide range of shell-and-tube heat exchanger geometries

  4. Model Predictive Control of Wind Turbines using Uncertain LIDAR Measurements

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Soltani, Mohsen; Poulsen, Niels Kjølstad

    2013-01-01

The problem of model predictive control (MPC) of wind turbines using uncertain LIDAR (LIght Detection And Ranging) measurements is considered. A nonlinear dynamical model of the wind turbine is obtained. We linearize the obtained nonlinear model for different operating points, which are determined …, we simplify state prediction for the MPC. Consequently, the control problem of the nonlinear system is simplified into a quadratic programming problem. We consider uncertainty in the wind propagation time, which is the traveling time of wind from the LIDAR measurement point to the rotor. An algorithm based on wind speed estimation and measurements from the LIDAR is devised to find an estimate of the delay and compensate for it before it is used in the controller. Comparisons between the MPC with error compensation, the MPC without error compensation and an MPC with re-linearization at each sample point …

  5. Measurement error and timing of predictor values for multivariable risk prediction models are poorly reported.

    Science.gov (United States)

    Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D

    2018-05-18

Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error, and the intended moment of model use was extracted. Susceptibility to measurement error for each predictor was classified as low or high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as at high risk of error; however, this was not accounted for in the model development. Only 8 (24.2%) studies explicitly stated both the intended moment of model use and when the predictors were measured. Reporting of measurement error and the intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential and whether accounting for the error will improve the predictions. Copyright © 2018. Published by Elsevier Inc.

  6. Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data

    Science.gov (United States)

    Dahl, Milo D.; Sharpe, Jacob A.

    2014-01-01

    A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.

  7. Ion current prediction model considering columnar recombination in alpha radioactivity measurement using ionized air transportation

    International Nuclear Information System (INIS)

    Naito, Susumu; Hirata, Yosuke; Izumi, Mikio; Sano, Akira; Miyamoto, Yasuaki; Aoyama, Yoshio; Yamaguchi, Hiromi

    2007-01-01

    We present a reinforced ion current prediction model in alpha radioactivity measurement using ionized air transportation. Although our previous model explained the qualitative trend of the measured ion current values, the absolute values of the theoretical curves were about two times as large as the measured values. In order to accurately predict the measured values, we reinforced our model by considering columnar recombination and turbulent diffusion, which affects columnar recombination. Our new model explained the considerable ion loss in the early stage of ion diffusion and narrowed the gap between the theoretical and measured values. The model also predicted suppression of ion loss due to columnar recombination by spraying a high-speed air flow near a contaminated surface. This suppression was experimentally investigated and confirmed. In conclusion, we quantitatively clarified the theoretical relation between alpha radioactivity and ion current in laminar flow and turbulent pipe flow. (author)

  8. Wideband impedance measurements and modeling of DC motors for EMI predictions

    NARCIS (Netherlands)

    Diouf, F.; Leferink, Frank Bernardus Johannes; Duval, Fabrice; Bensetti, Mohamed

    2015-01-01

In electromagnetic interference prediction, dc motors are usually modeled as a source and a series impedance. Previous research only includes the impedance of the armature, while neglecting the effect of the motor's rotation. This paper aims at measuring and modeling the wideband impedance of a dc …

  9. Review and evaluation of performance measures for survival prediction models in external validation settings

    Directory of Open Access Journals (Sweden)

    M. Shafiqur Rahman

    2017-04-01

Background: When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. Methods: An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Results: Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell’s concordance measure, which tended to increase as censoring increased. Conclusions: We recommend that Uno’s concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller’s measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston’s D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings and recommended to report routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive …
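
The recommended measures are implemented in standard software. A minimal sketch on synthetic data, with scikit-survival as an assumed library choice, contrasts Harrell's concordance, which drifts upward under heavy censoring, with Uno's IPCW-weighted version:

    import numpy as np
    from sksurv.util import Surv
    from sksurv.linear_model import CoxPHSurvivalAnalysis
    from sksurv.metrics import concordance_index_censored, concordance_index_ipcw

    rng = np.random.default_rng(1)
    n = 400
    X = rng.standard_normal((n, 3))
    t_event = rng.exponential(np.exp(-X @ np.array([0.8, -0.5, 0.3])))
    t_cens = rng.exponential(2.0, n)                  # moderate censoring
    y = Surv.from_arrays(event=t_event <= t_cens, time=np.minimum(t_event, t_cens))

    Xd, yd, Xv, yv = X[:300], y[:300], X[300:], y[300:]
    risk = CoxPHSurvivalAnalysis().fit(Xd, yd).predict(Xv)   # higher = riskier

    # Harrell's c: simple, but tends to rise as censoring gets heavier.
    harrell = concordance_index_censored(yv["event"], yv["time"], risk)[0]
    # Uno's c: inverse-probability-of-censoring weighted, robust to moderate censoring.
    uno = concordance_index_ipcw(y, yv, risk, tau=np.percentile(yv["time"], 80))[0]
    print(f"Harrell c = {harrell:.3f}, Uno c = {uno:.3f}")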

  10. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudo-range and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs, respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An overestimation of IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations
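
For context, the delay the abstract describes follows the standard first-order ionospheric relation ΔL ≈ 40.3·TEC/f² (metres, with TEC in electrons per square metre and f in Hz); a small worked example with illustrative TEC values:

    # First-order ionospheric group delay: dL = 40.3 * TEC / f**2 metres; the
    # carrier phase is advanced by the same magnitude. 1 TECU = 1e16 el/m^2.
    TECU = 1e16
    f_L1, f_L2 = 1575.42e6, 1227.60e6      # GPS carrier frequencies, Hz

    def group_delay_m(tec_tecu, f_hz):
        return 40.3 * tec_tecu * TECU / f_hz**2

    for tec in (5.0, 50.0):                # quiet vs. solar-maximum-like slant TEC
        print(f"{tec:4.0f} TECU -> {group_delay_m(tec, f_L1):.3f} m at L1, "
              f"{group_delay_m(tec, f_L2):.3f} m at L2")
    # The L1/L2 delay difference is what lets dual-frequency receivers estimate TEC.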

  11. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

… The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons … In the current work we show how the correction for measurement error in GBSeq can also be applied in whole-genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data …

  12. ISOL yield predictions from holdup-time measurements

    International Nuclear Information System (INIS)

    Spejewski, Eugene H.; Carter, H Kennon; Mervin, Brenden T.; Prettyman, Emily S.; Kronenberg, Andreas; Stracener, Daniel W

    2008-01-01

    A formalism based on a simple model is derived to predict ISOL yields for all isotopes of a given element based on a holdup-time measurement of a single isotope of that element. Model predictions, based on parameters obtained from holdup-time measurements, are compared to independently-measured experimental values
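
One simple form such a formalism can take (an assumed single-exponential holdup model, not necessarily the authors' exact one): a holdup-time measurement on one isotope fixes the element's release time constant tau, and the released fraction for any other isotope with decay constant lambda is then eps(lambda) = 1/(1 + lambda*tau).

    import numpy as np

    def release_fraction(lam, tau):
        """Fraction of atoms surviving decay during an exponential holdup.

        Assumes release-time pdf p(t) = (1/tau) * exp(-t/tau); then
        eps = integral of p(t) * exp(-lam * t) dt = 1 / (1 + lam * tau).
        """
        return 1.0 / (1.0 + lam * tau)

    # Calibrate tau from one measured isotope of the element (illustrative numbers):
    lam_ref = np.log(2) / 1.0          # reference isotope half-life: 1 s
    eps_ref = 0.40                     # measured released fraction
    tau = (1.0 / eps_ref - 1.0) / lam_ref

    # Predict released fractions for other isotopes of the same element:
    for half_life in (0.1, 0.5, 2.0, 10.0):    # seconds
        lam = np.log(2) / half_life
        print(f"T1/2 = {half_life:5.1f} s -> released fraction "
              f"{release_fraction(lam, tau):.2f}")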

  13. A Comparison Between Measured and Predicted Hydrodynamic Damping for a Jack-Up Rig Model

    DEFF Research Database (Denmark)

    Laursen, Thomas; Rohbock, Lars; Jensen, Jørgen Juncher

    1996-01-01

An extensive set of measurements funded by the EU project Large Scale Facilities Program has been carried out on a model of a jack-up rig at the Danish Hydraulic Institute. The test series were conducted by MSC and include determination of base shears and overturning moments in both regular … methods. In the comparison between the model test results and the theoretical predictions, the hydrodynamic damping proves to be the most important uncertain parameter. It is shown that a relatively large hydrodynamic damping must be assumed in the theoretical calculations in order to predict the measured …

  14. Ion mobilities in diatomic gases: measurement versus prediction with non-specular scattering models.

    Science.gov (United States)

    Larriba, Carlos; Hogan, Christopher J

    2013-05-16

    Ion/electrical mobility measurements of nanoparticles and polyatomic ions are typically linked to particle/ion physical properties through either application of the Stokes-Millikan relationship or comparison to mobilities predicted from polyatomic models, which assume that gas molecules scatter specularly and elastically from rigid structural models. However, there is a discrepancy between these approaches; when specular, elastic scattering models (i.e., elastic-hard-sphere scattering, EHSS) are applied to polyatomic models of nanometer-scale ions with finite-sized impinging gas molecules, predictions are in substantial disagreement with the Stokes-Millikan equation. To rectify this discrepancy, we developed and tested a new approach for mobility calculations using polyatomic models in which non-specular (diffuse) and inelastic gas-molecule scattering is considered. Two distinct semiempirical models of gas-molecule scattering from particle surfaces were considered. In the first, which has been traditionally invoked in the study of aerosol nanoparticles, 91% of collisions are diffuse and thermally accommodating, and 9% are specular and elastic. In the second, all collisions are considered to be diffuse and accommodating, but the average speed of the gas molecules reemitted from a particle surface is 8% lower than the mean thermal speed at the particle temperature. Both scattering models attempt to mimic exchange between translational, vibrational, and rotational modes of energy during collision, as would be expected during collision between a nonmonoatomic gas molecule and a nonfrozen particle surface. The mobility calculation procedure was applied considering both hard-sphere potentials between gas molecules and the atoms within a particle and the long-range ion-induced dipole (polarization) potential. Predictions were compared to previous measurements in air near room temperature of multiply charged poly(ethylene glycol) (PEG) ions, which range in morphology from

  15. Assessing the performance of prediction models: a framework for traditional and novel measures

    DEFF Research Database (Denmark)

    Steyerberg, Ewout W; Vickers, Andrew J; Cook, Nancy R

    2010-01-01

    The performance of prediction models can be assessed using a variety of methods and metrics. Traditional measures for binary and survival outcomes include the Brier score to indicate overall model performance, the concordance (or c) statistic for discriminative ability (or area under the receiver...

  16. Assessing the performance of prediction models: A framework for traditional and novel measures

    NARCIS (Netherlands)

    E.W. Steyerberg (Ewout); A.J. Vickers (Andrew); N.R. Cook (Nancy); T.A. Gerds (Thomas); M. Gonen (Mithat); N. Obuchowski (Nancy); M. Pencina (Michael); M.W. Kattan (Michael)

    2010-01-01

The performance of prediction models can be assessed using a variety of methods and metrics. Traditional measures for binary and survival outcomes include the Brier score to indicate overall model performance, the concordance (or c) statistic for discriminative ability (or area under the …

  17. Hydrological model parameter dimensionality is a weak measure of prediction uncertainty

    Science.gov (United States)

    Pande, S.; Arkesteijn, L.; Savenije, H.; Bastidas, L. A.

    2015-04-01

This paper shows that the instability of a hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.

  18. Comparison of Echo 7 field line length measurements to magnetospheric model predictions

    International Nuclear Information System (INIS)

    Nemzek, R.J.; Winckler, J.R.; Malcolm, P.R.

    1992-01-01

The Echo 7 sounding rocket experiment injected electron beams on central tail field lines near L = 6.5. Numerous injections returned to the payload as conjugate echoes after mirroring in the southern hemisphere. The authors compare field line lengths calculated from measured conjugate echo bounce times and energies to predictions made by integrating electron trajectories through various magnetospheric models: the Olson-Pfitzer Quiet and Dynamic models and the Tsyganenko-Usmanov model. Although Kp at launch was 3-, quiet-time magnetic models best fit the echo measurements. Geosynchronous satellite magnetometer measurements near the Echo 7 field lines during the flight were best modeled by the Olson-Pfitzer Dynamic model and the Tsyganenko-Usmanov model for Kp = 3. The discrepancy between the models that best fit the Echo 7 data and those that fit the satellite data was most likely due to uncertainties in the small-scale configuration of the magnetospheric models. The field line length measured by the conjugate echoes showed some temporal variation in the magnetic field, also indicated by the satellite magnetometers. This demonstrates the utility that an Echo-style experiment could have in substorm studies

  19. Prediction impact curve is a new measure integrating intervention effects in the evaluation of risk models.

    Science.gov (United States)

    Campbell, William; Ganna, Andrea; Ingelsson, Erik; Janssens, A Cecile J W

    2016-01-01

    We propose a new measure of assessing the performance of risk models, the area under the prediction impact curve (auPIC), which quantifies the performance of risk models in terms of their average health impact in the population. Using simulated data, we explain how the prediction impact curve (PIC) estimates the percentage of events prevented when a risk model is used to assign high-risk individuals to an intervention. We apply the PIC to the Atherosclerosis Risk in Communities (ARIC) Study to illustrate its application toward prevention of coronary heart disease. We estimated that if the ARIC cohort received statins at baseline, 5% of events would be prevented when the risk model was evaluated at a cutoff threshold of 20% predicted risk compared to 1% when individuals were assigned to the intervention without the use of a model. By calculating the auPIC, we estimated that an average of 15% of events would be prevented when considering performance across the entire interval. We conclude that the PIC is a clinically meaningful measure for quantifying the expected health impact of risk models that supplements existing measures of model performance. Copyright © 2016 Elsevier Inc. All rights reserved.
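
In outline, the PIC at a cutoff is the share of all events occurring in the above-cutoff group multiplied by the intervention's relative risk reduction, and the auPIC averages this across cutoffs. A synthetic-data sketch (the 25% risk reduction is an assumed statin-like effect, not the paper's estimate):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    risk = rng.beta(2, 8, n)              # predicted risks (synthetic cohort)
    event = rng.random(n) < risk          # outcomes consistent with those risks
    rrr = 0.25                            # assumed relative risk reduction of treatment

    def pic(cutoff):
        """Percent of all events prevented if everyone above `cutoff` is treated."""
        treated = risk >= cutoff
        return 100.0 * rrr * event[treated].sum() / event.sum()

    print(f"PIC at a 20% cutoff: {pic(0.20):.1f}% of events prevented")

    # auPIC: the average PIC over the whole cutoff interval.
    cutoffs = np.linspace(0.0, risk.max(), 200)
    print(f"auPIC: {np.mean([pic(c) for c in cutoffs]):.1f}%")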

  20. Measurements and IRI Model Predictions During the Recent Solar Minimum

    Science.gov (United States)

    Bilitza, Dieter; Brown, Steven A.; Wang, Mathew Y.; Souza, Jonas R.; Roddy, Patrick A.

    2012-01-01

Cycle 23 was exceptional in that it lasted almost two years longer than its predecessors and in that it ended in an extended minimum period that proved all predictions wrong. Comparisons of the International Reference Ionosphere (IRI) with CHAMP and GRACE in-situ measurements of electron density during the minimum have revealed significant discrepancies at 400-500 km altitude. Our study investigates the causes of these discrepancies with the help of ionosonde and Planar Langmuir Probe (PLP) data from the Communications/Navigation Outage Forecasting System (C/NOFS) satellite. Our C/NOFS comparisons confirm the earlier CHAMP and GRACE results. But the ionosonde measurements of the F-peak plasma frequency (foF2) show generally good agreement throughout the whole solar cycle. At mid-latitude stations yearly averages of the data-model difference are within 10%, and at low-latitude stations within 20%. The 60-70% differences found at 400-500 km altitude are not seen at the F peak. We will discuss how these seemingly contradictory results from the ionosonde and in situ data-model comparisons can be explained and which parameters need to be corrected in the IRI model.

  21. Comparison of the predictions of two road dust emission models with the measurements of a mobile van

    Science.gov (United States)

    Kauhaniemi, M.; Stojiljkovic, A.; Pirjola, L.; Karppinen, A.; Härkönen, J.; Kupiainen, K.; Kangas, L.; Aarnio, M. A.; Omstedt, G.; Denby, B. R.; Kukkonen, J.

    2014-09-01

    The predictions of two road dust suspension emission models were compared with the on-site mobile measurements of suspension emission factors. Such a quantitative comparison has not previously been reported in the reviewed literature. The models used were the Nordic collaboration model NORTRIP (NOn-exhaust Road TRaffic Induced Particle emissions) and the Swedish-Finnish FORE model (Forecasting Of Road dust Emissions). These models describe particulate matter generated by the wear of road surface due to traction control methods and processes that control the suspension of road dust particles into the air. An experimental measurement campaign was conducted using a mobile laboratory called SNIFFER, along two selected road segments in central Helsinki in 2007 and 2008. The suspended PM10 concentration was measured behind the left rear tyre and the street background PM10 concentration in front of the van. Both models reproduced the measured seasonal variation of suspension emission factors fairly well during both years at both measurement sites. However, both models substantially under-predicted the measured emission values. The article illustrates the challenges in conducting road suspension measurements in densely trafficked urban conditions, and the numerous requirements for input data that are needed for accurately applying road suspension emission models.

  22. When Theory Meets Data: Comparing Model Predictions Of Hillslope Sediment Size With Field Measurements.

    Science.gov (United States)

    Mahmoudi, M.; Sklar, L. S.; Leclere, S.; Davis, J. D.; Stine, A.

    2017-12-01

    The size distributions of sediment produced on hillslopes and supplied to river channels influence a wide range of fluvial processes, from bedrock river incision to the creation of aquatic habitats. However, the factors that control hillslope sediment size are poorly understood, limiting our ability to predict sediment size and model the evolution of sediment size distributions across landscapes. Recently separate field and theoretical investigations have begun to address this knowledge gap. Here we compare the predictions of several emerging modeling approaches to landscapes where high quality field data are available. Our goals are to explore the sensitivity and applicability of the theoretical models in each field context, and ultimately to provide a foundation for incorporating hillslope sediment size into models of landscape evolution. The field data include published measurements of hillslope sediment size from the Kohala peninsula on the island of Hawaii and tributaries to the Feather River in the northern Sierra Nevada mountains of California, and an unpublished data set from the Inyo Creek catchment of the southern Sierra Nevada. These data are compared to predictions adapted from recently published modeling approaches that include elements of topography, geology, structure, climate and erosion rate. Predictive models for each site are built in ArcGIS using field condition datasets: DEM topography (slope, aspect, curvature), bedrock geology (lithology, mineralogy), structure (fault location, fracture density), climate data (mean annual precipitation and temperature), and estimates of erosion rates. Preliminary analysis suggests that models may be finely tuned to the calibration sites, particularly when field conditions most closely satisfy model assumptions, leading to unrealistic predictions from extrapolation. We suggest a path forward for developing a computationally tractable method for incorporating spatial variation in production of hillslope

  23. Comparing predicted estrogen concentrations with measurements in US waters

    International Nuclear Information System (INIS)

    Kostich, Mitch; Flick, Robert; Martinson, John

    2013-01-01

The range of exposure rates to the steroidal estrogens estrone (E1), beta-estradiol (E2), estriol (E3), and ethinyl estradiol (EE2) in the aquatic environment was investigated by modeling estrogen introduction via municipal wastewater from sewage plants across the US. Model predictions were compared to published measured concentrations. Predictions were congruent with most of the measurements, but a few measurements of E2 and EE2 exceed those that would be expected from the model, despite very conservative model assumptions of no degradation or in-stream dilution. Although some extreme measurements for EE2 may reflect analytical artifacts, the remaining data suggest concentrations of E2 and EE2 may reach twice the 99th percentile predicted from the model. The model and the bulk of the measurement data both suggest that cumulative exposure rates to humans are consistently low relative to effect levels, but also suggest that fish exposures to E1, E2, and EE2 sometimes substantially exceed chronic no-effect levels. Highlights: • Conservatively modeled steroidal estrogen concentrations in ambient water. • Found reasonable agreement between model and published measurements. • Model and measurements agree that risks to humans are remote. • Model and measurements agree significant questions remain about risk to fish. • Need better understanding of temporal variations and their impact on fish. Our model and published measurements for estrogens suggest aquatic exposure rates for humans are below potential effect levels, but fish exposure sometimes exceeds published no-effect levels
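
The conservative screening logic of such models can be written in a few lines; all numbers below (per-capita excretion, in-plant removal, flows) are illustrative placeholders, not the study's values:

    # Conservative predicted concentration downstream of a sewage plant:
    # PEC = population * per-capita load * (1 - removal) / effluent flow,
    # with no in-stream degradation or dilution (worst-case assumptions).
    per_capita_ug_day = {"E1": 10.0, "E2": 3.0, "E3": 15.0, "EE2": 1.0}
    removal_in_plant = {"E1": 0.90, "E2": 0.95, "E3": 0.95, "EE2": 0.80}
    population = 50_000
    effluent_m3_day = 20_000.0                 # ~400 L per person per day

    for est, load in per_capita_ug_day.items():
        pec_ng_L = (population * load * (1 - removal_in_plant[est])
                    / effluent_m3_day)          # ug/m3 is numerically equal to ng/L
        print(f"{est}: {pec_ng_L:.1f} ng/L")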

  24. Surface tensions of multi-component mixed inorganic/organic aqueous systems of atmospheric significance: measurements, model predictions and importance for cloud activation predictions

    Directory of Open Access Journals (Sweden)

    D. O. Topping

    2007-01-01

In order to predict the physical properties of aerosol particles, it is necessary to adequately capture the behaviour of the ubiquitous complex organic components. One of the key properties which may affect this behaviour is the contribution of the organic components to the surface tension of aqueous particles in the moist atmosphere. Whilst the qualitative effect of organic compounds on solution surface tensions has been widely reported, our quantitative understanding on mixed organic and mixed inorganic/organic systems is limited. Furthermore, it is unclear whether models that exist in the literature can reproduce the surface tension variability for binary and higher order multi-component organic and mixed inorganic/organic systems of atmospheric significance. The current study aims to resolve both issues to some extent. Surface tensions of single and multiple solute aqueous solutions were measured and compared with predictions from a number of model treatments. On comparison with binary organic systems, two predictive models found in the literature provided a range of values resulting from sensitivity to calculations of pure component surface tensions. Results indicate that a fitted model can capture the variability of the measured data very well, producing the lowest average percentage deviation for all compounds studied. The performance of the other models varies with compound and choice of model parameters. The behaviour of ternary mixed inorganic/organic systems was unreliably captured by using a predictive scheme and this was dependent on the composition of the solutes present. For more atmospherically representative higher order systems, entirely predictive schemes performed poorly. It was found that use of the binary data in a relatively simple mixing rule, or modification of an existing thermodynamic model with parameters derived from binary data, was able to accurately capture the surface tension variation with concentration. Thus …
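
A schematic of the kind of simple mole-fraction mixing rule referred to, with made-up binary surface tension fits standing in for the measured ones:

    # Schematic mixing rule: surface tension of a multi-component aqueous solution
    # from binary (single-solute) fits evaluated at the total solute concentration,
    # weighted by solute mole fraction. The linear fits are illustrative stand-ins.
    SIGMA_WATER = 72.0                                   # mN/m near room temperature
    slope = {"NaCl": +1.6, "succinic acid": -3.5, "fulvic acid": -8.0}

    def sigma_binary(solute, conc):
        """Illustrative binary fit sigma_i(c) = sigma_water + k_i * c."""
        return SIGMA_WATER + slope[solute] * conc

    def sigma_mixture(moles, conc_total):
        total = sum(moles.values())
        return sum((n / total) * sigma_binary(s, conc_total)
                   for s, n in moles.items())

    mix = {"NaCl": 0.5, "succinic acid": 0.3, "fulvic acid": 0.2}   # mole amounts
    print(f"{sigma_mixture(mix, conc_total=2.0):.1f} mN/m at 2 mol/kg total solute")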

  25. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which are often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  26. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

Background: Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods: The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results: Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion: Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
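
The comparison is straightforward to reproduce in outline; the sketch below (synthetic data, not the study's code) fits a full-covariate logistic model and a propensity-score-only model, then compares discrimination (AUC, equivalent to the concordance index for binary outcomes) and Brier score:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, brier_score_loss
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n = 5000
    X = rng.standard_normal((n, 5))
    treat = rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))  # confounded
    logit = -1.0 + X @ np.array([0.8, -0.6, 0.4, 0.0, 0.3]) - 0.5 * treat
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    Xtr, Xte, ttr, tte, ytr, yte = train_test_split(X, treat, y, random_state=0)

    # Full model: treatment plus all covariates.
    full = LogisticRegression().fit(np.column_stack([Xtr, ttr]), ytr)
    p_full = full.predict_proba(np.column_stack([Xte, tte]))[:, 1]

    # Propensity-score model: treatment plus the estimated propensity score only.
    ps_model = LogisticRegression().fit(Xtr, ttr)
    ps_tr, ps_te = ps_model.predict_proba(Xtr)[:, 1], ps_model.predict_proba(Xte)[:, 1]
    ps = LogisticRegression().fit(np.column_stack([ttr, ps_tr]), ytr)
    p_ps = ps.predict_proba(np.column_stack([tte, ps_te]))[:, 1]

    for name, p in (("full covariate", p_full), ("propensity score", p_ps)):
        print(f"{name}: AUC={roc_auc_score(yte, p):.3f}, "
              f"Brier={brier_score_loss(yte, p):.3f}")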

  27. Comparative Study of foF2 Measurements with IRI-2007 Model Predictions During Extended Solar Minimum

    Science.gov (United States)

    Zakharenkova, I. E.; Krankowski, A.; Bilitza, D.; Cherniak, Iu.V.; Shagimuratov, I.I.; Sieradzki, R.

    2013-01-01

The unusually deep and extended solar minimum of cycle 23/24 made it very difficult to predict the solar indices 1 or 2 years into the future. Most of the predictions were proven wrong by the actual observed indices. IRI gets its solar, magnetic, and ionospheric indices from an indices file that is updated twice a year. In recent years, due to the unusual solar minimum, predictions had to be corrected downward with every new indices update. In this paper we analyse how much the uncertainties in the predictability of solar activity indices affect the IRI output, and how the IRI values calculated with predicted and observed indices compare to the actual measurements. Monthly median values of the F2 layer critical frequency (foF2) derived from ionosonde measurements at the mid-latitude ionospheric station Juliusruh were compared with International Reference Ionosphere (IRI-2007) model predictions. The analysis found that IRI provides reliable results that compare well with actual measurements when the definite (observed and adjusted) indices of solar activity are used, while IRI values based on earlier predictions of these indices noticeably overestimated the measurements during the solar minimum. One of the principal objectives of this paper is to direct the attention of IRI users to the need to update their solar activity indices files regularly. Use of an older index file can lead to serious IRI overestimations of F-region electron density during the recent extended solar minimum.

  28. Uncertainty Quantification and Comparison of Weld Residual Stress Measurements and Predictions.

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, John R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g. confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.
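
The heart of the approach can be sketched as a functional bootstrap over profiles; the residual-stress curves below are synthetic stand-ins for the measured and predicted profiles in the study:

    import numpy as np

    rng = np.random.default_rng(4)
    depth = np.linspace(0.0, 1.0, 50)          # normalized through-wall depth

    # Synthetic stand-ins: a few measured and predicted residual-stress profiles (MPa).
    def synth(n_profiles, bias):
        base = 150 * np.cos(2 * np.pi * depth)             # common through-wall shape
        return base + bias + 30 * rng.standard_normal((n_profiles, depth.size))

    measured, predicted = synth(6, 0.0), synth(7, 20.0)

    # Bootstrap the mean measured-minus-predicted difference profile.
    B = 2000
    diffs = np.empty((B, depth.size))
    for b in range(B):
        m = measured[rng.integers(0, len(measured), len(measured))].mean(axis=0)
        p = predicted[rng.integers(0, len(predicted), len(predicted))].mean(axis=0)
        diffs[b] = m - p

    lo, hi = np.percentile(diffs, [2.5, 97.5], axis=0)     # pointwise 95% band
    # Where the band excludes zero, measurement and model differ by more than
    # resampling noise at that depth.
    print("band excludes zero at", int(((lo > 0) | (hi < 0)).sum()),
          "of", depth.size, "depths")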

  29. In reactor measurements, modeling and assessments to predict liquid injection shutdown system nozzle to Calandria tube time to contact

    International Nuclear Information System (INIS)

    Kirstein, K.; Kalenchuk, D.

    2011-01-01

Over the past few years there has been an expanding effort to assess the potential for Calandria Tubes (CTs) coming into contact with Liquid Injection Shutdown System (LISS) Nozzles, to ensure continued contact-free operation as required by CSA N285.4. LISS Nozzles (LINs), which run perpendicular to and between rows of fuel channels, sag at a slower rate than the fuel channels. As a result, certain LINs may come into contact with the CTs above them. The CT/LIN gaps can be predicted from calculated CT sag, LIN sag and a number of component and installation tolerances. This method, however, results in very conservative predictions when compared to measurements, as confirmed by the in-reactor measurements initiated in 2000, when gaps were successfully measured for the first time using images obtained from a camera-assisted measurement tool inserted into the calandria. To reduce the conservatism of the CT/LIN gap predictions, statistical CT/LIN gap models are used instead. They are derived from a comparison between calculated gaps based on nominal dimensions and the visual-image-based measured gaps. These reactor-specific (typically 95% confidence level) CT/LIN gap models account for all uncertainties and deviations from nominal values. Prediction error margins decrease as more in-reactor gap measurements become available. Each year more measurements are being made using this standardized visual CT/LIN proximity method. The subsequently prepared reactor-specific models have been used to provide times to contact for every channel above the LINs at these stations. In a number of cases this has been used to demonstrate that the reactor can be operated to its end of life before refurbishment with no predicted contact, or specific at-risk channels have been identified for which appropriate remedial actions could be implemented in a planned manner. (author)

  30. Combining Satellite Measurements and Numerical Flood Prediction Models to Save Lives and Property from Flooding

    Science.gov (United States)

    Saleh, F.; Garambois, P. A.; Biancamaria, S.

    2017-12-01

Floods are considered the major natural threat to human societies across all continents. The consequences of floods in highly populated areas are more dramatic, with losses of human lives and substantial property damage. This risk is projected to increase with the effects of climate change, particularly sea-level rise, increasing storm frequencies and intensities, and growing population and economic assets in such urban watersheds. Despite advances in computational resources and modeling techniques, significant gaps exist in predicting complex processes and accurately representing the initial state of the system. Improving flood prediction models and data assimilation chains through satellite data has become an absolute priority to produce accurate flood forecasts with sufficient lead times. The overarching goal of this work is to assess the benefits of Surface Water and Ocean Topography (SWOT) satellite data from a flood prediction perspective. The near-real-time methodology is based on combining satellite data from a simulator that mimics the future SWOT data, numerical models, high-resolution elevation data and real-time local measurements in the New York/New Jersey area.

  31. Observational attachment theory-based parenting measures predict children's attachment narratives independently from social learning theory-based measures.

    Science.gov (United States)

    Matias, Carla; O'Connor, Thomas G; Futh, Annabel; Scott, Stephen

    2014-01-01

Conceptually and methodologically distinct models exist for assessing the quality of parent-child relationships, but few studies contrast competing models or assess their overlap in predicting developmental outcomes. Using observational methodology, the current study examined the distinctiveness of attachment theory-based and social learning theory-based measures of parenting in predicting two key measures of child adjustment: security of attachment narratives and social acceptance in peer nominations. A total of 113 5-6-year-old children from ethnically diverse families participated. Parent-child relationships were rated using standard paradigms. Measures derived from attachment theory included sensitive responding and mutuality; measures derived from social learning theory included positive attending, directives, and criticism. Child outcomes were independently rated attachment narrative representations and peer nominations. Results indicated that attachment theory-based and social learning theory-based measures were modestly correlated; nonetheless, parent-child mutuality predicted secure child attachment narratives independently of social learning theory-based measures; in contrast, criticism predicted peer-nominated fighting independently of attachment theory-based measures. In young children, there is some evidence that attachment theory-based measures may be particularly predictive of attachment narratives; however, no single model of measuring parent-child relationships is likely to best predict multiple developmental outcomes. Assessment in research and applied settings may benefit from integration of different theoretical and methodological paradigms.

  32. Predicting Document Retrieval System Performance: An Expected Precision Measure.

    Science.gov (United States)

    Losee, Robert M., Jr.

    1987-01-01

    Describes an expected precision (EP) measure designed to predict document retrieval performance. Highlights include decision theoretic models; precision and recall as measures of system performance; EP graphs; relevance feedback; and computing the retrieval status value of a document for two models, the Binary Independent Model and the Two Poisson…

  33. PIV-measured versus CFD-predicted flow dynamics in anatomically realistic cerebral aneurysm models.

    Science.gov (United States)

    Ford, Matthew D; Nikolov, Hristo N; Milner, Jaques S; Lownie, Stephen P; Demont, Edwin M; Kalata, Wojciech; Loth, Francis; Holdsworth, David W; Steinman, David A

    2008-04-01

Computational fluid dynamics (CFD) modeling of nominally patient-specific cerebral aneurysms is increasingly being used as a research tool to further understand the development, prognosis, and treatment of brain aneurysms. We have previously developed virtual angiography to indirectly validate CFD-predicted gross flow dynamics against routinely acquired digital subtraction angiograms. Toward a more direct validation, here we compare detailed, CFD-predicted velocity fields against those measured using particle image velocimetry (PIV). Two anatomically realistic flow-through phantoms, one a giant internal carotid artery (ICA) aneurysm and the other a basilar artery (BA) tip aneurysm, were constructed of a clear silicone elastomer. The phantoms were placed within a computer-controlled flow loop, programmed with representative flow rate waveforms. PIV images were collected on several anterior-posterior (AP) and lateral (LAT) planes. CFD simulations were then carried out using a well-validated, in-house solver, based on micro-CT reconstructions of the geometries of the flow-through phantoms and inlet/outlet boundary conditions derived from flow rates measured during the PIV experiments. PIV and CFD results from the central AP plane of the ICA aneurysm showed a large stable vortex throughout the cardiac cycle. Complex vortex dynamics, captured by PIV and CFD, persisted throughout the cardiac cycle on the central LAT plane. Velocity vector fields showed good overall agreement. For the BA aneurysm, agreement was more compelling, with both PIV and CFD similarly resolving the dynamics of counter-rotating vortices on both AP and LAT planes. Despite the imposition of periodic flow boundary conditions for the CFD simulations, cycle-to-cycle fluctuations were evident in the BA aneurysm simulations, which agreed well, in terms of both amplitudes and spatial distributions, with cycle-to-cycle fluctuations measured by PIV in the same geometry. The overall good agreement …

  34. An instantaneous spatiotemporal model to predict a bicyclist's Black Carbon exposure based on mobile noise measurements

    Science.gov (United States)

    Dekoninck, Luc; Botteldooren, Dick; Int Panis, Luc

    2013-11-01

Several studies have shown that a significant amount of daily air pollution exposure, in particular to Black Carbon (BC), is inhaled during trips. Assessing this contribution to exposure remains difficult because, on the one hand, local air pollution maps lack spatio-temporal resolution and, on the other hand, direct measurement of particulate matter concentrations remains expensive. This paper proposes to use in-traffic noise measurements in combination with geographical and meteorological information for predicting BC exposure during commuting trips. Mobile noise measurements are cheaper and easier to perform than mobile air pollution measurements and can easily be used in participatory sensing campaigns. The uniqueness of the proposed model lies in the choice of noise indicators, which goes beyond the traditional overall A-weighted noise level used in previous work. Noise and BC exposures are both related to traffic intensity, but also to traffic speed and traffic dynamics. Inspired by theoretical knowledge of the emission of noise and BC, the low-frequency engine-related noise and the difference between high-frequency and low-frequency noise, which indicates the traffic speed, are introduced in the model. In addition, it is shown that splitting BC into a local and a background component significantly improves the model. The coefficients of the proposed model are extracted from 200 commuter bicycle trips. The predicted average exposure over a single trip correlates with measurements with a Pearson coefficient of 0.78, using only four parameters: the low-frequency noise level, wind speed, the difference between high- and low-frequency noise, and a street canyon index expressing local air pollution dispersion properties.
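
The resulting model structure is essentially a four-predictor regression; a hedged sketch with synthetic data (variable names and coefficients are assumptions, not the paper's fitted values):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    n = 200                                     # one row per bicycle trip

    lf_noise = rng.normal(65, 5, n)             # low-frequency (engine) level, dB
    hf_lf_diff = rng.normal(5, 2, n)            # high-minus-low frequency level, dB
    wind = rng.gamma(2.0, 1.5, n)               # wind speed, m/s
    canyon = rng.uniform(0, 1, n)               # street-canyon index (dispersion proxy)

    # Synthetic "truth" loosely following the paper's reasoning: more engine noise
    # and canyon confinement raise BC; wind and higher speeds (HF-LF) dilute it.
    bc = (0.12 * lf_noise - 0.3 * hf_lf_diff - 0.4 * wind + 2.0 * canyon
          + rng.normal(0, 0.8, n))

    X = np.column_stack([lf_noise, hf_lf_diff, wind, canyon])
    fit = LinearRegression().fit(X, bc)
    r = np.corrcoef(fit.predict(X), bc)[0, 1]
    print("Pearson r between predicted and 'measured' trip-average BC:", round(r, 2))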

  35. Measurement and modelling of noise emission of road vehicles for use in prediction models

    Energy Technology Data Exchange (ETDEWEB)

    Jonasson, H.G.

    2000-07-01

The road vehicle as a sound source has been studied within a wide frequency range. Well-defined measurements have been carried out on moving and stationary vehicles. Measurement results have been checked against theoretical simulations. A Nordtest measurement method to obtain input data for prediction methods has been proposed and tested in four different countries. The effective sound source of a car has its centre close to the nearest wheels. For trucks this centre seems to be closer to the centre of the vehicle. The vehicle as a sound source is directional in both the vertical and the horizontal plane. The difference between SEL and L_pFmax during a pass-by varies with frequency. At low frequencies, interference effects between correlated sources may be the problem. At high frequencies, the directivity of tyre/road noise affects the result. The time at which L_pFmax is obtained varies with frequency. Thus traditional maximum measurements are not suitable for frequency-band applications. The measurements support the fact that the tyre/road noise source is very low. Measurements on a stationary vehicle indicate that the engine source is also very low. Engine noise is screened by the body of the car. The ground attenuation, also at short distances, will be significant whenever low microphone positions are used and there is some 'soft' ground in between. Unless all measurements are restricted to propagation over 'hard' surfaces only, it is necessary to use rather high microphone positions. The Nordtest method proposed will yield a reproducibility standard deviation of 1-3 dB depending on frequency; high frequencies are more accurate. In order to get accurate results at low frequencies, large numbers of vehicles are required. Determining the sound power level from pass-by measurements requires a proper source and propagation model. As these models may change, it is recommended to measure and report both SEL and L_pFmax normalized to a specified distance.

  36. Predictive Modeling of a Paradigm Mechanical Cooling Tower Model: II. Optimal Best-Estimate Results with Reduced Predicted Uncertainties

    Directory of Open Access Journals (Sweden)

    Ruixian Fang

    2016-09-01

This work uses the adjoint sensitivity model of the counter-flow cooling tower derived in the accompanying PART I to obtain the expressions and relative numerical rankings of the sensitivities, to all model parameters, of the following model responses: (i) outlet air temperature; (ii) outlet water temperature; (iii) outlet water mass flow rate; and (iv) air outlet relative humidity. These sensitivities are subsequently used within the “predictive modeling for coupled multi-physics systems” (PM_CMPS) methodology to obtain explicit formulas for the predicted optimal nominal values for the model responses and parameters, along with reduced predicted standard deviations for the predicted model parameters and responses. These explicit formulas embody the assimilation of experimental data and the “calibration” of the model’s parameters. The results presented in this work demonstrate that the PM_CMPS methodology reduces the predicted standard deviations to values that are smaller than either the computed or the experimentally measured ones, even for responses (e.g., the outlet water flow rate) for which no measurements are available. These improvements stem from the global characteristics of the PM_CMPS methodology, which combines all of the available information simultaneously in phase-space, as opposed to combining it sequentially, as in current data assimilation procedures.

  37. A model for predicting lung cancer response to therapy

    International Nuclear Information System (INIS)

    Seibert, Rebecca M.; Ramsey, Chester R.; Hines, J. Wesley; Kupelian, Patrick A.; Langen, Katja M.; Meeks, Sanford L.; Scaperoth, Daniel D.

    2007-01-01

Purpose: Volumetric computed tomography (CT) images acquired by image-guided radiation therapy (IGRT) systems can be used to measure tumor response over the course of treatment. Predictive adaptive therapy is a novel treatment technique that uses volumetric IGRT data to actively predict the future tumor response to therapy during the first few weeks of IGRT treatment. The goal of this study was to develop and test a model for predicting lung tumor response during IGRT treatment using serial megavoltage CT (MVCT). Methods and Materials: Tumor responses were measured for 20 lung cancer lesions in 17 patients that were imaged and treated with helical tomotherapy with doses ranging from 2.0 to 2.5 Gy per fraction. Five patients were treated with concurrent chemotherapy, and 1 patient was treated with neoadjuvant chemotherapy. Tumor response to treatment was retrospectively measured by contouring 480 serial MVCT images acquired before treatment. A nonparametric, memory-based locally weighted regression (LWR) model was developed for predicting tumor response using the retrospective tumor response data. This model predicts future tumor volumes and the associated confidence intervals based on limited observations during the first 2 weeks of treatment. The predictive accuracy of the model was tested using a leave-one-out cross-validation technique with the measured tumor responses. Results: The predictive algorithm was used to compare predicted versus measured tumor volume response for all 20 lesions. The average error for the predictions of the final tumor volume was 12%, with the true volumes always bounded by the 95% confidence interval. The greatest model uncertainty occurred near the middle of the course of treatment, where the tumor response relationships were more complex, the model had less information, and the predictors were more varied. The optimal days for measuring the tumor response on the MVCT images were elapsed Days 1, 2, 5, 9, 11, 12, 17, and 18 during …
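
A minimal sketch of memory-based locally weighted regression in this spirit (synthetic response curves, not the clinical data): future volumes for a new lesion are kernel-weighted averages over a library of historical trajectories, matched on the first two weeks of observations, with the weighted spread giving a rough confidence band.

    import numpy as np

    rng = np.random.default_rng(6)
    days = np.arange(0, 40, 2)                 # imaging days over the treatment course

    # Library of historical tumor-response curves (exponential shrinkage, synthetic).
    rates = rng.uniform(0.01, 0.06, 30)
    library = np.exp(-np.outer(rates, days))   # relative volume, one row per lesion

    def lwr_predict(early_obs, early_idx, bandwidth=0.05):
        """Kernel-weighted average of historical curves, matched on early data."""
        d2 = ((library[:, early_idx] - early_obs) ** 2).mean(axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))
        w /= w.sum()
        mean = w @ library                                 # predicted trajectory
        var = w @ (library - mean) ** 2                    # weighted spread
        return mean, mean - 1.96 * np.sqrt(var), mean + 1.96 * np.sqrt(var)

    # New lesion observed for the first 2 weeks (indices 0..6), then predicted onward.
    truth = np.exp(-0.035 * days)
    pred, lo, hi = lwr_predict(truth[:7] + rng.normal(0, 0.01, 7), np.arange(7))
    print("predicted final relative volume: %.2f (95%% band %.2f-%.2f), true %.2f"
          % (pred[-1], lo[-1], hi[-1], truth[-1]))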

  18. Copula based prediction models: an application to an aortic regurgitation study

    Directory of Open Access Journals (Sweden)

    Shoukri Mohamed M

    2007-06-01

    Full Text Available Abstract Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls and hence application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = -0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to a normal distribution. Therefore predictions made from the correlation-based model corresponding to the pre-operative ejection fraction measurements in the lower range may not be accurate. Further it is found that the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula based prediction model is estimated as: Post-operative ejection fraction = -0.0933 + 0
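
    A minimal sketch of the simulation step, assuming a Clayton copula (one member of the Archimedean family) sampled by the conditional-inverse method, with gamma marginals as suggested by the q-q plots; the dependence parameter theta and the gamma shapes/scales are illustrative, not the study's fitted values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def sample_clayton(n, theta):
    """Sample (u, v) from a Clayton copula via the conditional-inverse method."""
    u = rng.uniform(size=n)
    t = rng.uniform(size=n)
    v = (u ** (-theta) * (t ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u, v

# Hypothetical gamma marginals for pre/post-operative ejection fraction.
u, v = sample_clayton(2000, theta=2.0)
pre = stats.gamma.ppf(u, a=30.0, scale=0.02)   # mean ~0.6
post = stats.gamma.ppf(v, a=35.0, scale=0.02)  # mean ~0.7
print(np.corrcoef(pre, post)[0, 1])            # induced dependence
```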

  19. Comparisons Between Model Predictions and Spectral Measurements of Charged and Neutral Particles on the Martian Surface

    Science.gov (United States)

    Kim, Myung-Hee Y.; Cucinotta, Francis A.; Zeitlin, Cary; Hassler, Donald M.; Ehresmann, Bent; Rafkin, Scot C. R.; Wimmer-Schweingruber, Robert F.; Boettcher, Stephan; Boehm, Eckart; Guo, Jingnan; et al.

    2014-01-01

    Detailed measurements of the energetic particle radiation environment on the surface of Mars have been made by the Radiation Assessment Detector (RAD) on the Curiosity rover since August 2012. RAD is a particle detector that measures the energy spectrum of charged particles (10 to approx. 200 MeV/u) and high-energy neutrons (approx. 8 to 200 MeV). The data obtained on the surface of Mars for 300 sols are compared to simulation results using the Badhwar-O'Neill galactic cosmic ray (GCR) environment model and the high-charge and energy transport (HZETRN) code. For the nuclear interactions of primary GCR through the Martian atmosphere and the Curiosity rover, the quantum multiple scattering theory of nuclear fragmentation (QMSFRG) is used. For describing the daily column depth of the atmosphere, daily atmospheric pressure measurements at Gale Crater by the MSL Rover Environmental Monitoring Station (REMS) are implemented into the transport calculations. Particle flux at RAD after traversing varying depths of atmosphere depends on the slant angles, and the model accounts for shielding of the RAD "E" dosimetry detector by the rest of the instrument. Detailed comparisons between model predictions and spectral data of various particle types provide the validation of radiation transport models, and suggest that future radiation environments on Mars can be predicted accurately. These contributions lend support to the understanding of radiation health risks to astronauts for the planning of various mission scenarios.

  20. A prediction model for assessing residential radon concentration in Switzerland

    International Nuclear Information System (INIS)

    Hauri, Dimitri D.; Huss, Anke; Zimmermann, Frank; Kuehni, Claudia E.; Röösli, Martin

    2012-01-01

    Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the nationwide Swiss radon database collected between 1994 and 2004. Of these, 80% randomly selected measurements were used for model development and the remaining 20% for an independent model validation. A multivariable log-linear regression model was fitted and relevant predictors selected according to evidence from the literature, the adjusted R², the Akaike information criterion (AIC), and the Bayesian information criterion (BIC). The prediction model was evaluated by calculating the Spearman rank correlation between measured and predicted values. Additionally, the predicted values were categorised into three categories (below the 50th, 50th–90th, and above the 90th percentile) and compared with measured categories using a weighted Kappa statistic. The most relevant predictors for indoor radon levels were tectonic units and year of construction of the building, followed by soil texture, degree of urbanisation, floor of the building where the measurement was taken and housing type (P-values <0.001 for all). Mean predicted radon values (geometric mean) were 66 Bq/m³ (interquartile range 40–111 Bq/m³) in the lowest exposure category, 126 Bq/m³ (69–215 Bq/m³) in the medium category, and 219 Bq/m³ (108–427 Bq/m³) in the highest category. The Spearman correlation between predictions and measurements was 0.45 (95%-CI: 0.44; 0.46) for the development dataset and 0.44 (95%-CI: 0.42; 0.46) for the validation dataset. Kappa coefficients were 0.31 for the development and 0.30 for the validation dataset, respectively. The model explained 20% of the overall variability (adjusted R²). In conclusion, this residential radon prediction model, based on a large number of measurements, was demonstrated to be
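
    The validation logic (80/20 split, Spearman correlation, three exposure categories compared with a weighted kappa) can be sketched on synthetic data as below; the dummy predictors and coefficients are assumptions standing in for tectonic unit, building age, and the other covariates of the real model.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the covariates and log radon concentrations.
n = 1000
X = rng.normal(size=(n, 4))
log_radon = 4.2 + X @ np.array([0.5, 0.3, -0.2, 0.1]) + rng.normal(0, 0.8, n)

train, test = slice(0, 800), slice(800, None)       # 80/20 split
model = LinearRegression().fit(X[train], log_radon[train])
pred = model.predict(X[test])

rho, _ = stats.spearmanr(pred, log_radon[test])

def categorise(x):
    # Three categories: below median, median to 90th percentile, above 90th.
    return np.digitize(x, np.percentile(x, [50, 90]))

kappa = cohen_kappa_score(categorise(pred), categorise(log_radon[test]),
                          weights="linear")
print(f"Spearman rho = {rho:.2f}, weighted kappa = {kappa:.2f}")
```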

  1. Construction of Models for Nondestructive Prediction of Ingredient Contents in Blueberries by Near-infrared Spectroscopy Based on HPLC Measurements.

    Science.gov (United States)

    Bai, Wenming; Yoshimura, Norio; Takayanagi, Masao; Che, Jingai; Horiuchi, Naomi; Ogiwara, Isao

    2016-06-28

    Nondestructive prediction of the ingredient contents of farm products is useful for shipping and selling the products with guaranteed qualities. Here, near-infrared spectroscopy is used to nondestructively predict the total sugar, total organic acid, and total anthocyanin contents of each blueberry. The technique is expected to enable the selection of only delicious blueberries from all harvested ones. The near-infrared absorption spectra of blueberries are measured in diffuse reflectance mode at positions away from the calyx. The ingredient contents of a blueberry determined by high-performance liquid chromatography are used to construct models to predict the ingredient contents from observed spectra. Partial least squares regression is used for the construction of the models. It is necessary to properly select the pretreatments for the observed spectra and the wavelength regions of the spectra used for the analyses. Validations are necessary for the constructed models to confirm that the ingredient contents are predicted with practical accuracies. Here we present a protocol to construct and validate the models for nondestructive prediction of ingredient contents in blueberries by near-infrared spectroscopy.
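
    A minimal sketch of the regression step, assuming scikit-learn's PLSRegression with 10-fold cross-validation on synthetic spectra; the number of components, the spectra, and the response are illustrative, and real use would add the spectral pretreatments and wavelength selection discussed above.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)

# Synthetic NIR spectra (rows) and an HPLC-measured response (total sugar).
n_samples, n_wavelengths = 60, 200
spectra = rng.normal(size=(n_samples, n_wavelengths))
total_sugar = spectra[:, 50] * 2.0 + spectra[:, 120] + rng.normal(0, 0.3, n_samples)

pls = PLSRegression(n_components=5)
predicted = cross_val_predict(pls, spectra, total_sugar, cv=10).ravel()
rmsecv = np.sqrt(np.mean((predicted - total_sugar) ** 2))
print(f"RMSECV = {rmsecv:.3f}")  # cross-validated prediction error
```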

  2. Cross-Validation of Aerobic Capacity Prediction Models in Adolescents.

    Science.gov (United States)

    Burns, Ryan Donald; Hannon, James C; Brusseau, Timothy A; Eisenman, Patricia A; Saint-Maurice, Pedro F; Welk, Greg J; Mahar, Matthew T

    2015-08-01

    Cardiorespiratory endurance is a component of health-related fitness. FITNESSGRAM recommends the Progressive Aerobic Cardiovascular Endurance Run (PACER) or One-Mile Run/Walk (1MRW) to assess cardiorespiratory endurance by estimating VO2 Peak. No research has cross-validated prediction models from both the PACER and 1MRW, including the New PACER Model and PACER-Mile Equivalent (PACER-MEQ), using current standards. The purpose of this study was to cross-validate prediction models from the PACER and 1MRW against measured VO2 Peak in adolescents. Cardiorespiratory endurance data were collected on 90 adolescents aged 13-16 years (Mean = 14.7 ± 1.3 years; 32 girls, 52 boys) who completed the PACER and 1MRW in addition to a laboratory maximal treadmill test to measure VO2 Peak. Multiple correlations among the various models with measured VO2 Peak were considered moderately strong (R = 0.74-0.78), and prediction error (RMSE) ranged from 5.95 to 8.27 ml·kg⁻¹·min⁻¹. Criterion-referenced agreement with FITNESSGRAM's Healthy Fitness Zones was considered fair-to-good among models (Kappa = 0.31-0.62; Agreement = 75.5-89.9%; F = 0.08-0.65). In conclusion, the prediction models demonstrated moderately strong linear relationships with measured VO2 Peak, fair prediction error, and fair-to-good criterion-referenced agreement with measured VO2 Peak for classification into FITNESSGRAM's Healthy Fitness Zones.

  3. On the Predictiveness of Single-Field Inflationary Models

    CERN Document Server

    Burgess, C.P.; Trott, Michael

    2014-01-01

    We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for $A_s$, $r$ and $n_s$ are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in prin...

  4. A national prediction model for PM2.5 component exposures and measurement error-corrected health effect inference.

    Science.gov (United States)

    Bergen, Silas; Sheppard, Lianne; Sampson, Paul D; Kim, Sun-Young; Richards, Mark; Vedal, Sverre; Kaufman, Joel D; Szpiro, Adam A

    2013-09-01

    Studies estimating health effects of long-term air pollution exposure often use a two-stage approach: building exposure models to assign individual-level exposures, which are then used in regression analyses. This requires accurate exposure modeling and careful treatment of exposure measurement error. To illustrate the importance of accounting for exposure model characteristics in two-stage air pollution studies, we considered a case study based on data from the Multi-Ethnic Study of Atherosclerosis (MESA). We built national spatial exposure models that used partial least squares and universal kriging to estimate annual average concentrations of four PM2.5 components: elemental carbon (EC), organic carbon (OC), silicon (Si), and sulfur (S). We predicted PM2.5 component exposures for the MESA cohort and estimated cross-sectional associations with carotid intima-media thickness (CIMT), adjusting for subject-specific covariates. We corrected for measurement error using recently developed methods that account for the spatial structure of predicted exposures. Our models performed well, with cross-validated R2 values ranging from 0.62 to 0.95. Naïve analyses that did not account for measurement error indicated statistically significant associations between CIMT and exposure to OC, Si, and S. EC and OC exhibited little spatial correlation, and the corrected inference was unchanged from the naïve analysis. The Si and S exposure surfaces displayed notable spatial correlation, resulting in corrected confidence intervals (CIs) that were 50% wider than the naïve CIs, but that were still statistically significant. The impact of correcting for measurement error on health effect inference is concordant with the degree of spatial correlation in the exposure surfaces. Exposure model characteristics must be considered when performing two-stage air pollution epidemiologic analyses because naïve health effect inference may be inappropriate.

  5. Comparison of secondhand smoke exposure measures during pregnancy in the development of a clinical prediction model for small-for-gestational-age among non-smoking Chinese pregnant women.

    Science.gov (United States)

    Xie, Chuanbo; Wen, Xiaozhong; Niu, Zhongzheng; Ding, Peng; Liu, Tao; He, Yanhui; Lin, Jianmiao; Yuan, Shixin; Guo, Xiaoling; Jia, Deqin; Chen, Weiqing

    2015-10-01

    To compare the predictive values for small-for-gestational-age (SGA) of different measures of secondhand smoke (SHS) exposure during pregnancy, and to develop and validate a prediction model for SGA using SHS exposure along with sociodemographic and pregnancy factors. We compared the predictability of different measures of SHS exposure during pregnancy for SGA among 545 Chinese pregnant women, and then used the optimal SHS measure along with other clinically available factors to develop and validate a prediction model for SGA. We fit logistic regression models to predict SGA by single measures of SHS exposure (self-report, serum cotinine and CYP2A6*4) and different combinations (self-report+cotinine, cotinine+CYP2A6*4, self-report+CYP2A6*4 and self-report+cotinine+CYP2A6*4). We found that self-reported SHS exposure alone predicted SGA (area under the receiver operating characteristic curve, AUROC, 0.578) better than the other two single measures (cotinine, 0.547; CYP2A6*4, 0.529) or as accurately as the combined SHS measures (0.545-0.584). The final prediction model, which contained self-reported SHS exposure, prepregnancy body mass index, gestational weight gain velocity during the second and third trimesters, gestational diabetes, gestational hypertension and the third-trimester biparietal diameter Z-score, could predict SGA fairly accurately (AUROC, 0.698). Self-reported SHS exposure in the peribirth period performs better in predicting SGA than a single serum cotinine measure at the same time, although repeated biochemical cotinine assessments throughout pregnancy may be optimal. Our simple prediction model is fairly accurate and can potentially be used in routine prenatal care. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
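
    A sketch of the AUROC comparison on synthetic data; the prevalence, effect sizes, and in-sample evaluation below are simplifications of the study's design, and the variable effects are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Synthetic SHS exposure measures and SGA outcome for 545 "subjects".
n = 545
self_report = rng.integers(0, 2, n)                  # reported exposure yes/no
cotinine = rng.lognormal(0.0, 1.0, n)                # serum biomarker
risk = -2.0 + 0.4 * self_report + 0.1 * np.log(cotinine)
sga = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-risk))

for name, X in [("self-report", self_report.reshape(-1, 1)),
                ("cotinine", np.log(cotinine).reshape(-1, 1)),
                ("both", np.column_stack([self_report, np.log(cotinine)]))]:
    p = LogisticRegression().fit(X, sga).predict_proba(X)[:, 1]
    print(name, f"AUROC = {roc_auc_score(sga, p):.3f}")
```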

  6. Comparison of predicted and measured variations of indoor radon concentration

    International Nuclear Information System (INIS)

    Arvela, H.; Voutilainen, A.; Maekelaeinen, I.; Castren, O.; Winqvist, K.

    1988-01-01

    Predictions of the variations of indoor radon concentration were calculated using a model relating indoor radon concentration to radon entry rate, air infiltration and meteorological factors. These calculated variations have been compared with the seasonal variations of 33 houses during 1-4 years, with the winter-summer concentration ratios of 300 houses, and with the measured diurnal variation. In houses with a slab in ground contact, the measured seasonal variations are quite often in agreement with the variations predicted for nearly pure pressure-difference-driven flow. The contribution of a diffusion source is significant in houses with large porous concrete walls against the ground. Air flow due to seasonally variable thermal convection within eskers strongly affects the seasonal variations within houses located thereon. Measured and predicted winter-summer concentration ratios demonstrate that, on average, the ratio is a function of radon concentration. The ratio increases with increasing winter concentration. According to the model, the diurnal maximum caused by a pressure-difference-driven flow occurs in the morning, a finding which is in agreement with the measurements. The model presented can be used for differentiating between factors affecting radon entry into houses. (author)

  7. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
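
    A minimal sketch of bootstrap (bagged) prediction under a deliberately misspecified model, assuming a polynomial plug-in fit; the paper's setting is more general, and the data and degree here are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def bagged_predict(x, y, x_new, n_boot=500, degree=2):
    """Average plug-in predictions over models fitted to bootstrap resamples."""
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))      # bootstrap resample
        coef = np.polyfit(x[idx], y[idx], degree)  # plug-in fit
        preds.append(np.polyval(coef, x_new))
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)   # bagged mean and spread

x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 30)  # misspecified for a quadratic
mean, sd = bagged_predict(x, y, np.array([0.25, 0.9]))
print(mean, sd)
```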

  8. Predicting FLDs Using a Multiscale Modeling Scheme

    Science.gov (United States)

    Wu, Z.; Loy, C.; Wang, E.; Hegadekatte, V.

    2017-09-01

    The measurement of a single forming limit diagram (FLD) requires significant resources and is time consuming. We have developed a multiscale modeling scheme to predict FLDs using a combination of limited laboratory testing, crystal plasticity (VPSC) modeling, and dual sequential-stage finite element (ABAQUS/Explicit) modeling with the Marciniak-Kuczynski (M-K) criterion to determine the limit strain. We have established a means to work around existing limitations in ABAQUS/Explicit by using an anisotropic yield locus (e.g., BBC2008) in combination with the M-K criterion. We further apply a VPSC model to reduce the number of laboratory tests required to characterize the anisotropic yield locus. In the present work, we show that the predicted FLD is in excellent agreement with the measured FLD for AA5182 in the O temper. Instead of 13 different tests as for a traditional FLD determination within Novelis, our technique uses just four measurements: tensile properties in three orientations; plane strain tension; biaxial bulge; and the sheet crystallographic texture. The turnaround time is consequently far less than for the traditional laboratory measurement of the FLD.

  9. The prediction of BRDFs from surface profile measurements

    International Nuclear Information System (INIS)

    Church, E.L.; Takacs, P.Z.; Leonard, T.A.

    1989-01-01

    This paper discusses methods of predicting the BRDF of smooth surfaces from profile measurements of their surface finish. The conversion of optical profile data to the BRDF at the same wavelength is essentially independent of scattering models, while the conversion of mechanical measurements, and wavelength scaling in general, are model dependent. Procedures are illustrated for several surfaces, including two from the recent HeNe BRDF round robin, and results are compared with measured data. Reasonable agreement is found except for surfaces which involve significant scattering from isolated surface defects which are poorly sampled in the profile data

  10. Data Quality Enhanced Prediction Model for Massive Plant Data

    International Nuclear Information System (INIS)

    Park, Moon-Ghu; Kang, Seong-Ki; Shin, Hajin

    2016-01-01

    This paper introduces integrated signal preconditioning and model prediction based mainly on kernel functions. The performance and benefits of the methods are demonstrated by a case study with measurement data from a power plant and transient data from its components. The developed methods will be applied as part of a platform for monitoring massive or big data, where human experts cannot detect fault behaviors because the measurement sets are too large. Recent extensive efforts to implement on-line monitoring have shown that a big surprise in modeling for predicting process variables is the extent of data quality problems in measurement data, especially for data-driven modeling. Bad training data will be learned as normal and can significantly degrade prediction performance. For this reason, the quantity and quality of measurement data in the modeling phase need special care. Bad-quality data must be removed from training sets; otherwise, the bad data are treated as normal system behavior. This paper presents an integrated supervisory structure for monitoring the performance of plants and sensors. The quality of the data-driven model is improved with a bilateral kernel filter for preprocessing the noisy data. The prediction module is also based on kernel regression, sharing the same basis functions with the noise filter. The model structure is optimized by a grouping process with a nonlinear Hoeffding correlation function.
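
    A rough sketch of the two kernel stages, assuming a 1-D bilateral filter for preconditioning and a Nadaraya-Watson kernel regression for prediction; the kernel widths and the synthetic transient are illustrative, and the grouping/Hoeffding step is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

def bilateral_filter_1d(t, y, sigma_t=2.0, sigma_y=0.5):
    """Smooth y using both time distance and value distance (edge-preserving)."""
    out = np.empty_like(y)
    for i in range(len(y)):
        w = np.exp(-0.5 * ((t - t[i]) / sigma_t) ** 2
                   - 0.5 * ((y - y[i]) / sigma_y) ** 2)
        out[i] = np.sum(w * y) / np.sum(w)
    return out

def kernel_regression(t_train, y_train, t_query, bandwidth=1.5):
    """Nadaraya-Watson prediction with a Gaussian kernel."""
    w = np.exp(-0.5 * ((t_query[:, None] - t_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

t = np.arange(100.0)
signal = np.tanh((t - 50) / 10)              # hypothetical plant transient
y = signal + rng.normal(0, 0.2, t.size)      # noisy measurements
y_clean = bilateral_filter_1d(t, y)          # stage 1: preconditioning
print(kernel_regression(t, y_clean, np.array([48.0, 52.0])))  # stage 2
```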

  11. Data Quality Enhanced Prediction Model for Massive Plant Data

    Energy Technology Data Exchange (ETDEWEB)

    Park, Moon-Ghu [Nuclear Engr. Sejong Univ., Seoul (Korea, Republic of); Kang, Seong-Ki [Monitoring and Diagnosis, Suwon (Korea, Republic of); Shin, Hajin [Saint Paul Preparatory Seoul, Seoul (Korea, Republic of)

    2016-10-15

    This paper introduces integrated signal preconditioning and model prediction based mainly on kernel functions. The performance and benefits of the methods are demonstrated by a case study with measurement data from a power plant and transient data from its components. The developed methods will be applied as part of a platform for monitoring massive or big data, where human experts cannot detect fault behaviors because the measurement sets are too large. Recent extensive efforts to implement on-line monitoring have shown that a big surprise in modeling for predicting process variables is the extent of data quality problems in measurement data, especially for data-driven modeling. Bad training data will be learned as normal and can significantly degrade prediction performance. For this reason, the quantity and quality of measurement data in the modeling phase need special care. Bad-quality data must be removed from training sets; otherwise, the bad data are treated as normal system behavior. This paper presents an integrated supervisory structure for monitoring the performance of plants and sensors. The quality of the data-driven model is improved with a bilateral kernel filter for preprocessing the noisy data. The prediction module is also based on kernel regression, sharing the same basis functions with the noise filter. The model structure is optimized by a grouping process with a nonlinear Hoeffding correlation function.

  12. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  13. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  14. Effect of length of measurement period on accuracy of predicted annual heating energy consumption of buildings

    International Nuclear Information System (INIS)

    Cho, Sung-Hwan; Kim, Won-Tae; Tae, Choon-Soeb; Zaheeruddin, M.

    2004-01-01

    This study examined temperature-dependent regression models of energy consumption as a function of the length of the measurement period. The methodology applied was to construct linear regression models of daily energy consumption from data sets spanning 1 day to 3 months and to compare the annual heating energy consumption predicted by these models with the actual annual heating energy consumption. A commercial building in Daejeon was selected, and its energy consumption was measured over a heating season. The results show that predicting annual energy consumption from a regression model built on 1 day of measurements could lead to errors of 100% or more. The prediction error decreased to 30% when 1 week of data was used to build the regression model. Likewise, the regression model based on 3 months of measured data predicted the annual energy consumption to within 6% of the measured energy consumption. These analyses show that the length of the measurement period has a significant impact on the accuracy of the predicted annual energy consumption of buildings

  15. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Science.gov (United States)

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs, and reducing it can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
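
    A sketch of the comparison using scikit-learn stand-ins (a Gaussian process, SVR, and a small neural network) on synthetic three-variable data; the Bayesian neural network, the I-star benchmark, and the study's remaining error measures are omitted, and all data are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)

# Synthetic "market impact" target from three input variables.
n = 400
X = rng.normal(size=(n, 3))
y = 0.5 * X[:, 0] + np.tanh(X[:, 1]) * X[:, 2] + rng.normal(0, 0.1, n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "GP": GaussianProcessRegressor(alpha=1e-2),
    "SVR": SVR(),
    "NN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(name, f"MAE = {mean_absolute_error(y_te, m.predict(X_te)):.3f}")
```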

  16. A comparison of predictions and measurements for the Stripa simulated drift experiment

    International Nuclear Information System (INIS)

    Hodgkinson, D.

    1991-02-01

    This paper presents a comparison of measurements and predictions for the simulated drift experiment based on groundwater flow to the D-holes at the SCV site. The comparison was carried out on behalf of the Stripa task force on fracture flow modelling, as a learning exercise for the validation exercise to be based on flow to the validation drift. The paper summarises the characterisation data and their preliminary interpretation, and reviews the fracture flow modelling predictions made by teams from AEA Harwell, Golder Associates and Lawrence Berkeley Laboratory. The predictions are compared with each other and with the D-hole inflow measurements, and this experience is used to provide detailed feedback to future experimental and modelling work. (35 refs.)

  17. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness of the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies for dealing with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model
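
    A minimal sketch of treatment 3), updating the prediction-error standard deviation jointly with a single model parameter via random-walk Metropolis rather than the Transitional MCMC used in the paper; the one-parameter frequency model and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

def model(k):
    # Hypothetical modal frequencies as a function of one stiffness parameter.
    return np.sqrt(k) * np.array([1.0, 2.9, 4.6])

data = model(4.0) + rng.normal(0, 0.1, 3)  # synthetic "measurements"

def log_post(theta):
    """Log posterior for (stiffness k, log of prediction-error sd), flat priors."""
    k, log_s = theta
    if k <= 0:
        return -np.inf
    s2 = np.exp(2 * log_s)
    resid = data - model(k)
    return -0.5 * np.sum(resid**2) / s2 - len(resid) * log_s

theta = np.array([3.0, np.log(0.2)])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.1, 0.1])  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples[5000:])            # discard burn-in
print("posterior mean k, sigma:",
      samples[:, 0].mean(), np.exp(samples[:, 1]).mean())
```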

  18. Configuration and validation of an analytical model predicting secondary neutron radiation in proton therapy using Monte Carlo simulations and experimental measurements.

    Science.gov (United States)

    Farah, J; Bonfrate, A; De Marzi, L; De Oliveira, A; Delacroix, S; Martinetti, F; Trompier, F; Clairand, I

    2015-05-01

    This study focuses on the configuration and validation of an analytical model predicting leakage neutron doses in proton therapy. Using Monte Carlo (MC) calculations, a facility-specific analytical model was built to reproduce out-of-field neutron doses while separately accounting for the contribution of intra-nuclear cascade, evaporation, epithermal and thermal neutrons. This model was first trained to reproduce in-water neutron absorbed doses and in-air neutron ambient dose equivalents, H*(10), calculated using MCNPX. Its capacity in predicting out-of-field doses at any position not involved in the training phase was also checked. The model was next expanded to enable a full 3D mapping of H*(10) inside the treatment room, tested in a clinically relevant configuration and finally consolidated with experimental measurements. Following the literature approach, the work first proved that it is possible to build a facility-specific analytical model that efficiently reproduces in-water neutron doses and in-air H*(10) values with a maximum difference less than 25%. In addition, the analytical model succeeded in predicting out-of-field neutron doses in the lateral and vertical direction. Testing the analytical model in clinical configurations proved the need to separate the contribution of internal and external neutrons. The impact of modulation width on stray neutrons was found to be easily adjustable while beam collimation remains a challenging issue. Finally, the model performance agreed with experimental measurements with satisfactory results considering measurement and simulation uncertainties. Analytical models represent a promising solution that substitutes for time-consuming MC calculations when assessing doses to healthy organs. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  19. Comparison between predicted and measured south drift closures at the WIPP using a transient creep model for salt

    International Nuclear Information System (INIS)

    Munson, D.E.; Fossum, A.F.

    1986-01-01

    The US Department of Energy is constructing and operating the Waste Isolation Pilot Plant (WIPP), a research and development facility near Carlsbad, New Mexico, to determine whether or not defense-generated high-level radioactive waste can be stored safely in bedded salt. The goal of the WIPP modeling program is to develop the capability to predict room responses from one site to another without a priori knowledge of the actual room responses. Data from one of the early WIPP excavations, called the South Drift, have already been used to form an initial evaluation of computational models for predicting room closures as a result of salt creep. In that study, a significant unresolved discrepancy existed between predicted and measured room closures. It was suggested that future studies address alternate forms of the constitutive law. In this paper, an alternate form of the creep model for salt is used that is founded upon the deformation-mechanism map for the micromechanical deformation processes. This model embodies both steady-state and transient creep. Also, quasi-static plasticity is incorporated into the complete constitutive model for salt. The conclusion is drawn that the combination of the mechanistic creep model, plasticity, and flow potential can approximate the late-time South Drift deformation. Future refinement of the plasticity fit is expected to improve the simulation further.

  20. Models for predicting fuel consumption in sagebrush-dominated ecosystems

    Science.gov (United States)

    Clinton S. Wright

    2013-01-01

    Fuel consumption predictions are necessary to accurately estimate or model fire effects, including pollutant emissions during wildland fires. Fuel and environmental measurements on a series of operational prescribed fires were used to develop empirical models for predicting fuel consumption in big sagebrush (Artemisia tridentata Nutt.) ecosystems.

  1. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive to child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Measurements and predictions for nonevaporating sprays in a quiescent environment

    Science.gov (United States)

    Solomon, A. S. P.; Shuen, J.-S.; Faeth, G. M.; Zhang, Q.-F.

    1983-01-01

    Yule et al. (1982) have conducted a study of vaporizing sprays with the aid of laser techniques. The present investigation aims to supplement the measurements performed by Yule et al. by considering the limiting case of a spray in a stagnant environment. Mean and fluctuating velocities of the continuous phase are measured by means of laser Doppler anemometry (LDA) techniques, while Fraunhofer diffraction and slide impaction methods are employed to determine drop sizes. Liquid fluxes in the spray are found by making use of an isokinetic sampling probe. The obtained data are used as a basis for the evaluation of three models of the process: a locally homogeneous flow (LHF) model, a deterministic separated flow (DSF) model, and a stochastic separated flow (SSF) model. It is found that the LHF and DSF models do not provide very satisfactory predictions for the test sprays, while the SSF model does provide reasonably good predictions of the observed structure.

  3. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Directory of Open Access Journals (Sweden)

    Saerom Park

    Full Text Available Market impact cost is the most significant portion of implicit transaction costs, and reducing it can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.

  4. Measurement Error Correction for Predicted Spatiotemporal Air Pollution Exposures.

    Science.gov (United States)

    Keller, Joshua P; Chang, Howard H; Strickland, Matthew J; Szpiro, Adam A

    2017-05-01

    Air pollution cohort studies are frequently analyzed in two stages, first modeling exposure and then using predicted exposures to estimate health effects in a second regression model. The difference between predicted and unobserved true exposures introduces a form of measurement error in the second-stage health model. Recent methods for spatial data correct for measurement error with a bootstrap and by requiring that the study design ensure spatial compatibility, that is, that monitor and subject locations are drawn from the same spatial distribution. These methods have not previously been applied to spatiotemporal exposure data. We analyzed the association between fine particulate matter (PM2.5) and birth weight in the US state of Georgia using records with estimated dates of conception during 2002-2005 (n = 403,881). We predicted trimester-specific PM2.5 exposure using a complex spatiotemporal exposure model. To improve spatial compatibility, we restricted to mothers residing in counties with a PM2.5 monitor (n = 180,440). We accounted for additional measurement error via a nonparametric bootstrap. Third-trimester PM2.5 exposure was associated with lower birth weight in the uncorrected (-2.4 g per 1 μg/m³ difference in exposure; 95% confidence interval [CI]: -3.9, -0.8) and bootstrap-corrected (-2.5 g, 95% CI: -4.2, -0.8) analyses. Results for the unrestricted analysis were attenuated (-0.66 g, 95% CI: -1.7, 0.35). This study presents a novel application of measurement error correction for spatiotemporal air pollution exposures. Our results demonstrate the importance of spatial compatibility between monitor and subject locations and provide evidence of the association between air pollution exposure and birth weight.

  5. Database and prediction model for CANDU pressure tube diameter

    Energy Technology Data Exchange (ETDEWEB)

    Jung, J.Y.; Park, J.H. [Korea Atomic Energy Research Inst., Daejeon (Korea, Republic of)

    2014-07-01

    The pressure tube (PT) diameter is basic data for evaluating the CCP (critical channel power) of a CANDU reactor. Since the CCP affects the operational margin directly, an accurate prediction of the PT diameter is important for assessing the operational margin. However, the PT diameter increases by creep owing to the effects of irradiation by neutron flux, stress, and reactor operating temperatures during the plant service period. Thus, it is necessary to collect measured PT diameter data, establish a database (DB), and develop a PT diameter prediction model. Accordingly, in this study, a DB of measured PT diameter data was established and a neural network (NN) based diameter prediction model was developed. The established DB includes not only the measured diameter data but also operating conditions such as the temperature, pressure, flux, and effective full power days. The currently developed NN based diameter prediction model considers only extrinsic variables such as the operating conditions, and will be enhanced to consider the effect of intrinsic variables such as the microstructure of the PT material. (author)

  6. Comparison of measured and predicted long term performance of grid a connected photovoltaic system

    International Nuclear Information System (INIS)

    Mondol, Jayanta Deb; Yohanis, Yigzaw G.; Norton, Brian

    2007-01-01

    Predicted performance of a grid connected photovoltaic (PV) system using TRNSYS was compared with measured data. A site-specific global-diffuse correlation model was developed and used to calculate the beam and diffuse components of global horizontal insolation. A PV module temperature equation and a correlation relating the input and output power of an inverter were developed using measured data and used in TRNSYS to simulate the PV array and inverter outputs. Different combinations of the tilted surface radiation model, global-diffuse correlation model and PV module temperature equation were used in the simulations. Statistical error analysis was performed to compare the results for each combination. The simulation accuracy was improved by using the new global-diffuse correlation and module temperature equation in the TRNSYS simulation. For an isotropic sky tilted surface radiation model, the average monthly differences between measured and predicted PV output before and after modification of the TRNSYS components were 10.2% and 3.3%, respectively, and, for an anisotropic sky model, 15.4% and 10.7%, respectively. For the inverter output, the corresponding errors were 10.4% and 3.3%, and 15.8% and 8.6%, respectively. Measured PV efficiency, overall system efficiency, inverter efficiency and performance ratio of the system were compared with the predicted results. The predicted PV performance parameters agreed more closely with the measured parameters in summer than in winter. The difference between the performances predicted using the isotropic and anisotropic sky tilted surface models is between 1% and 2%.

  7. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior in a wide range of deposition angles.

  8. On The Importance of Connecting Laboratory Measurements of Ice Crystal Growth with Model Parameterizations: Predicting Ice Particle Properties

    Science.gov (United States)

    Harrington, J. Y.

    2017-12-01

    Parameterizing the growth of ice particles in numerical models is at an interesting cross-roads. Most parameterizations developed in the past, including some that I have developed, parse model ice into numerous categories based primarily on the growth mode of the particle. Models routinely possess smaller ice, snow crystals, aggregates, graupel, and hail. The snow and ice categories in some models are further split into subcategories to account for the various shapes of ice. There has been a relatively recent shift towards a new class of microphysical models that predict the properties of ice particles instead of using multiple categories and subcategories. Particle property models predict the physical characteristics of ice, such as aspect ratio, maximum dimension, effective density, rime density, effective area, and so forth. These models are attractive in the sense that particle characteristics evolve naturally in time and space without the need for numerous (and somewhat artificial) transitions among pre-defined classes. However, particle property models often require fundamental parameters that are typically derived from laboratory measurements. For instance, the evolution of particle shape during vapor depositional growth requires knowledge of the growth efficiencies for the various axes of the crystals, which in turn depend on surface parameters that can only be determined in the laboratory. The evolution of particle shapes and density during riming, aggregation, and melting requires data on the redistribution of mass across a crystal's axes as that crystal collects water drops, ice crystals, or melts. Predicting the evolution of particle properties based on laboratory-determined parameters has a substantial influence on the evolution of some cloud systems. Radiatively-driven cirrus clouds show a broader range of competition between heterogeneous nucleation and homogeneous freezing when ice crystal properties are predicted. Even strongly convective squall

  9. Soil pH Errors Propagation from Measurements to Spatial Predictions - Cost Benefit Analysis and Risk Assessment Implications for Practitioners and Modelers

    Science.gov (United States)

    Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.

    2017-12-01

    The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of the impacts of errors and uncertainty on cost-benefit analyses and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the error size from different sources and their implications with respect to management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE of 0.79 pH units was from spatial aggregation (SSURGO vs Kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming the order of data acquisition based on the transaction distance, i.e. from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha⁻¹ lime, which translates into $111 ha⁻¹ that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on uncertainty predictions (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into a $555-1,111 investment that

  10. Comprehensive fluence model for absolute portal dose image prediction

    International Nuclear Information System (INIS)

    Chytyk, K.; McCurdy, B. M. C.

    2009-01-01

    Amorphous silicon (a-Si) electronic portal imaging devices (EPIDs) continue to be investigated as treatment verification tools, with a particular focus on intensity modulated radiation therapy (IMRT). This verification could be accomplished through a comparison of measured portal images to predicted portal dose images. A general fluence determination tailored to portal dose image prediction would be a great asset in order to model the complex modulation of IMRT. A proposed physics-based parameter fluence model was commissioned by matching predicted EPID images to corresponding measured EPID images of multileaf collimator (MLC) defined fields. The two-source fluence model was composed of a focal Gaussian and an extrafocal Gaussian-like source. Specific aspects of the MLC and secondary collimators were also modeled (e.g., jaw and MLC transmission factors, MLC rounded leaf tips, tongue and groove effect, interleaf leakage, and leaf offsets). Several unique aspects of the model were developed based on the results of detailed Monte Carlo simulations of the linear accelerator including (1) use of a non-Gaussian extrafocal fluence source function, (2) separate energy spectra used for focal and extrafocal fluence, and (3) different off-axis energy spectra softening used for focal and extrafocal fluences. The predicted energy fluence was then convolved with Monte Carlo generated, EPID-specific dose kernels to convert incident fluence to dose delivered to the EPID. Measured EPID data were obtained with an a-Si EPID for various MLC-defined fields (from 1×1 to 20×20 cm²) over a range of source-to-detector distances. These measured profiles were used to determine the fluence model parameters in a process analogous to the commissioning of a treatment planning system. The resulting model was tested on 20 clinical IMRT plans, including ten prostate and ten oropharyngeal cases. The model predicted the open-field profiles within 2%, 2 mm, while a mean of 96.6% of pixels over all
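
    A much-simplified 1-D sketch of the dose-prediction chain: a two-Gaussian source blurring an MLC-defined field, followed by convolution with an EPID dose kernel. The geometry, the Gaussian extrafocal shape (the paper uses a non-Gaussian one), and all widths and amplitudes are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import fftconvolve

x = np.linspace(-15.0, 15.0, 601)          # position (cm) at detector plane
field = (np.abs(x) <= 5.0).astype(float)   # idealized 10 cm MLC-defined field

# Two-source model: narrow focal Gaussian plus broad, weak extrafocal source.
focal = np.exp(-0.5 * (x / 0.1) ** 2)
extrafocal = 0.05 * np.exp(-0.5 * (x / 2.0) ** 2)
source = focal + extrafocal
source /= source.sum()

fluence = fftconvolve(field, source, mode="same")  # finite source blurs field edges

kernel = np.exp(-np.abs(x) / 0.5)                  # toy EPID dose kernel
kernel /= kernel.sum()
epid_dose = fftconvolve(fluence, kernel, mode="same")  # fluence -> EPID dose
print(epid_dose.max())
```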

  11. Application of prediction of equilibrium to servo-controlled calorimetry measurements

    International Nuclear Information System (INIS)

    Mayer, R.L. II

    1987-01-01

    Research was performed to develop an endpoint prediction algorithm for use with calorimeters operating in the digital servo-controlled mode. The purpose of this work was to reduce calorimetry measurement times while maintaining the high degree of precision and low bias expected from calorimetry measurements. Data from routine operation of two calorimeters were used to test predictive models at each stage of development against time savings, precision, and robustness criteria. The results of the study indicated that calorimetry measurement times can be significantly reduced using this technique. The time savings is, however, dependent on parameters in the digital servo-control algorithm and on packaging characteristics of measured items
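
    A sketch of one plausible endpoint-prediction scheme, fitting an exponential approach to equilibrium on early data and extrapolating the asymptote; the functional form and all numbers are assumptions, not the algorithm developed in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)

def approach(t, p_eq, a, tau):
    """Exponential approach to an equilibrium value p_eq."""
    return p_eq + a * np.exp(-t / tau)

t_full = np.arange(0.0, 600.0, 5.0)           # minutes
true = approach(t_full, 2.5, -1.2, 120.0)     # hypothetical bridge power signal
obs = true + rng.normal(0, 0.005, t_full.size)

early = t_full < 180                           # stop measuring early
popt, _ = curve_fit(approach, t_full[early], obs[early],
                    p0=(obs[early][-1], -1.0, 100.0))
print(f"predicted equilibrium = {popt[0]:.3f} (true value 2.5)")
```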

  12. Hydrological model parameter dimensionality is a weak measure of prediction uncertainty (discussion paper)

    NARCIS (Netherlands)

    Pande, S.; Arkesteijn, L.; Savenije, H.H.G.; Bastidas, L.A.

    2014-01-01

    This paper presents evidence that model prediction uncertainty does not necessarily rise with parameter dimensionality (the number of parameters). Here by prediction we mean future simulation of a variable of interest conditioned on certain future values of input variables. We utilize a relationship

  13. Approach to first principles model prediction of measured WIPP [Waste Isolation Pilot Plant] in situ room closure in salt

    International Nuclear Information System (INIS)

    Munson, D.E.; Fossum, A.F.; Senseny, P.E.

    1989-01-01

    The discrepancies between predicted and measured WIPP in situ Room D closures are markedly reduced through the use of a Tresca flow potential, an improved small strain constitutive model, an improved set of material parameters, and a modified stratigraphy. 17 refs., 8 figs., 1 tab

  14. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    Science.gov (United States)

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
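
    The SIMEX idea behind the proposed procedure can be sketched in its classical (non-spatial) form as below: add extra noise at several levels lambda, track the naive slope, and extrapolate back to lambda = -1. The spatial version in the paper instead simulates error with the exposure model's spatial structure; the data here are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)

n = 2000
x_true = rng.normal(size=n)
x_obs = x_true + rng.normal(0, 0.5, n)       # measurement error, variance 0.25
y = 1.0 * x_true + rng.normal(0, 1.0, n)     # true slope = 1

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    b = []
    for _ in range(50):                       # average over simulations
        x_sim = x_obs + rng.normal(0, np.sqrt(lam) * 0.5, n)
        b.append(LinearRegression().fit(x_sim[:, None], y).coef_[0])
    slopes.append(np.mean(b))

quad = np.polyfit(lambdas, slopes, 2)         # quadratic extrapolant
print("SIMEX-corrected slope:", np.polyval(quad, -1.0))
```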

  15. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    Science.gov (United States)

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9
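
    Steps 4) through 6) of the method can be illustrated with a toy sliding-window feature extractor: cut a vital-sign stream into fixed windows and compute simple time-series features (level, trend, variability) that become candidate predictors. The window length, the feature set, and the heart-rate trace are illustrative choices, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(10)

def window_features(signal, window=60):
    """One feature row (mean, trend, std) per non-overlapping window."""
    feats = []
    for start in range(0, len(signal) - window + 1, window):
        seg = signal[start:start + window]
        slope = np.polyfit(np.arange(window), seg, 1)[0]  # linear trend
        feats.append((seg.mean(), slope, seg.std()))
    return np.array(feats)

heart_rate = 120 + np.cumsum(rng.normal(0, 0.3, 600))  # hypothetical HR trace
print(window_features(heart_rate)[:3])
```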

  16. Hydrological model parameter dimensionality is a weak measure of prediction uncertainty (discussion paper)

    NARCIS (Netherlands)

    Pande, S.; Arkesteijn, L.; Savenije, H.H.G.; Bastidas, L.A.

    2015-01-01

    This paper shows that instability of hydrological system representation in response to different pieces of information and associated prediction uncertainty is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is

  17. Prediction of Landing Gear Noise Reduction and Comparison to Measurements

    Science.gov (United States)

    Lopes, Leonard V.

    2010-01-01

    Noise continues to be an ongoing problem for existing aircraft in flight and is projected to be a concern for next generation designs. During landing, when the engines are operating at reduced power, the noise from the airframe, of which landing gear noise is an important part, is equal to the engine noise. There are several methods of predicting landing gear noise, but none have been applied to predict the change in noise due to a change in landing gear design. The current effort uses the Landing Gear Model and Acoustic Prediction (LGMAP) code, developed at The Pennsylvania State University, to predict the noise from landing gear. These predictions include the influence of noise reduction concepts on the landing gear noise. LGMAP is compared to wind tunnel experiments of a 6.3%-scale Boeing 777 main gear performed in the Quiet Flow Facility (QFF) at NASA Langley. The geometries tested in the QFF include the landing gear with and without a toboggan fairing and the door. It is shown that LGMAP is able to predict the noise directivity and spectra from the model-scale test for the baseline configuration as accurately as current gear prediction methods. However, LGMAP is also able to predict the difference in noise caused by the toboggan fairing and by removing the landing gear door. LGMAP is also compared to far-field ground-based flush-mounted microphone measurements from the 2005 Quiet Technology Demonstrator 2 (QTD 2) flight test. These comparisons include a Boeing 777-300ER with and without a toboggan fairing and demonstrate that LGMAP can be applied to full-scale flyover measurements. LGMAP predictions of the influence of nose gear noise on the main gear measurements are also shown.

  18. Radionuclides in fruit systems: Model prediction-experimental data intercomparison study

    International Nuclear Information System (INIS)

    Ould-Dada, Z.; Carini, F.; Eged, K.; Kis, Z.; Linkov, I.; Mitchell, N.G.; Mourlon, C.; Robles, B.; Sweeck, L.; Venter, A.

    2006-01-01

    This paper presents results from an international exercise undertaken to test model predictions against an independent data set for the transfer of radioactivity to fruit. Six models with various structures and complexity participated in this exercise. Predictions from these models were compared against independent experimental measurements on the transfer of 134Cs and 85Sr via leaf-to-fruit and soil-to-fruit in strawberry plants after an acute release. Foliar contamination was carried out through wet deposition on the plant at two different growing stages, anthesis and ripening, while soil contamination was effected at anthesis only. In the case of foliar contamination, predicted values are within the same order of magnitude as the measured values for both radionuclides, while in the case of soil contamination models tend to under-predict by up to three orders of magnitude for 134Cs, while differences for 85Sr are lower. Performance of models against experimental data is discussed together with the lessons learned from this exercise.

  19. Measures and limits of models of fixation selection.

    Directory of Open Access Journals (Sweden)

    Niklas Wilming

    Full Text Available Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound of these measures, based on image-independent properties of fixation data and between-subject consistency, respectively. Based on these bounds, it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the required information that allows a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
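
    A small sketch of the two measures recommended above, assuming model saliency values at fixated and control locations (for the AUC) and a common spatial grid for the empirical and predicted fixation distributions (for the KL-divergence); the add-eps smoothing below merely stands in for the small-sample entropy correction the authors derive.

        import numpy as np

        def auc(sal_fix, sal_ctrl):
            """Area under the ROC curve, computed pairwise: the probability
            that the model value at a fixated point exceeds that at a
            control point (ties count one half)."""
            a = np.asarray(sal_fix)[:, None]
            b = np.asarray(sal_ctrl)[None, :]
            return float((a > b).mean() + 0.5 * (a == b).mean())

        def kl_divergence(fix_counts, model_prob, eps=1e-9):
            """KL-divergence between the empirical fixation distribution
            and the model prediction over the same grid cells."""
            p = fix_counts / fix_counts.sum()
            q = model_prob / model_prob.sum()
            m = p > 0
            return float(np.sum(p[m] * np.log(p[m] / (q[m] + eps))))

        rng = np.random.default_rng(2)
        print(auc(rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 200)))
        print(kl_divergence(rng.integers(0, 5, 100).astype(float),
                            rng.random(100)))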

  20. Predictive ability of broiler production models | Ogundu | Animal ...

    African Journals Online (AJOL)

    The weekly body weight measurements of a growing strain of Ross broiler were used to compare the ability of three mathematical models (the multiple linear, quadratic and exponential) to predict 8-week body weight from early body measurements at weeks I, II, III, IV, V, VI and VII. The results suggest that the three models ...
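
    A rough sketch of the comparison described in the record, with invented weekly weights: each candidate curve (linear, quadratic, exponential) is fitted to weeks 1-7 and used to predict the week-8 body weight.

        import numpy as np
        from scipy.optimize import curve_fit

        weeks = np.arange(1, 8)                                   # weeks I..VII
        weight = np.array([150, 380, 700, 1100, 1550, 2050, 2550],
                          dtype=float)                            # grams, hypothetical

        lin = np.polyfit(weeks, weight, 1)                        # linear
        quad = np.polyfit(weeks, weight, 2)                       # quadratic
        (a, b), _ = curve_fit(lambda t, a, b: a * np.exp(b * t),  # exponential
                              weeks, weight, p0=(100.0, 0.4))

        for name, pred in [("linear", np.polyval(lin, 8)),
                           ("quadratic", np.polyval(quad, 8)),
                           ("exponential", a * np.exp(b * 8))]:
            print(f"{name}: predicted 8-week weight {pred:.0f} g")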

  1. Conceptual Software Reliability Prediction Models for Nuclear Power Plant Safety Systems

    International Nuclear Information System (INIS)

    Johnson, G.; Lawrence, D.; Yu, H.

    2000-01-01

    The objective of this project is to develop a method to predict the potential reliability of software to be used in a digital system instrumentation and control system. The reliability prediction is to make use of existing measures of software reliability such as those described in IEEE Std 982 and 982.2. This prediction must be of sufficient accuracy to provide a value for uncertainty that could be used in a nuclear power plant probabilistic risk assessment (PRA). For the purposes of the project, reliability was defined to be the probability that the digital system will successfully perform its intended safety function (for the distribution of conditions under which it is expected to respond) upon demand with no unintended functions that might affect system safety. The ultimate objective is to use the identified measures to develop a method for predicting the potential quantitative reliability of a digital system. The reliability prediction models proposed in this report are conceptual in nature. That is, possible prediction techniques are proposed and trial models are built, but in order to become a useful tool for predicting reliability, the models must be tested, modified according to the results, and validated. Using methods outlined by this project, models could be constructed to develop reliability estimates for elements of software systems. This would require careful review and refinement of the models, development of model parameters from actual experience data or expert elicitation, and careful validation. By combining these reliability estimates (generated from the validated models for the constituent parts) in structural software models, the reliability of the software system could then be predicted. Modeling digital system reliability will also require that methods be developed for combining reliability estimates for hardware and software. System structural models must also be developed in order to predict system reliability based upon the reliability

  2. Predictions and measurements of residual stress in repair welds in plates

    Energy Technology Data Exchange (ETDEWEB)

    Brown, T.B. [Mitsui Babcock Energy Limited, Technology and Engineering, Porterfield Road, Renfrew, PA4 8DJ, Scotland (United Kingdom)]. E-mail: bbrown@mitsuibabcock.com; Dauda, T.A. [Mitsui Babcock Energy Limited, Technology and Engineering, Porterfield Road, Renfrew, PA4 8DJ, Scotland (United Kingdom); Truman, C.E. [Department of Mechanical Engineering, University of Bristol, Bristol BS8 1TR, England (United Kingdom); Smith, D.J. [Department of Mechanical Engineering, University of Bristol, Bristol BS8 1TR (United Kingdom); Memhard, D. [Fraunhofer-Institut fuer Werkstoffmechanik, Freiburg (Germany); Pfeiffer, W. [Fraunhofer-Institut fuer Werkstoffmechanik, Freiburg (Germany)

    2006-11-15

    This paper presents the work, from the European Union FP-5 project ELIXIR, on a series of rectangular repair welds in P275 and S690 steels to validate the numerical modelling techniques used in the determination of the residual stresses generated during the repair process. The plates were 1,000 mm by 800 mm with thicknesses of 50 and 100 mm. The repair welds were 50%, 75% and 100% through the plate thickness. The repair welds were modelled using the finite element method to make predictions of the as-welded residual stress distributions. These predictions were compared with surface-strain measurements made on the parent plates during welding and found to be in good agreement. Through-thickness residual stress measurements were obtained from the test plates through, and local to, the weld repairs using the deep hole drilling technique. Comparisons between the measurements and the finite element predictions generally showed good agreement, thus providing confidence in the method.

  3. Predictions and measurements of residual stress in repair welds in plates

    International Nuclear Information System (INIS)

    Brown, T.B.; Dauda, T.A.; Truman, C.E.; Smith, D.J.; Memhard, D.; Pfeiffer, W.

    2006-01-01

    This paper presents the work, from the European Union FP-5 project ELIXIR, on a series of rectangular repair welds in P275 and S690 steels to validate the numerical modelling techniques used in the determination of the residual stresses generated during the repair process. The plates were 1,000 mm by 800 mm with thicknesses of 50 and 100 mm. The repair welds were 50%, 75% and 100% through the plate thickness. The repair welds were modelled using the finite element method to make predictions of the as-welded residual stress distributions. These predictions were compared with surface-strain measurements made on the parent plates during welding and found to be in good agreement. Through-thickness residual stress measurements were obtained from the test plates through, and local to, the weld repairs using the deep hole drilling technique. Comparisons between the measurements and the finite element predictions generally showed good agreement, thus providing confidence in the method

  4. Comparison of several measure-correlate-predict models using support vector regression techniques to estimate wind power densities. A case study

    International Nuclear Information System (INIS)

    Díaz, Santiago; Carta, José A.; Matías, José M.

    2017-01-01

    Highlights: • Eight measure-correlate-predict (MCP) models used to estimate the wind power densities (WPDs) at a target site are compared. • Support vector regressions are used as the main prediction techniques in the proposed MCPs. • The most precise MCP uses two sub-models which predict wind speed and air density in an unlinked manner. • The most precise model allows to construct a bivariable (wind speed and air density) WPD probability density function. • MCP models trained to minimise wind speed prediction error do not minimise WPD prediction error. - Abstract: The long-term annual mean wind power density (WPD) is an important indicator of wind as a power source which is usually included in regional wind resource maps as useful prior information to identify potentially attractive sites for the installation of wind projects. In this paper, a comparison is made of eight proposed Measure-Correlate-Predict (MCP) models to estimate the WPDs at a target site. Seven of these models use the Support Vector Regression (SVR) and the eighth the Multiple Linear Regression (MLR) technique, which serves as a basis to compare the performance of the other models. In addition, a wrapper technique with 10-fold cross-validation has been used to select the optimal set of input features for the SVR and MLR models. Some of the eight models were trained to directly estimate the mean hourly WPDs at a target site. Others, however, were firstly trained to estimate the parameters on which the WPD depends (i.e. wind speed and air density) and then, using these parameters, the target site mean hourly WPDs. The explanatory features considered are different combinations of the mean hourly wind speeds, wind directions and air densities recorded in 2014 at ten weather stations in the Canary Archipelago (Spain). The conclusions that can be drawn from the study undertaken include the argument that the most accurate method for the long-term estimation of WPDs requires the execution of a
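
    A minimal sketch of the "unlinked sub-models" MCP variant highlighted above, using scikit-learn's SVR: one regression predicts target-site wind speed and a second predicts air density from reference-station features, and the two are combined into WPD = 0.5·ρ·v³. The data are synthetic, and for brevity the same sample stands in for both the concurrent training period and the long-term period.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(3)
        n = 2000
        ref = rng.weibull(2.0, size=(n, 3)) * 8.0        # reference wind speeds (m/s)
        v_target = 0.8 * ref[:, 0] + 0.1 * ref[:, 1] + rng.normal(0.0, 0.6, n)
        rho_target = 1.18 + 0.02 * rng.normal(size=n)    # air density (kg/m3)

        # Two unlinked sub-models: wind speed and air density predicted separately.
        svr_v = SVR(C=10.0, epsilon=0.1).fit(ref, v_target)
        svr_rho = SVR(C=10.0, epsilon=0.01).fit(ref, rho_target)

        v_hat = svr_v.predict(ref)
        rho_hat = svr_rho.predict(ref)
        wpd_hat = 0.5 * rho_hat * v_hat ** 3             # hourly WPD, W/m2
        print("estimated long-term mean WPD:", wpd_hat.mean())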

  5. Application of prediction of equilibrium to servo-controlled calorimetry measurements

    International Nuclear Information System (INIS)

    Mayer, R.L. II.

    1987-01-01

    Research was performed to develop an endpoint prediction algorithm for use with calorimeters operating in the digital servo-controlled mode. The purpose of this work was to reduce calorimetry measurement times while maintaining the high degree of precision and low bias expected from calorimetry measurements. Data from routine operation of two calorimeters were used to test predictive models at each stage of development against time savings, precision, and robustness criteria. The results of the study indicated that calorimetry measurement times can be significantly reduced using this technique. The time savings is, however, dependent on parameters in the digital servo-control algorithm and on packaging characteristics of measured items. 7 refs., 4 figs., 1 tab
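
    The record does not give the algorithm itself, so the following sketch shows one plausible endpoint-prediction scheme under stated assumptions: an exponential approach-to-equilibrium curve is fitted to the early portion of a servo-controlled power trace, and the asymptote is read off as the predicted endpoint.

        import numpy as np
        from scipy.optimize import curve_fit

        def approach(t, p_inf, a, tau):
            """Exponential approach toward the equilibrium reading p_inf."""
            return p_inf + a * np.exp(-t / tau)

        rng = np.random.default_rng(4)
        t = np.arange(0.0, 120.0, 2.0)                 # minutes into the run
        obs = approach(t, 5.00, -1.5, 35.0) + rng.normal(scale=0.01, size=t.size)

        # Fit on the early, pre-equilibrium part of the run only.
        early = t < 60.0
        (p_inf, a, tau), _ = curve_fit(approach, t[early], obs[early],
                                       p0=(obs[early][-1], -1.0, 20.0))
        print(f"predicted endpoint after 60 min of data: {p_inf:.3f} W "
              f"(equilibrium value used to simulate the trace: 5.000 W)")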

  6. Moving Towards Dynamic Ocean Management: How Well Do Modeled Ocean Products Predict Species Distributions?

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2016-02-01

    Full Text Available Species distribution models are now widely used in conservation and management to predict suitable habitat for protected marine species. The primary sources of dynamic habitat data have been in situ and remotely sensed oceanic variables (both are considered “measured data”), but now ocean models can provide historical estimates and forecast predictions of relevant habitat variables such as temperature, salinity, and mixed layer depth. To assess the performance of modeled ocean data in species distribution models, we present a case study for cetaceans that compares models based on output from a data assimilative implementation of the Regional Ocean Modeling System (ROMS) to those based on measured data. Specifically, we used seven years of cetacean line-transect survey data collected between 1991 and 2009 to develop predictive habitat-based models of cetacean density for 11 species in the California Current Ecosystem. Two different generalized additive models were compared: one built with a full suite of ROMS output and another built with a full suite of measured data. Model performance was assessed using the percentage of explained deviance, root mean squared error (RMSE), observed to predicted density ratios, and visual inspection of predicted and observed distributions. Predicted distribution patterns were similar for models using ROMS output and measured data, and showed good concordance between observed sightings and model predictions. Quantitative measures of predictive ability were also similar between model types, and RMSE values were almost identical. The overall demonstrated success of the ROMS-based models opens new opportunities for dynamic species management and biodiversity monitoring because ROMS output is available in near real time and can be forecast.

  7. Explained variation and predictive accuracy in general parametric statistical models: the role of model misspecification

    DEFF Research Database (Denmark)

    Rosthøj, Susanne; Keiding, Niels

    2004-01-01

    When studying a regression model, measures of explained variation are used to assess the degree to which the covariates determine the outcome of interest. Measures of predictive accuracy are used to assess the accuracy of the predictions based on the covariates and the regression model. We give a detailed and general introduction to the two measures and the estimation procedures. The framework we set up allows for a study of the effect of misspecification on the quantities estimated. We also introduce a generalization to survival analysis.

  8. Comparative analysis of modified PMV models and SET models to predict human thermal sensation in naturally ventilated buildings

    DEFF Research Database (Denmark)

    Gao, Jie; Wang, Yi; Wargocki, Pawel

    2015-01-01

    In this paper, a comparative analysis was performed on the human thermal sensation estimated by modified predicted mean vote (PMV) models and modified standard effective temperature (SET) models in naturally ventilated buildings; the data were collected in a field study. These prediction models were ... /s, the expectancy factors for the extended PMV model and the extended SET model were from 0.770 to 0.974 and from 1.330 to 1.363, and the adaptive coefficients for the adaptive PMV model and the adaptive SET model were from 0.029 to 0.167 and from −0.213 to −0.195. In addition, the difference in thermal sensation ... between the measured and predicted values using the modified PMV models exceeded 25%, while the difference between the measured and predicted thermal sensation using the modified SET models was less than approximately 25%. It is concluded that the modified SET models can predict human ...
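
    For orientation, the extended PMV model named above rescales the conventional PMV by an expectancy factor e; a minimal sketch with a hypothetical conventional PMV and the factor range reported in the record:

        def extended_pmv(pmv_conventional, expectancy):
            """Extension for naturally ventilated buildings: scale the
            conventional PMV by an expectancy factor e (0 < e <= 1)."""
            return expectancy * pmv_conventional

        pmv = 1.8                  # hypothetical conventional PMV for a surveyed room
        for e in (0.770, 0.974):   # range reported for the extended PMV model
            print(f"e = {e}: extended PMV = {extended_pmv(pmv, e):+.2f}")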

  9. Utilizing the non-bridge oxygen model to predict the glass viscosity

    International Nuclear Information System (INIS)

    Choi, Kwansik; Sheng, Jiawei; Maeng, Sung Jun; Song, Myung Jae

    1998-01-01

    Viscosity is the most important process property of waste glass, but measuring it is difficult and costly. The non-bridging oxygen (NBO) model, which relates glass composition to viscosity, had been developed for high-level waste at the Savannah River Site (SRS). This research utilized the NBO model to predict the viscosity of KEPRI's 55 glasses. A linear relationship was found between the measured and predicted viscosities, so the NBO model could be used to predict glass viscosity in glass formulation development. However, the precision of the predicted viscosity is unsatisfactory because the composition ranges of the SRS and KEPRI glasses are very different. Modifying the NBO calculation to adjust the treatment of alkaline earth elements and TiO2 could not markedly improve the precision of the predicted values.
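
    A sketch of the NBO approach under simplifying assumptions: a non-bridging-oxygen index is computed from the molar composition (the full SRS formulation involves more oxides than shown) and the logarithm of viscosity is regressed linearly on it. Compositions and viscosities are invented for illustration.

        import numpy as np

        def nbo_index(mol):
            """Simplified non-bridging-oxygen index from mole fractions."""
            alkali = mol.get("Na2O", 0) + mol.get("K2O", 0) + mol.get("Li2O", 0)
            alk_earth = mol.get("CaO", 0) + mol.get("MgO", 0)
            return 2.0 * (alkali + alk_earth - mol.get("Al2O3", 0)) / mol["SiO2"]

        # Hypothetical glasses with measured melt viscosities (Pa.s).
        glasses = [({"SiO2": 0.55, "Na2O": 0.12, "CaO": 0.05, "Al2O3": 0.04}, 6.1),
                   ({"SiO2": 0.50, "Na2O": 0.15, "CaO": 0.06, "Al2O3": 0.03}, 3.9),
                   ({"SiO2": 0.60, "Na2O": 0.10, "CaO": 0.04, "Al2O3": 0.05}, 9.5)]

        x = np.array([nbo_index(m) for m, _ in glasses])
        y = np.log([visc for _, visc in glasses])
        slope, intercept = np.polyfit(x, y, 1)   # log(viscosity) linear in NBO
        print("predicted viscosity at NBO = 0.45:",
              round(float(np.exp(intercept + slope * 0.45)), 2), "Pa.s")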

  10. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research has been carried out by academics and practitioners on models for bankruptcy prediction and credit risk management. In spite of numerous studies on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend toward machine learning models (support vector machines, bagging, boosting, and random forest) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models will be remodelled according to new trends by calculating the influence of the elimination of selected variables on their overall prediction ability.

  11. DISCERNING EXOPLANET MIGRATION MODELS USING SPIN-ORBIT MEASUREMENTS

    International Nuclear Information System (INIS)

    Morton, Timothy D.; Johnson, John Asher

    2011-01-01

    We investigate the current sample of exoplanet spin-orbit measurements to determine whether a dominant planet migration channel can be identified, and at what confidence. We use the predictions of Kozai migration plus tidal friction and planet-planet scattering as our misalignment models, and we allow for a fraction of intrinsically aligned systems, explainable by disk migration. Bayesian model comparison demonstrates that the current sample of 32 spin-orbit measurements strongly favors a two-mode migration scenario combining planet-planet scattering and disk migration over a single-mode Kozai migration scenario. Our analysis indicates that between 34% and 76% of close-in planets (95% confidence) migrated via planet-planet scattering. Separately analyzing the subsample of 12 stars with Teff > 6250 K (which Winn et al. predict to be the only type of stars to maintain their primordial misalignments), we find that the data favor a single-mode scattering model over Kozai with 85% confidence. We also assess the number of additional hot star spin-orbit measurements that will likely be necessary to provide a more confident model selection, finding that an additional 20-30 measurements have a >50% chance of resulting in a 95% confident model selection, if the current model selection is correct. While we test only the predictions of particular Kozai and scattering migration models in this work, our methods may be used to test the predictions of any other spin-orbit misaligning mechanism.

  12. Solar energy prediction and verification using operational model forecasts and ground-based solar measurements

    International Nuclear Information System (INIS)

    Kosmopoulos, P.G.; Kazadzis, S.; Lagouvardos, K.; Kotroni, V.; Bais, A.

    2015-01-01

    The present study focuses on predictions of solar energy and the verification of these predictions, using ground-based solar measurements from the Hellenic Network for Solar Energy and the National Observatory of Athens network, as well as solar radiation operational forecasts provided by the MM5 mesoscale model. The evaluation was carried out independently for the different networks, for two forecast horizons (1 and 2 days ahead), for the seasons of the year, for varying solar elevation, for the indicative energy potential of the area, and for four classes of cloud cover based on the calculated clearness index (k_t): CS (clear sky), SC (scattered clouds), BC (broken clouds) and OC (overcast). The seasonal dependence presented relative root mean square error (rRMSE) values ranging from 15% (summer) to 60% (winter), while the solar elevation dependence revealed a high effectiveness and reliability near local noon (rRMSE ∼30%). An increase of the errors with cloudiness was also observed. For CS with mean GHI (global horizontal irradiance) ∼650 W/m² the errors are 8%, for SC 20%, and for BC and OC the errors were greater (>40%) but correspond to much lower radiation levels (<120 W/m²) and consequently lower energy potential impact. The total energy potential for each ground station ranges from 1.5 to 1.9 MWh/m², while the mean monthly forecast error was found to be consistently below 10%. - Highlights: • Long term measurements at different atmospheric cases are needed for energy forecasting model evaluations. • The total energy potential at the Greek sites presented ranges from 1.5 to 1.9 MWh/m². • Mean monthly energy forecast errors are within 10% for all cases analyzed. • Cloud presence results in an additional forecast error that varies with the cloud cover.
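
    A minimal sketch of the verification metric used above: relative RMSE of forecast GHI against ground measurements, stratified by a cloud-cover class derived from the clearness index k_t. The thresholds and data are illustrative assumptions, not the study's classification.

        import numpy as np

        def rrmse(measured, forecast):
            """Relative RMSE (%) of the forecast against ground measurements."""
            err = np.asarray(forecast) - np.asarray(measured)
            return 100.0 * np.sqrt(np.mean(err ** 2)) / np.mean(measured)

        def sky_class(kt):
            """Cloud-cover class from clearness index k_t (illustrative cuts)."""
            return np.select([kt >= 0.65, kt >= 0.50, kt >= 0.30],
                             ["CS", "SC", "BC"], default="OC")

        rng = np.random.default_rng(5)
        ghi = rng.uniform(50.0, 900.0, 1000)             # measured GHI, W/m2
        forecast = ghi * rng.normal(1.0, 0.2, ghi.size)  # model forecast with error
        kt = rng.uniform(0.1, 0.8, ghi.size)

        for c in ("CS", "SC", "BC", "OC"):
            m = sky_class(kt) == c
            print(c, f"rRMSE = {rrmse(ghi[m], forecast[m]):.1f}%")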

  13. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.

  14. Measuring and Predicting Tag Importance for Image Retrieval.

    Science.gov (United States)

    Li, Shangwen; Purushotham, Sanjay; Chen, Chen; Ren, Yuzhuo; Kuo, C-C Jay

    2017-12-01

    Textual data such as tags, sentence descriptions are combined with visual cues to reduce the semantic gap for image retrieval applications in today's Multimodal Image Retrieval (MIR) systems. However, all tags are treated as equally important in these systems, which may result in misalignment between visual and textual modalities during MIR training. This will further lead to degenerated retrieval performance at query time. To address this issue, we investigate the problem of tag importance prediction, where the goal is to automatically predict the tag importance and use it in image retrieval. To achieve this, we first propose a method to measure the relative importance of object and scene tags from image sentence descriptions. Using this as the ground truth, we present a tag importance prediction model to jointly exploit visual, semantic and context cues. The Structural Support Vector Machine (SSVM) formulation is adopted to ensure efficient training of the prediction model. Then, the Canonical Correlation Analysis (CCA) is employed to learn the relation between the image visual feature and tag importance to obtain robust retrieval performance. Experimental results on three real-world datasets show a significant performance improvement of the proposed MIR with Tag Importance Prediction (MIR/TIP) system over other MIR systems.

  15. A heat transport benchmark problem for predicting the impact of measurements on experimental facility design

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel

    2016-01-01

    Highlights: • Predictive Modeling of Coupled Multi-Physics Systems (PM_CMPS) methodology is used. • Impact of measurements for reducing predicted uncertainties is highlighted. • Presented thermal-hydraulics benchmark illustrates generally applicable concepts. - Abstract: This work presents the application of the “Predictive Modeling of Coupled Multi-Physics Systems” (PM_CMPS) methodology conceived by Cacuci (2014) to a “test-section benchmark” problem in order to quantify the impact of measurements for reducing the uncertainties in the conceptual design of a proposed experimental facility aimed at investigating the thermal-hydraulics characteristics expected in the conceptual design of the G4M reactor (GEN4ENERGY, 2012). This “test-section benchmark” simulates the conditions experienced by the hottest rod within the conceptual design of the facility's test section, modeling the steady-state conduction in a rod heated internally by a cosine-like heat source, as typically encountered in nuclear reactors, and cooled by forced convection to a surrounding coolant flowing along the rod. The PM_CMPS methodology constructs a prior distribution using all of the available computational and experimental information, by relying on the maximum entropy principle to maximize the impact of all available information and minimize the impact of ignorance. The PM_CMPS methodology then constructs the posterior distribution using Bayes’ theorem, and subsequently evaluates it via saddle-point methods to obtain explicit formulas for the predicted optimal temperature distributions and predicted optimal values for the thermal-hydraulics model parameters that characterize the test-section benchmark. In addition, the PM_CMPS methodology also yields reduced uncertainties for both the model parameters and responses. As a general rule, it is important to measure a quantity consistently with, and more accurately than, the information extant prior to the measurement. For

  16. Predicting birth weight with conditionally linear transformation models.

    Science.gov (United States)

    Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten

    2016-12-01

    Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.

  17. Daily river flow prediction based on Two-Phase Constructive Fuzzy Systems Modeling: A case of hydrological - meteorological measurements asymmetry

    Science.gov (United States)

    Bou-Fakhreddine, Bassam; Mougharbel, Imad; Faye, Alain; Abou Chakra, Sara; Pollet, Yann

    2018-03-01

    Accurate daily river flow forecasts are essential in many applications of water resources such as hydropower operation, agricultural planning and flood control. This paper presents a forecasting approach to deal with a newly addressed situation where hydrological data exist for a period longer than that of meteorological data (measurements asymmetry). In fact, one of the potential solutions to resolve the measurements asymmetry issue is data re-sampling. It is a matter of either considering only the hydrological data or the balanced part of the hydro-meteorological data set during the forecasting process. However, the main disadvantage is that we may lose potentially relevant information from the left-out data. In this research, the key output is a Two-Phase Constructive Fuzzy inference hybrid model that is implemented over the non-re-sampled data. The introduced modeling approach must be capable of exploiting the available data efficiently, with higher prediction efficiency relative to a Constructive Fuzzy model trained over a re-sampled data set. The study was applied to the Litani River in the Bekaa Valley, Lebanon, using 4 years of daily rainfall and 24 years of daily river flow measurements. A Constructive Fuzzy System Model (C-FSM) and a Two-Phase Constructive Fuzzy System Model (TPC-FSM) are trained. Upon validation, the second model showed competitive performance and accuracy, with the ability to preserve a higher day-to-day variability for 1, 3 and 6 days ahead. In fact, for the longest lead period, the C-FSM and TPC-FSM were able to explain 84.6% and 86.5%, respectively, of the actual river flow variation. Overall, the results indicate that the TPC-FSM model provides a better tool to capture extreme flows in the process of streamflow prediction.

  18. A multivariate model for predicting segmental body composition.

    Science.gov (United States)

    Tian, Simiao; Mioche, Laurence; Denis, Jean-Baptiste; Morio, Béatrice

    2013-12-01

    The aims of the present study were to propose a multivariate model for simultaneously predicting body, trunk and appendicular fat and lean masses from easily measured variables and to compare its predictive capacity with that of the available univariate models that predict body fat percentage (BF%). The dual-energy X-ray absorptiometry (DXA) dataset (52% men and 48% women) with White, Black and Hispanic ethnicities (1999-2004, National Health and Nutrition Examination Survey) was randomly divided into three sub-datasets: a training dataset (TRD), a test dataset (TED) and a validation dataset (VAD), comprising 3835, 1917 and 1917 subjects, respectively. For each sex, several multivariate prediction models were fitted from the TRD using age, weight, height and possibly waist circumference. The most accurate model was selected from the TED and then applied to the VAD and a French DXA dataset (French DB) (526 men and 529 women) to assess the prediction accuracy in comparison with that of five published univariate models, for which adjusted formulas were re-estimated using the TRD. Waist circumference was found to improve the prediction accuracy, especially in men. For BF%, the standard error of prediction (SEP) values were 3.26 (3.75)% for men and 3.47 (3.95)% for women in the VAD (French DB), as good as those of the adjusted univariate models. Moreover, the SEP values for the prediction of body and appendicular lean masses ranged from 1.39 to 2.75 kg for both sexes. The prediction accuracy was best for age < 65 years, BMI < 30 kg/m² and the Hispanic ethnicity. The application of our multivariate model to large populations could be useful to address various public health issues.
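
    A compact sketch of the multivariate idea: a single least-squares fit predicts several body-composition outcomes simultaneously from the same easily measured predictors, and the standard error of prediction (SEP) is computed per outcome. The data and coefficients are synthetic placeholders, not the NHANES-based model.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 1000
        # Predictors: age (y), weight (kg), height (cm), waist circumference (cm).
        X = np.column_stack([rng.uniform(20, 80, n), rng.normal(75, 12, n),
                             rng.normal(170, 9, n), rng.normal(90, 11, n)])
        # Outcomes: BF%, trunk lean mass (kg), appendicular lean mass (kg).
        B_true = np.array([[0.10, 0.02, -0.01], [0.30, 0.25, 0.18],
                           [-0.15, 0.20, 0.12], [0.25, -0.05, -0.03]])
        Y = X @ B_true + rng.normal(0.0, 2.0, (n, 3))

        Xd = np.column_stack([np.ones(n), X])
        B_hat, *_ = np.linalg.lstsq(Xd, Y, rcond=None)  # one fit, three outcomes
        resid = Y - Xd @ B_hat
        sep = np.sqrt((resid ** 2).mean(axis=0))        # SEP per outcome
        print("SEP (BF%, trunk lean, appendicular lean):", np.round(sep, 2))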

  19. Nonlinear Growth Models as Measurement Models: A Second-Order Growth Curve Model for Measuring Potential.

    Science.gov (United States)

    McNeish, Daniel; Dumas, Denis

    2017-01-01

    Recent methodological work has highlighted the promise of nonlinear growth models for addressing substantive questions in the behavioral sciences. In this article, we outline a second-order nonlinear growth model in order to measure a critical notion in development and education: potential. Here, potential is conceptualized as having three components (ability, capacity, and availability), where ability is the amount of skill a student is estimated to have at a given timepoint, capacity is the maximum amount of ability a student is predicted to be able to develop asymptotically, and availability is the difference between capacity and ability at any particular timepoint. We argue that single timepoint measures are typically insufficient for discerning information about potential, and we therefore describe a general framework that incorporates a growth model into the measurement model to capture these three components. Then, we provide an illustrative example using the public-use Early Childhood Longitudinal Study-Kindergarten data set using a Michaelis-Menten growth function (reparameterized from its common application in biochemistry) to demonstrate our proposed model as applied to measuring potential within an educational context. The advantage of this approach compared to currently utilized methods is discussed, as are future directions and limitations.
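
    The three components can be made concrete with the Michaelis-Menten function mentioned above; a small sketch with hypothetical parameter values:

        def michaelis_menten(t, capacity, half_time):
            """Ability at time t: rises from 0 toward the asymptote
            `capacity`; `half_time` is when half of capacity is reached."""
            return capacity * t / (half_time + t)

        capacity, half_time = 100.0, 3.0   # hypothetical student parameters
        for t in (1.0, 3.0, 9.0):
            ability = michaelis_menten(t, capacity, half_time)
            availability = capacity - ability
            print(f"t = {t}: ability = {ability:.1f}, "
                  f"availability = {availability:.1f}")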

  20. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A; Giebel, G; Landberg, L [Risoe National Lab., Roskilde (Denmark); Madsen, H; Nielsen, H A [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better ability to schedule fossil-fuelled power plants and a better position on electricity spot markets. In this paper, prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data is available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: Extended Kalman Filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
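
    A minimal sketch of an adaptive MOS correction of the kind compared above, using recursive least squares with a forgetting factor (one of the three estimation techniques named); the data and parameter choices are illustrative.

        import numpy as np

        def rls(X, y, lam=0.995, delta=1000.0):
            """Recursive least squares with forgetting factor lam: regress
            measured power y on NWP-based features X, adapting over time."""
            theta = np.zeros(X.shape[1])
            P = delta * np.eye(X.shape[1])
            for xi, yi in zip(X, y):
                k = P @ xi / (lam + xi @ P @ xi)       # gain vector
                theta = theta + k * (yi - xi @ theta)  # update on prediction error
                P = (P - np.outer(k, xi @ P)) / lam    # covariance update
            return theta

        rng = np.random.default_rng(7)
        nwp_wind = rng.weibull(2.0, 5000) * 9.0        # predicted wind speed (m/s)
        X = np.column_stack([np.ones_like(nwp_wind), nwp_wind])
        power = 0.9 * nwp_wind - 1.2 + rng.normal(0.0, 0.8, nwp_wind.size)
        print("adaptive MOS coefficients:", rls(X, power))  # approx [-1.2, 0.9]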

  1. Structure of nonevaporating sprays - Measurements and predictions

    Science.gov (United States)

    Solomon, A. S. P.; Shuen, J.-S.; Zhang, Q.-F.; Faeth, G. M.

    1984-01-01

    Structure measurements were completed within the dilute portion of axisymmetric nonevaporating sprays (SMD of 30 and 87 microns) injected into a still air environment, including: mean and fluctuating gas velocities and Reynolds stress using laser-Doppler anemometry; mean liquid fluxes using isokinetic sampling; drop sizes using slide impaction; and drop sizes and velocities using multiflash photography. The new measurements were used to evaluate three representative models of sprays: (1) a locally homogeneous flow (LHF) model, where slip between the phases was neglected; (2) a deterministic separated flow (DSF) model, where slip was considered but effects of drop interaction with turbulent fluctuations were ignored; and (3) a stochastic separated flow (SSF) model, where effects of both interphase slip and turbulent fluctuations were considered using random sampling for turbulence properties in conjunction with random-walk computations for drop motion. The LHF and DSF models were unsatisfactory for present test conditions-both underestimating flow widths and the rate of spread of drops. In contrast, the SSF model provided reasonably accurate predictions, including effects of enhanced spreading rates of sprays due to drop dispersion by turbulence, with all empirical parameters fixed from earlier work.

  2. A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models

    Science.gov (United States)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha

    2018-01-01

    It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.

  3. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  4. Predicting responses from Rasch measures.

    Science.gov (United States)

    Linacre, John M

    2010-01-01

    There is a growing family of Rasch models for polytomous observations. Selecting a suitable model for an existing dataset, estimating its parameters and evaluating its fit is now routine. Problems arise when the model parameters are to be estimated from the current data, but used to predict future data. In particular, ambiguities in the nature of the current data, or overfit of the model to the current dataset, may mean that better fit to the current data may lead to worse fit to future data. The predictive power of several Rasch and Rasch-related models are discussed in the context of the Netflix Prize. Rasch-related models are proposed based on Singular Value Decomposition (SVD) and Boltzmann Machines.

  5. Predictive models for PEM-electrolyzer performance using adaptive neuro-fuzzy inference systems

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Steffen [University of Tasmania, Hobart 7001, Tasmania (Australia); Karri, Vishy [Australian College of Kuwait (Kuwait)

    2010-09-15

    Predictive models were built using neural-network-based Adaptive Neuro-Fuzzy Inference Systems for hydrogen flow rate, electrolyzer system-efficiency and stack-efficiency, respectively. A comprehensive experimental database forms the foundation for the predictive models. It is argued that, due to the high costs associated with the hydrogen measuring equipment, these reliable predictive models can be implemented as virtual sensors. These models can also be used on-line for monitoring and safety of hydrogen equipment. The quantitative accuracy of the predictive models is appraised using statistical techniques. These mathematical models are found to be reliable predictive tools with an excellent accuracy of ±3% compared with experimental values. The predictive nature of these models did not show any significant bias toward either over-prediction or under-prediction. These predictive models, built on a sound mathematical and quantitative basis, can be seen as a step towards establishing hydrogen performance prediction models as generic virtual sensors for wider safety and monitoring applications. (author)

  6. Prediction of human core body temperature using non-invasive measurement methods.

    Science.gov (United States)

    Niedermann, Reto; Wyss, Eva; Annaheim, Simon; Psikuta, Agnes; Davey, Sarah; Rossi, René Michel

    2014-01-01

    The measurement of core body temperature is an efficient method for monitoring heat stress amongst workers in hot conditions. However, invasive measurement of core body temperature (e.g. rectal, intestinal, oesophageal temperature) is impractical for such applications. Therefore, the aim of this study was to define relevant non-invasive measures to predict core body temperature under various conditions. We conducted two human subject studies with different experimental protocols, different environmental temperatures (10 °C, 30 °C) and different subjects. In both studies the same non-invasive measurement methods (skin temperature, skin heat flux, heart rate) were applied. A principal component analysis was conducted to extract independent factors, which were then used in a linear regression model. We identified six parameters (three skin temperatures, two skin heat fluxes and heart rate), which were included for the calculation of two factors. The predictive value of these factors for core body temperature was evaluated by a multiple regression analysis. The calculated root mean square deviation (rmsd) was in the range from 0.28 °C to 0.34 °C for all environmental conditions. These errors are similar to previous models using non-invasive measures to predict core body temperature. The results from this study illustrate that multiple physiological parameters (e.g. skin temperature and skin heat fluxes) are needed to predict core body temperature. In addition, the physiological measurements chosen in this study and the algorithm defined in this work are potentially applicable for real-time core body temperature monitoring to assess health risk in a broad range of working conditions.
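
    A rough sketch of the pipeline described (principal components extracted from the six non-invasive channels, then linear regression of core temperature on the resulting factors); the synthetic data are only meant to exercise the computation, not to reproduce the study.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 300
        # Six channels: three skin temperatures, two skin heat fluxes, heart rate.
        mixing = rng.normal(size=(6, 6))
        raw = rng.normal(size=(n, 6)) @ mixing + rng.normal(0.0, 0.1, (n, 6))
        core = 37.0 + 0.4 * raw[:, 0] - 0.2 * raw[:, 3] + rng.normal(0.0, 0.15, n)

        # Principal component analysis: keep two independent factors.
        z = (raw - raw.mean(axis=0)) / raw.std(axis=0)
        _, _, vt = np.linalg.svd(z, full_matrices=False)
        factors = z @ vt[:2].T

        # Linear regression of core body temperature on the two factors.
        A = np.column_stack([np.ones(n), factors])
        coef, *_ = np.linalg.lstsq(A, core, rcond=None)
        rmsd = np.sqrt(np.mean((A @ coef - core) ** 2))
        print(f"rmsd = {rmsd:.2f} degC")   # the study reports 0.28-0.34 degC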

  7. Precision comparison of the erosion rates derived from 137Cs measurements models with predictions based on empirical relationship

    International Nuclear Information System (INIS)

    Yang Mingyi; Liu Puling; Li Liqing

    2004-01-01

    Soil samples were collected in 6 cultivated runoff plots using a grid sampling method, and the soil erosion rates derived from 137Cs measurements were calculated. The precision of the models of Zhang Xinbao, Zhou Weizhi, Yang Hao and Walling was compared with predictions based on an empirical relationship. The data showed that the precision of the 4 models is high within a 50 m slope length, except for slopes with a low angle and short length. In relative terms, the precision of Walling's model is better than that of Zhang Xinbao, Zhou Weizhi and Yang Hao. In addition, the relationship between the parameter Γ in Walling's improved model and the slope angle was analyzed; the relation is Y = 0.0109·X^1.0072. (authors)

  8. Clinical Prediction Performance of Glaucoma Progression Using a 2-Dimensional Continuous-Time Hidden Markov Model with Structural and Functional Measurements.

    Science.gov (United States)

    Song, Youngseok; Ishikawa, Hiroshi; Wu, Mengfei; Liu, Yu-Ying; Lucy, Katie A; Lavinsky, Fabio; Liu, Mengling; Wollstein, Gadi; Schuman, Joel S

    2018-03-20

    Previously, we introduced a state-based 2-dimensional continuous-time hidden Markov model (2D CT HMM) to model the pattern of detected glaucoma changes using structural and functional information simultaneously. The purpose of this study was to evaluate the detected glaucoma change prediction performance of the model in a real clinical setting using a retrospective longitudinal dataset. Longitudinal, retrospective study. One hundred thirty-four eyes from 134 participants diagnosed with glaucoma or as glaucoma suspects (average follow-up, 4.4±1.2 years; average number of visits, 7.1±1.8). A 2D CT HMM was trained using OCT (Cirrus HD-OCT; Zeiss, Dublin, CA) average circumpapillary retinal nerve fiber layer (cRNFL) thickness and visual field index (VFI) or mean deviation (MD; Humphrey Field Analyzer; Zeiss). The model was trained using a subset of the data (107 of 134 eyes [80%]) including all visits except for the last visit, which was used to test the prediction performance (training set). Additionally, the remaining 27 eyes were used for secondary performance testing as an independent group (validation set). The 2D CT HMM predicts 1 of 4 possible detected state changes based on 1 input state. Prediction accuracy was assessed as the percentage of correct prediction against the patient's actual recorded state. In addition, deviations of the predicted long-term detected change paths from the actual detected change paths were measured. Baseline mean ± standard deviation age was 61.9±11.4 years, VFI was 90.7±17.4, MD was -3.50±6.04 dB, and cRNFL thickness was 74.9±12.2 μm. The accuracy of detected glaucoma change prediction using the training set was comparable with the validation set (57.0% and 68.0%, respectively). Prediction deviation from the actual detected change path showed stability throughout patient follow-up. The 2D CT HMM demonstrated promising performance in predicting detected glaucoma change in a simulated clinical setting.

  9. Measurement and prediction of sensitization development in austenitic stainless steels

    International Nuclear Information System (INIS)

    Bruemmer, S.M.; Charlot, L.A.; Atteridge, D.G.

    1985-10-01

    The effects of thermal and thermomechanical treatments on sensitization development in Type 304 and 316 stainless steels have been measured and compared to model predictions. Sensitization development resulting from isothermal, continuous cooling and pipe welding treatments has been evaluated. An empirically-modified, theoretically-based model is shown to accurately predict material degree of sensitization (DOS) as expressed by the electrochemical potentiokinetic reactivation (EPR) test after both simple and complex treatments. Material DOS is also examined using analytical electron microscopy to document grain boundary chromium depletion and is compared to EPR test results. 9 refs., 13 figs

  10. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    Full Text Available This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the question of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures and evaluating performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data is presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold. First, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and second, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.

  11. Clinical Decision Support Model to Predict Occlusal Force in Bruxism Patients.

    Science.gov (United States)

    Thanathornwong, Bhornsawan; Suebnukarn, Siriwan

    2017-10-01

    The aim of this study was to develop a decision support model for the prediction of occlusal force from the size and color of articulating paper markings in bruxism patients. We used the information from the datasets of 30 bruxism patients in which digital measurements of the size and color of articulating paper markings (12-µm Hanel; Coltene/Whaledent GmbH, Langenau, Germany) on canine protected hard stabilization splints were measured in pixels (P) and in red (R), green (G), and blue (B) values using Adobe Photoshop software (Adobe Systems, San Jose, CA, USA). The occlusal force (F) was measured using T-Scan III (Tekscan Inc., South Boston, MA, USA). The multiple regression equation was applied to predict F from the P and RGB. Model evaluation was performed using the datasets from 10 new patients. The patient's occlusal force measured by T-Scan III was used as a 'gold standard' to compare with the occlusal force predicted by the multiple regression model. The results demonstrate that the correlation between the occlusal force and the pixels and RGB of the articulating paper markings was positive (F = 1.62×P + 0.07×R − 0.08×G + 0.08×B + 4.74; R² = 0.34). There was a high degree of agreement between the occlusal force of the patient measured using T-Scan III and the occlusal force predicted by the model (kappa value = 0.82). The results obtained demonstrate that the multiple regression model can predict the occlusal force using the digital values for the size and color of the articulating paper markings in bruxism patients.
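
    Since the record states the fitted equation explicitly, it can be applied directly; the marking values below are hypothetical:

        def predicted_occlusal_force(p, r, g, b):
            """Reported multiple-regression equation (R^2 = 0.34): occlusal
            force from marking size in pixels (p) and mean RGB values."""
            return 1.62 * p + 0.07 * r - 0.08 * g + 0.08 * b + 4.74

        # Hypothetical articulating-paper marking measured in Photoshop.
        print(predicted_occlusal_force(p=12.5, r=180, g=60, b=70))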

  12. Clinical Decision Support Model to Predict Occlusal Force in Bruxism Patients

    Science.gov (United States)

    Thanathornwong, Bhornsawan

    2017-01-01

    Objectives The aim of this study was to develop a decision support model for the prediction of occlusal force from the size and color of articulating paper markings in bruxism patients. Methods We used the information from the datasets of 30 bruxism patients in which digital measurements of the size and color of articulating paper markings (12-µm Hanel; Coltene/Whaledent GmbH, Langenau, Germany) on canine protected hard stabilization splints were measured in pixels (P) and in red (R), green (G), and blue (B) values using Adobe Photoshop software (Adobe Systems, San Jose, CA, USA). The occlusal force (F) was measured using T-Scan III (Tekscan Inc., South Boston, MA, USA). The multiple regression equation was applied to predict F from the P and RGB. Model evaluation was performed using the datasets from 10 new patients. The patient's occlusal force measured by T-Scan III was used as a ‘gold standard’ to compare with the occlusal force predicted by the multiple regression model. Results The results demonstrate that the correlation between the occlusal force and the pixels and RGB of the articulating paper markings was positive (F = 1.62×P + 0.07×R − 0.08×G + 0.08×B + 4.74; R² = 0.34). There was a high degree of agreement between the occlusal force of the patient measured using T-Scan III and the occlusal force predicted by the model (kappa value = 0.82). Conclusions The results obtained demonstrate that the multiple regression model can predict the occlusal force using the digital values for the size and color of the articulating paper markings in bruxism patients. PMID:29181234

  13. Qualitative and quantitative guidelines for the comparison of environmental model predictions

    International Nuclear Information System (INIS)

    Scott, M.

    1995-03-01

    The question of how to assess or compare predictions from a number of models is one of concern in the validation of models, in understanding the effects of different models and model parameterizations on model output, and ultimately in assessing model reliability. Comparison of model predictions with observed data is the basic tool of model validation, while comparison of predictions amongst different models provides one measure of model credibility. The guidance provided here offers qualitative and quantitative approaches (including graphical and statistical techniques) to such comparisons for use within the BIOMOVS II project. It is hoped that others may find it useful. It contains little technical information on the actual methods, but several references are provided for the interested reader. The guidelines are illustrated on data from the VAMP CB scenario. Unfortunately, these data do not permit all of the possible approaches to be demonstrated, since predicted uncertainties were not provided. The questions considered are concerned with a) intercomparison of model predictions and b) comparison of model predictions with the observed data. A series of examples illustrating some of the different types of data structure and some possible analyses has been constructed. A bibliography of references on model validation is provided. It is important to note that the results of the various techniques discussed here, whether qualitative or quantitative, should not be considered in isolation. Overall model performance must also include an evaluation of model structure and formulation, i.e. conceptual model uncertainties, and results for performance measures must be interpreted in this context. Consider a number of models which are used to provide predictions of a number of quantities at a number of time points. In the case of the VAMP CB scenario, the results include predictions of total deposition of Cs-137 and time dependent concentrations in various
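
    As a minimal illustration of the quantitative side of such guidance, the sketch below computes a few common comparison statistics (predicted-to-observed ratios, RMSE, rank correlation) on hypothetical values; the guidelines themselves describe many more techniques.

        import numpy as np
        from scipy import stats

        observed  = np.array([12.0, 35.0, 80.0, 150.0])   # e.g. measured Cs-137 deposition (hypothetical)
        predicted = np.array([10.0, 50.0, 60.0, 170.0])   # one model's predictions (hypothetical)

        ratio = predicted / observed                      # P/O ratio per observation
        print("geometric mean P/O:", stats.gmean(ratio))  # overall bias
        print("RMSE:", np.sqrt(np.mean((predicted - observed) ** 2)))
        print("Spearman rank corr:", stats.spearmanr(predicted, observed).correlation)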

  14. Predictive modeling and reducing cyclic variability in autoignition engines

    Science.gov (United States)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  15. Testing and analysis of internal hardwood log defect prediction models

    Science.gov (United States)

    R. Edward Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  16. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  17. Use of statistical models based on radiographic measurements to predict oviposition date and clutch size in rock iguanas (Cyclura nubila)

    International Nuclear Information System (INIS)

    Alberts, A.C.

    1995-01-01

    The ability to noninvasively estimate clutch size and predict oviposition date in reptiles can be useful not only to veterinary clinicians but also to managers of captive collections and field researchers. Measurements of egg size and shape, as well as position of the clutch within the coelomic cavity, were taken from diagnostic radiographs of 20 female Cuban rock iguanas, Cyclura nubila, 81 to 18 days prior to laying. Combined with data on maternal body size, these variables were entered into multiple regression models to predict clutch size and timing of egg laying. The model for clutch size was accurate to 0.53 ± 0.08 eggs, while the model for oviposition date was accurate to 6.22 ± 0.81 days. Equations were generated that should be applicable to this and other large Cyclura species. © 1995 Wiley-Liss, Inc

  18. Preprocedural Prediction Model for Contrast-Induced Nephropathy Patients.

    Science.gov (United States)

    Yin, Wen-Jun; Yi, Yi-Hu; Guan, Xiao-Feng; Zhou, Ling-Yun; Wang, Jiang-Lin; Li, Dai-Yang; Zuo, Xiao-Cong

    2017-02-03

    Several models have been developed for prediction of contrast-induced nephropathy (CIN); however, they only include patients receiving intra-arterial contrast media for coronary angiographic procedures, which represent a small proportion of all contrast procedures. In addition, most of them evaluate radiological interventional procedure-related variables. It is therefore necessary to develop a model for prediction of CIN before radiological procedures among patients administered contrast media. A total of 8800 patients undergoing contrast administration were randomly assigned in a 4:1 ratio to development and validation data sets. CIN was defined as an increase of 25% and/or 0.5 mg/dL in serum creatinine within 72 hours above the baseline value. Preprocedural clinical variables were used to develop the prediction model from the training data set by the machine learning method of random forest, and 5-fold cross-validation was used to evaluate the prediction accuracies of the model. Finally, we tested this model in the validation data set. The incidence of CIN was 13.38%. We built a prediction model with 13 preprocedural variables selected from 83 variables. The model obtained an area under the receiver-operating characteristic (ROC) curve (AUC) of 0.907 and gave a prediction accuracy of 80.8%, sensitivity of 82.7%, specificity of 78.8%, and Matthews correlation coefficient of 61.5%. For the first time, 3 new factors are included in the model: the decreased sodium concentration, the INR value, and the preprocedural glucose level. The newly established model shows excellent predictive ability for CIN development and thereby supports preventative measures for CIN. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
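
    A sketch of the described workflow under stated assumptions (synthetic stand-ins for the 13 preprocedural variables, a smaller forest): a 4:1 split, a random forest, and 5-fold cross-validated AUC.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split, cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(8800, 13))             # 13 preprocedural variables (synthetic)
        y = rng.binomial(1, 0.134, size=8800)       # ~13.4% CIN incidence, as reported

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)  # 4:1 ratio
        rf = RandomForestClassifier(n_estimators=200, random_state=1)
        auc = cross_val_score(rf, X_tr, y_tr, cv=5, scoring="roc_auc")
        print("5-fold CV AUC:", auc.mean())         # the paper reports 0.907 on real data
        rf.fit(X_tr, y_tr)
        print("held-out accuracy:", rf.score(X_te, y_te))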

  19. Mathematical models for indoor radon prediction

    International Nuclear Information System (INIS)

    Malanca, A.; Pessina, V.; Dallara, G.

    1995-01-01

    It is known that the indoor radon (Rn) concentration can be predicted by means of mathematical models. The simplest model relies on two variables only: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model several environmental parameters are combined into a complex equation; besides, a correlation between the ventilation rate and the Rn entry rate from the soil is admitted. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentrations were made inside two rooms placed on the second floor of a building block. One of the rooms had a single-glazed window whereas the other room had a double-pane window. During three different experimental protocols, the mean Rn concentration was always higher in the room with a double-glazed window. That behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model
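
    The two-variable model presumably takes the usual steady-state mass-balance form; the abstract does not spell the equation out, so the form below is an assumption for illustration.

        # Steady-state indoor Rn concentration: source strength divided by air-exchange rate.
        def indoor_radon(source_strength, air_exchange_rate):
            """source_strength in Bq m^-3 h^-1, air_exchange_rate in h^-1 -> Bq m^-3."""
            return source_strength / air_exchange_rate

        # A tighter (double-glazed) room has a lower air-exchange rate and hence higher Rn,
        # consistent with the behaviour described above.
        print(indoor_radon(20.0, 0.5))    # ~40 Bq/m3
        print(indoor_radon(20.0, 0.25))   # ~80 Bq/m3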

  20. Resource-estimation models and predicted discovery

    International Nuclear Information System (INIS)

    Hill, G.W.

    1982-01-01

    Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion, that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty, excludes 'estimates' based solely on expert opinion. This is illustrated by development of error measures for several persuasive models of discovery and production of oil and gas in USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing discovery of oil and of U3O8; the latter set highlights an inadequacy of available official data. Review of the oil-discovery data set provides a warrant for adjusting the time-series prediction to a higher resource figure for USA petroleum. (author)

  1. Joint hierarchical Gaussian process model with application to personalized prediction in medical monitoring.

    Science.gov (United States)

    Duan, Leo L; Wang, Xia; Clancy, John P; Szczesniak, Rhonda D

    2018-01-01

    A two-level Gaussian process (GP) joint model is proposed to improve personalized prediction of medical monitoring data. The proposed model is applied to jointly analyze multiple longitudinal biomedical outcomes, including continuous measurements and binary outcomes, to achieve better prediction in disease progression. At the population level of the hierarchy, two independent GPs are used to capture the nonlinear trends in both the continuous biomedical marker and the binary outcome, respectively; at the individual level, a third GP, which is shared by the longitudinal measurement model and the longitudinal binary model, induces the correlation between these two model components and strengthens information borrowing across individuals. The proposed model is particularly advantageous in personalized prediction. It is applied to the motivating clinical data on cystic fibrosis disease progression, for which lung function measurements and onset of acute respiratory events are monitored jointly throughout each patient's clinical course. The results from both the simulation studies and the cystic fibrosis data application suggest that the inclusion of the shared individual-level GPs under the joint model framework leads to important improvements in personalized disease progression prediction.
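
    Not the two-level joint model itself, but a minimal single-output Gaussian-process regression sketch of its population-level idea: a GP capturing a nonlinear trend in a continuous marker over follow-up time, with a personalized predictive mean and uncertainty. All data here are synthetic.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        t = np.linspace(0, 10, 40)[:, None]          # years of follow-up (synthetic)
        rng = np.random.default_rng(2)
        y = 90 - 2.0 * t.ravel() + np.sin(t.ravel()) + rng.normal(0, 1, 40)  # e.g. lung function marker

        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(t, y)
        t_new = np.array([[11.0]])                   # one-step-ahead personalized prediction
        mean, sd = gp.predict(t_new, return_std=True)
        print(f"predicted marker at year 11: {mean[0]:.1f} +/- {sd[0]:.1f}")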

  2. Estimating Time-Varying PCB Exposures Using Person-Specific Predictions to Supplement Measured Values: A Comparison of Observed and Predicted Values in Two Cohorts of Norwegian Women

    Science.gov (United States)

    Nøst, Therese Haugdahl; Breivik, Knut; Wania, Frank; Rylander, Charlotta; Odland, Jon Øyvind; Sandanger, Torkjel Manning

    2015-01-01

    Background Studies on the health effects of polychlorinated biphenyls (PCBs) call for an understanding of past and present human exposure. Time-resolved mechanistic models may supplement information on concentrations in individuals obtained from measurements and/or statistical approaches if they can be shown to reproduce empirical data. Objectives Here, we evaluated the capability of one such mechanistic model to reproduce measured PCB concentrations in individual Norwegian women. We also assessed individual life-course concentrations. Methods Concentrations of four PCB congeners in pregnant (n = 310, sampled in 2007–2009) and postmenopausal (n = 244, 2005) women were compared with person-specific predictions obtained using CoZMoMAN, an emission-based environmental fate and human food-chain bioaccumulation model. Person-specific predictions were also made using statistical regression models including dietary and lifestyle variables and concentrations. Results CoZMoMAN accurately reproduced medians and ranges of measured concentrations in the two study groups. Furthermore, rank correlations between measurements and predictions from both CoZMoMAN and regression analyses were strong (Spearman’s r > 0.67). Precision in quartile assignments from predictions was strong overall as evaluated by weighted Cohen’s kappa (> 0.6). Simulations indicated large inter-individual differences in concentrations experienced in the past. Conclusions The mechanistic model reproduced all measurements of PCB concentrations within a factor of 10, and subject ranking and quartile assignments were overall largely consistent, although they were weak within each study group. Contamination histories for individuals predicted by CoZMoMAN revealed variation between study subjects, particularly in the timing of peak concentrations. Mechanistic models can provide individual PCB exposure metrics that could serve as valuable supplements to measurements.

  3. Pesticide volatilization from soil and plant surfaces: Measurements at different scales versus model predictions

    Energy Technology Data Exchange (ETDEWEB)

    Wolters, A.

    2003-07-01

    Simulation of pesticide volatilization from plant and soil surfaces as an integral component of pesticide fate models is of utmost importance, especially as part of the PEC (predicted environmental concentrations) models used in the registration procedures for pesticides. Experimentally determined volatilization rates at different scales were compared to model predictions to improve recent approaches included in European registration models. To assess the influence of crucial factors affecting volatilization under well-defined conditions, a laboratory chamber was set up and validated. Aerodynamic conditions were adjusted to fulfill the requirements of the German guideline on assessing pesticide volatilization for registration purposes. At the semi-field scale, volatilization rates were determined in a wind-tunnel study after soil surface application of pesticides to a gleyic cambisol. The following descending order of cumulative volatilization was observed: chlorpyrifos > parathion-methyl > terbuthylazine > fenpropimorph. Parameterization of the models PEARL (pesticide emission assessment at regional and local scales) and PELMO (pesticide leaching model) was performed to mirror the experimental boundary conditions. (orig.)

  4. Predicted and actual indoor environmental quality: Verification of occupants' behaviour models in residential buildings

    DEFF Research Database (Denmark)

    Andersen, Rune Korsholm; Fabi, Valentina; Corgnati, Stefano P.

    2016-01-01

    with the building controls (windows, thermostats, solar shading etc.). During the last decade, studies about stochastic models of occupants' behaviour in relation to control of the indoor environment have been published. Often the overall aim of these models is to enable more reliable predictions of building performance using building energy performance simulations (BEPS). However, the validity of these models has only been sparsely tested. In this paper, stochastic models of occupants' behaviour from the literature were tested against measurements in five apartments. In a monitoring campaign, measurements of the indoor environment were collected in each apartment. However, comparisons of the average stochastic predictions with the measured temperatures, relative humidity and CO2 concentrations revealed that the models did not predict the actual indoor environmental conditions well.

  5. Measured and predicted electron density at 600 km over Tucuman and Huancayo

    International Nuclear Information System (INIS)

    Ezquer, R.G.; Cabrera, M.A.; Araoz, L.; Mosert, M.; Radicella, S.M.

    2002-01-01

    The electron density at 600 km altitude (N600) predicted by IRI is compared with measurements for a given particular time and place (not averages) obtained with the Japanese Hinotori satellite. The results showed that the best agreement between predictions and measurements was obtained near the magnetic equator. Disagreements of about 50% were observed near the southern peak of the equatorial anomaly (EA) when the model uses the CCIR and URSI options to obtain the peak characteristics. (author)

  6. Researches of fruit quality prediction model based on near infrared spectrum

    Science.gov (United States)

    Shen, Yulin; Li, Lian

    2018-04-01

    With the improvement in standards for food quality and safety, people pay more attention to the internal quality of fruits, so the measurement of fruit internal quality is increasingly imperative. In general, nondestructive soluble solid content (SSC) and total acid content (TAC) analysis of fruits is vital and effective for quality measurement in global fresh produce markets, so in this paper we aim at establishing a novel fruit internal quality prediction model based on SSC and TAC for near infrared spectra. Firstly, fruit quality prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP adaboost strong classifier, PCA + ELM and PCA + LS_SVM classifier are designed and implemented respectively. Then, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training samples and test samples. Thirdly, we obtain the optimal models by comparing 15 kinds of prediction model based on the theory of multi-classifier competition; specifically, nonparametric estimation is introduced to measure the effectiveness of the proposed models, with the reliability and variance of the nonparametric estimates used to evaluate each model's predictions, while the estimated value and confidence interval serve as a reference. The experimental results demonstrate that this approach can better achieve the optimal evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two optimal models obtained from the nonparametric estimation; empirical testing indicates that the proposed method can provide more accurate and effective results than other forecasting methods.
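
    A simplified sketch of one branch of this chain, with assumptions: Savitzky-Golay smoothing applied directly to synthetic spectra (the NSCT-domain step is omitted), PCA for dimension reduction, and scikit-learn's MLPRegressor standing in for the BP neural network.

        import numpy as np
        from scipy.signal import savgol_filter
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(3)
        spectra = rng.normal(size=(120, 600))        # 120 fruit samples, 600 NIR wavelengths (synthetic)
        ssc = rng.uniform(8, 16, size=120)           # soluble solid content targets (synthetic)

        smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
        model = make_pipeline(PCA(n_components=10), MLPRegressor(max_iter=2000, random_state=3))
        model.fit(smoothed, ssc)
        print("predicted SSC of first sample:", model.predict(smoothed[:1]))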

  7. Modeling the Ionosphere with GPS and Rotation Measure Observations

    Science.gov (United States)

    Malins, J. B.; Taylor, G. B.; White, S. M.; Dowell, J.

    2017-12-01

    Advances in digital processing have created new tools for examining the ionosphere. We have combined data from dual-frequency GPS receivers, digital ionosondes and observations from the Long Wavelength Array (LWA), a 256-dipole low-frequency radio telescope situated in central New Mexico, in order to examine ionospheric profiles. By studying polarized pulsars, the LWA is able to very accurately determine the Faraday rotation caused by the ionosphere. By combining these data with the International Geomagnetic Reference Field, the LWA can evaluate ionospheric profiles and how well they predict the actual Faraday rotation. Dual-frequency GPS measurements of total electron content, as well as digisonde measurements, were used to model the ionosphere and to predict the Faraday rotation to within 0.1 rad/m². Additionally, it was discovered that the predicted topside profile of the digisonde data did not accurately predict Faraday rotation measurements, suggesting a need to reexamine the methods for creating the predicted topside profile. I will discuss the methods used to measure rotation measure and ionosphere profiles, as well as possible corrections to the topside model.
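
    A back-of-envelope version of the quantity being validated: ionospheric rotation measure from GPS-derived TEC and the geomagnetic field, assuming a thin shell with a constant parallel field. This is an approximation for illustration, not the LWA pipeline.

        # RM [rad/m^2] = K * integral(n_e * B_parallel ds), with K = e^3 / (8 pi^2 eps0 m^2 c^3).
        def rotation_measure(tec_tecu, B_parallel_tesla):
            """TEC in TEC units (1 TECU = 1e16 electrons/m^2), B in tesla -> RM in rad/m^2."""
            K = 2.63e-13   # SI value of the constant above
            return K * tec_tecu * 1e16 * B_parallel_tesla

        print(rotation_measure(30.0, 5e-5))   # ~4 rad/m^2 for 30 TECU and a 50 uT field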

  8. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  9. Modeling a gamma spectroscopy system and predicting spectra with Geant-4

    International Nuclear Information System (INIS)

    Sahin, D.; Uenlue, K.

    2009-01-01

    Activity predictor software was previously developed to foresee activities, exposure rates and gamma spectra of activated samples for Radiation Science and Engineering Center (RSEC), Penn State Breazeale Reactor (PSBR), Neutron Activation Analysis (NAA) measurements. With the Activity Predictor it was demonstrated that the predicted spectra were less than satisfactory. In order to obtain better predicted spectra, a new detailed model of the RSEC NAA spectroscopy system with a High Purity Germanium (HPGe) detector was developed using Geant-4. The model was validated with a National Bureau of Standards certified Co-60 source and three activated high-purity samples at PSBR. The predicted spectra agreed well with measured spectra. Errors in net photopeak area values were 8.6-33.6%. Along with the previously developed activity predictor software, this new model in Geant-4 provided realistic spectrum prediction for NAA experiments at the RSEC PSBR. (author)

  10. Numerical predictions of particle dispersed two-phase flows, using the LSD and SSF models

    International Nuclear Information System (INIS)

    Avila, R.; Cervantes de Gortari, J.; Universidad Nacional Autonoma de Mexico, Mexico City. Facultad de Ingenieria)

    1988-01-01

    A modified version of a numerical scheme suitable for predicting parabolic dispersed two-phase flow is presented. The original version of this scheme was used to predict the test cases discussed during the 3rd workshop on TPF predictions in Belgrade, 1986. In this paper, two particle dispersion models are included which use the Lagrangian approach to predict test cases 1 and 3 of the 4th workshop. For the prediction of test case 1 the Lagrangian Stochastic Deterministic (LSD) model is used, providing acceptably good results for mean and turbulent quantities of both solid and gas phases; however, the computed void fraction distribution is not in agreement with the measurements at locations away from the inlet, especially near the walls. Test case 3 is predicted using both the LSD and the Stochastic Separated Flow (SSF) models. It was found that the effects of turbulence modulation are large when the LSD model is used, whereas the particles have a negligible influence on the continuous phase if the SSF model is utilized for the computations. Predictions of gas phase properties based on both models agree well with measurements; however, the agreement between calculated and measured solid phase properties is less satisfactory. (orig.)

  11. A new measure-correlate-predict approach for resource assessment

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A; Landberg, L [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark); Madsen, H [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    In order to find reasonable candidate sites for wind farms, it is of great importance to be able to calculate the wind resource at potential sites. One way to solve this problem is to measure wind speed and direction at the site, and use these measurements to predict the resource. If the measurements at the potential site cover less than e.g. one year, which most likely will be the case, it is not possible to get a reliable estimate of the long-term resource using this approach. If long-term measurements from e.g. some nearby meteorological station are available, however, then statistical methods can be used to find a relation between the measurements at the site and at the meteorological station. This relation can then be used to transform the long-term measurements to the potential site, and the resource can be calculated using the transformed measurements. Here, a varying-coefficient model, estimated using local regression, is applied in order to establish a relation between the measurements. The approach is evaluated using measurements from two sites, located approximately two kilometres apart, and the results show that the resource in this case can be predicted accurately, although this approach has serious shortcomings. (au)
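
    The classical measure-correlate-predict step in its simplest linear form, on synthetic data; the paper's varying-coefficient local regression generalises this by letting the coefficients depend on, e.g., wind direction.

        import numpy as np

        rng = np.random.default_rng(4)
        ref_concurrent = rng.weibull(2.0, 1000) * 8        # met-station speeds, overlap period (synthetic)
        site_concurrent = 0.9 * ref_concurrent + rng.normal(0, 0.5, 1000)  # concurrent site speeds

        # Fit the site-vs-reference relation on the overlap period
        slope, intercept = np.polyfit(ref_concurrent, site_concurrent, 1)

        # Transform the long-term reference record to the candidate site
        ref_longterm = rng.weibull(2.0, 20000) * 8         # multi-year reference record (synthetic)
        site_longterm = slope * ref_longterm + intercept
        print("estimated long-term mean speed at site:", site_longterm.mean())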

  12. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der

    2014-01-01

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard for uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.

  13. New tips for structure prediction by comparative modeling

    Science.gov (United States)

    Rayan, Anwar

    2009-01-01

    Comparative modelling is utilized to predict the 3-dimensional conformation of a given protein (target) based on its sequence alignment to an experimentally determined protein structure (template). The use of such a technique is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions, as commonly accepted, depends on the sequence identity of the target protein to the template. To assess the relationship between sequence identity and model quality, we carried out an analysis of a set of 4753 sequence and structure alignments. Throughout this research, the model accuracy was measured by root mean square deviations of Cα atoms of the target-template structures. Surprisingly, the results show that sequence identity of the target protein to the template is not a good descriptor to predict the accuracy of the 3-D structure model. However, in a large number of cases, comparative modelling with lower sequence identity of target to template proteins led to more accurate 3-D structure models. As a consequence of this study, we suggest new tips for improving the quality of comparative models, particularly for models whose target-template sequence identity is below 50%. PMID:19255646
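
    The accuracy measure used here, as a small function: RMSD over Cα coordinates of target and model structures. The superposition itself (e.g. by the Kabsch algorithm) is assumed to have been done already.

        import numpy as np

        def ca_rmsd(coords_model, coords_target):
            """Both arrays of shape (n_residues, 3), already optimally superposed."""
            diff = coords_model - coords_target
            return np.sqrt((diff ** 2).sum(axis=1).mean())

        a = np.random.default_rng(5).normal(size=(150, 3))   # synthetic C-alpha coordinates
        print(ca_rmsd(a, a + 0.5))   # uniform 0.5 A shift per axis -> ~0.87 A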

  14. Electrochemical measurements and modeling predictions in boiling water reactors under various operating conditions

    International Nuclear Information System (INIS)

    Indig, M.E.

    1991-01-01

    One important issue for providing life extension to operating boiling water nuclear reactors (BWRs) is the control of stress corrosion cracking in all sections of the primary coolant circuit. This paper links experimental and theoretical methods that provide understanding and measurements of the critical parameter, the electrochemical potential (ECP), and its application to determining crack growth rate among and within the family of BWRs. Measurement of in-core ECP required the development of a new family of radiation-resistant sensors. With these sensors, ECPs were measured in the core and piping of two operating BWRs. Concurrent crack growth measurements were used to benchmark a crack growth prediction algorithm with measured ECPs

  15. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  16. Prediction and measurement of thermally induced cambial tissue necrosis in tree stems

    Science.gov (United States)

    Joshua L. Jones; Brent W. Webb; Bret W. Butler; Matthew B. Dickinson; Daniel Jimenez; James Reardon; Anthony S. Bova

    2006-01-01

    A model for fire-induced heating in tree stems is linked to a recently reported model for tissue necrosis. The combined model produces cambial tissue necrosis predictions in a tree stem as a function of heating rate, heating time, tree species, and stem diameter. Model accuracy is evaluated by comparison with experimental measurements in two hardwood and two softwood...

  17. Ground-truthing predicted indoor radon concentrations by using soil-gas radon measurements

    International Nuclear Information System (INIS)

    Reimer, G.M.

    2001-01-01

    Predicting indoor radon potential has gained in importance even as the national radon programs began to wane. A cooperative study to produce radon potential maps was conducted by the Environmental Protection Agency (EPA), U.S. Geological Survey (USGS), Department of Energy (DOE), and Lawrence Berkeley Laboratory (LBL), with the latter taking the lead role. A county-wide predictive model was developed, based primarily on the National Uranium Resource Evaluation (NURE) aerorad data and secondarily on geology, both small-scale data bases. However, that model breaks down in counties of complex geology and does not provide a means to evaluate the potential of an individual home or building site. Soil-gas radon measurements on a large scale are shown to provide information for estimating radon potential at individual sites and to sort out the complex geology, so that the small-scale prediction index can be validated. An example from Frederick County, Maryland indicates a positive correlation between indoor measurements and soil-gas data. The method does not rely on a single measurement, but on a series that incorporates seasonal and meteorological considerations. (author)

  18. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. Team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km; the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  19. An adaptive distance measure for use with nonparametric models

    International Nuclear Information System (INIS)

    Garvey, D. R.; Hines, J. W.

    2006-01-01

    Distance measures perform a critical task in nonparametric, locally weighted regression. Locally weighted regression (LWR) models are a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, exemplar vectors according to a three-step process. First, the distance of the query vector to each of the exemplar vectors is calculated. Next, these distances are passed to a kernel function, which converts the distances to similarities or weights. Finally, the model output or response is calculated by performing locally weighted polynomial regression. To date, traditional distance measures, such as the Euclidean, weighted Euclidean, and L1-norm, have been used as the first step in the prediction process. Since these measures do not take into consideration sensor failures and drift, they are inherently ill-suited for application to 'real world' systems. This paper describes one such LWR model, namely auto associative kernel regression (AAKR), and describes a new, Adaptive Euclidean distance measure that can be used to dynamically compensate for faulty sensor inputs. In this new distance measure, the query observations that lie outside of the training range (i.e. outside the minimum and maximum input exemplars) are dropped from the distance calculation. This allows the distance calculation to be robust to sensor drifts and failures, in addition to providing a method for managing inputs that exceed the training range. In this paper, AAKR models using the standard and Adaptive Euclidean distance are developed and compared for the pressure system of an operating nuclear power plant. It is shown that when the standard Euclidean distance is used for data with failed inputs, significant errors in the AAKR predictions can result. By using the Adaptive Euclidean distance it is shown that high fidelity predictions are possible, in spite of the input failure. In fact, it is shown that with the Adaptive Euclidean distance prediction
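
    A minimal sketch of the three-step prediction with the adaptive distance, under simplifying assumptions (a Gaussian kernel, and a kernel-weighted mean in place of locally weighted polynomial regression): query components outside the training range are simply dropped from the distance calculation.

        import numpy as np

        def aakr_predict(X_train, query, bandwidth=1.0):
            lo, hi = X_train.min(axis=0), X_train.max(axis=0)
            valid = (query >= lo) & (query <= hi)             # adaptive: ignore out-of-range inputs
            d = np.sqrt(((X_train[:, valid] - query[valid]) ** 2).sum(axis=1))  # step 1: distances
            w = np.exp(-d ** 2 / (2 * bandwidth ** 2))        # step 2: kernel weights
            return (w[:, None] * X_train).sum(axis=0) / w.sum()  # step 3: weighted estimate

        X = np.random.default_rng(6).normal(size=(200, 4))    # historical exemplar vectors (synthetic)
        q = X[0].copy()
        q[2] = 50.0                                           # simulated drifted/failed sensor
        print(aakr_predict(X, q))                             # corrected estimate of all four signals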

  20. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  1. Estimating Time-Varying PCB Exposures Using Person-Specific Predictions to Supplement Measured Values: A Comparison of Observed and Predicted Values in Two Cohorts of Norwegian Women.

    Science.gov (United States)

    Nøst, Therese Haugdahl; Breivik, Knut; Wania, Frank; Rylander, Charlotta; Odland, Jon Øyvind; Sandanger, Torkjel Manning

    2016-03-01

    Studies on the health effects of polychlorinated biphenyls (PCBs) call for an understanding of past and present human exposure. Time-resolved mechanistic models may supplement information on concentrations in individuals obtained from measurements and/or statistical approaches if they can be shown to reproduce empirical data. Here, we evaluated the capability of one such mechanistic model to reproduce measured PCB concentrations in individual Norwegian women. We also assessed individual life-course concentrations. Concentrations of four PCB congeners in pregnant (n = 310, sampled in 2007-2009) and postmenopausal (n = 244, 2005) women were compared with person-specific predictions obtained using CoZMoMAN, an emission-based environmental fate and human food-chain bioaccumulation model. Person-specific predictions were also made using statistical regression models including dietary and lifestyle variables and concentrations. CoZMoMAN accurately reproduced medians and ranges of measured concentrations in the two study groups. Furthermore, rank correlations between measurements and predictions from both CoZMoMAN and regression analyses were strong (Spearman's r > 0.67). Precision in quartile assignments from predictions was strong overall as evaluated by weighted Cohen's kappa (> 0.6). Simulations indicated large inter-individual differences in concentrations experienced in the past. The mechanistic model reproduced all measurements of PCB concentrations within a factor of 10, and subject ranking and quartile assignments were overall largely consistent, although they were weak within each study group. Contamination histories for individuals predicted by CoZMoMAN revealed variation between study subjects, particularly in the timing of peak concentrations. Mechanistic models can provide individual PCB exposure metrics that could serve as valuable supplements to measurements.

  2. Measurement and prediction of voice support and room gain

    DEFF Research Database (Denmark)

    Pelegrin Garcia, David; Brunskog, Jonas; Lyberg-Åhlander, Viveka

    2012-01-01

    and good acoustical quality lies in the range between -14 and -9 dB, whereas the room gain is in the range between 0.2 and 0.5 dB. The prediction model for voice support describes the measurements in the classrooms with a coefficient of determination of 0.84 and a standard deviation of 1.2 dB.

  3. External validation of multivariable prediction models: a systematic review of methodological conduct and reporting

    Science.gov (United States)

    2014-01-01

    Background Before considering whether to use a multivariable (diagnostic or prognostic) prediction model, it is essential that its performance be evaluated in data that were not used to develop the model (referred to as external validation). We critically appraised the methodological conduct and reporting of external validation studies of multivariable prediction models. Methods We conducted a systematic review of articles describing some form of external validation of one or more multivariable prediction models indexed in PubMed core clinical journals published in 2010. Study data were extracted in duplicate on design, sample size, handling of missing data, reference to the original study developing the prediction models and predictive performance measures. Results 11,826 articles were identified and 78 were included for full review, which described the evaluation of 120 prediction models in participant data that were not used to develop the model. Thirty-three articles described both the development of a prediction model and an evaluation of its performance on a separate dataset, and 45 articles described only the evaluation of an existing published prediction model on another dataset. Fifty-seven percent of the prediction models were presented and evaluated as simplified scoring systems. Sixteen percent of articles failed to report the number of outcome events in the validation datasets. Fifty-four percent of studies made no explicit mention of missing data. Sixty-seven percent did not report evaluating model calibration whilst most studies evaluated model discrimination. It was often unclear whether the reported performance measures were for the full regression model or for the simplified models. Conclusions The vast majority of studies describing some form of external validation of a multivariable prediction model were poorly reported with key details frequently not presented. The validation studies were characterised by poor design, inappropriate handling
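
    For reference, the two performance aspects the review highlights as under-reported, computed on a hypothetical validation set: discrimination as AUC, and calibration as the slope of a logistic refit on the published model's linear predictor.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(7)
        lp = rng.normal(size=500)                        # published model's linear predictor (synthetic)
        y = rng.binomial(1, 1 / (1 + np.exp(-lp)))       # outcomes in the validation set

        print("discrimination (AUC):", roc_auc_score(y, lp))
        cal = LogisticRegression().fit(lp[:, None], y)
        print("calibration slope:", cal.coef_[0][0])     # ~1.0 indicates good calibration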

  4. A study of quality measures for protein threading models

    Directory of Open Access Journals (Sweden)

    Rychlewski Leszek

    2001-08-01

    Background Prediction of protein structures is one of the fundamental challenges in biology today. To fully understand how well different prediction methods perform, it is necessary to use measures that evaluate their performance. Every two years, starting in 1994, the CASP (Critical Assessment of protein Structure Prediction) process has been organized to evaluate the ability of different predictors to blindly predict the structure of proteins. To capture different features of the models, several measures have been developed during the CASP processes. However, these measures have not been examined in detail before. In an attempt to develop fully automatic measures that can be used in CASP, as well as in other types of benchmarking experiments, we have compared twenty-one measures. These measures include the measures used in CASP3 and CASP2 as well as measures introduced later. We have studied their ability to distinguish between the better and worse models submitted to CASP3 and the correlation between them. Results Using a small set of 1340 models for 23 different targets, we show that most measures correlate with each other. Most pairs of measures show a correlation coefficient of about 0.5. The correlation is slightly higher for measures of similar types. We found that a significant problem when developing automatic measures is how to deal with proteins of different length. The comparison between different measures is also complicated because many measures depend on the size of the target. We show that the manual assessment can be reproduced to about 70% using automatic measures. Alignment-independent measures detect slightly more of the models with the correct fold, while alignment-dependent measures agree better when selecting the best models for each target. Finally, we show that using automatic measures would, to a large extent, reproduce the assessors' ranking of the predictors at CASP3. Conclusions We show that given a

  5. Modelling and prediction of non-stationary optical turbulence behaviour

    NARCIS (Netherlands)

    Doelman, N.J.; Osborn, J.

    2016-01-01

    There is a strong need to model the temporal fluctuations in turbulence parameters, for instance for scheduling, simulation and prediction purposes. This paper aims at modelling the dynamic behaviour of the turbulence coherence length r0, utilising measurement data from the Stereo-SCIDAR instrument

  6. Wind Speed Prediction Using a Univariate ARIMA Model and a Multivariate NARX Model

    Directory of Open Access Journals (Sweden)

    Erasmo Cadenas

    2016-02-01

    Two one-step-ahead wind speed forecasting models were compared. A univariate model was developed using a linear autoregressive integrated moving average (ARIMA) model. This method's performance is well studied for a large number of prediction problems. The other is a multivariate model developed using a nonlinear autoregressive exogenous artificial neural network (NARX). This uses the variables barometric pressure, air temperature, wind direction and solar radiation or relative humidity, as well as delayed wind speed. Both models were developed from two databases from two sites: an hourly average measurements database from La Mata, Oaxaca, Mexico, and a ten-minute average measurements database from Metepec, Hidalgo, Mexico. The main objective was to compare the impact of the various meteorological variables on the performance of the multivariate model of wind speed prediction with respect to the high-performance univariate linear model. The NARX model gave better results, with improvements on the ARIMA model of between 5.5% and 10.6% for the hourly database and of between 2.3% and 12.8% for the ten-minute database, for mean absolute error and mean squared error, respectively.
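
    A one-step-ahead forecast from the univariate half of the comparison, using statsmodels' ARIMA on a synthetic hourly series; the ARIMA order here is a placeholder, not the one identified in the paper.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(8)
        speed = 7 + 0.1 * np.cumsum(rng.normal(size=500))   # synthetic hourly wind speeds (m/s)

        fit = ARIMA(speed, order=(2, 1, 1)).fit()           # placeholder order
        print("next-hour forecast:", fit.forecast(steps=1))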

  7. A Validation of Subchannel Based CHF Prediction Model for Rod Bundles

    International Nuclear Information System (INIS)

    Hwang, Dae-Hyun; Kim, Seong-Jin

    2015-01-01

    A large CHF data base was procured from various sources, which included square and non-square lattice test bundles. CHF prediction accuracy was evaluated for various models, including the CHF lookup table method, empirical correlations, and phenomenological DNB models. The parametric effects of the mass velocity and the unheated wall were investigated from the experimental results and incorporated into the development of a local-parameter CHF correlation applicable to APWR conditions. According to the CHF design criterion, CHF should not occur at the hottest rod in the reactor core during normal operation and anticipated operational occurrences, with at least a 95% probability at a 95% confidence level. This is accomplished by assuring that the minimum DNBR (Departure from Nucleate Boiling Ratio) in the reactor core is greater than the limit DNBR, which accounts for the accuracy of the CHF prediction model. The limit DNBR can be determined from the inverse of the lower tolerance limit of M/P, evaluated from the measured-to-predicted CHF ratios for the relevant CHF data base. It is important to evaluate the adequacy of the CHF prediction model for application to actual reactor core conditions. Validation of a CHF prediction model provides the degree of accuracy inferred from the comparison of solution and data. To achieve the required accuracy for the CHF prediction model, it may be necessary to calibrate the model parameters by employing the validation results. If the accuracy of the model is acceptable, then it is applied to the real, complex system with the inferred accuracy of the model. In the conventional approach, the accuracy of the CHF prediction model was evaluated from the M/P statistics for the relevant CHF data base, obtained by comparing the nominal values of the predicted and measured CHFs. The experimental uncertainty of the CHF data was not considered in this approach to determining the limit DNBR. When a subchannel based CHF prediction model
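
    One way such a limit can be derived from M/P statistics, assuming normally distributed ratios: a one-sided 95%/95% lower tolerance limit via the standard noncentral-t construction, shown here on synthetic ratios.

        import numpy as np
        from scipy import stats

        mp = np.random.default_rng(9).normal(1.0, 0.08, 300)   # synthetic M/P ratios
        n = len(mp)
        z95 = stats.norm.ppf(0.95)
        k = stats.nct.ppf(0.95, df=n - 1, nc=z95 * np.sqrt(n)) / np.sqrt(n)  # 95/95 tolerance factor
        lower_95_95 = mp.mean() - k * mp.std(ddof=1)
        print("95/95 lower tolerance limit of M/P:", lower_95_95)
        print("implied limit DNBR:", 1 / lower_95_95)          # the inverse, as described above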

  8. Standard Model measurements with the ATLAS detector

    Directory of Open Access Journals (Sweden)

    Hassani Samira

    2015-01-01

    Various Standard Model measurements have been performed in proton-proton collisions at a centre-of-mass energy of √s = 7 and 8 TeV using the ATLAS detector at the Large Hadron Collider. A review of a selection of the latest results of electroweak measurements, W/Z production in association with jets, jet physics and soft QCD is given. Measurements are in general found to be well described by the Standard Model predictions.

  9. A new, accurate predictive model for incident hypertension.

    Science.gov (United States)

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-11-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures. The primary study population consisted of 1605 normotensive individuals aged 20-79 years with 5-year follow-up from the population-based Study of Health in Pomerania (SHIP). The initial set was randomly split into a training and a testing set. We used a probabilistic graphical model applying a Bayesian network to create a predictive model for incident hypertension and compared the predictive performance with the established Framingham risk score for hypertension. Finally, the model was validated in 2887 participants from INTER99, a Danish community-based intervention study. In the training set of SHIP data, the Bayesian network used a small subset of relevant baseline features including age, mean arterial pressure, rs16998073, serum glucose and urinary albumin concentrations. Furthermore, we detected relevant interactions between age and serum glucose as well as between rs16998073 and urinary albumin concentrations [area under the receiver-operating characteristic curve (AUC) 0.76]. The model was confirmed in the SHIP validation set (AUC 0.78) and externally replicated in INTER99 (AUC 0.77). Compared to the established Framingham risk score for hypertension, the predictive performance of the new model was similar in the SHIP validation set and moderately better in INTER99. Data mining procedures identified a predictive model for incident hypertension, which included innovative and easy-to-measure variables. The findings promise great applicability in screening settings and clinical practice.

  10. Prediction of pork loin quality using online computer vision system and artificial intelligence model.

    Science.gov (United States)

    Sun, Xin; Young, Jennifer; Liu, Jeng-Hung; Newman, David

    2018-06-01

    The objective of this project was to develop a computer vision system (CVS) for objective measurement of pork loin under industry speed requirements. Color images of pork loin samples were acquired using a CVS. Subjective color and marbling scores were determined according to the National Pork Board standards by a trained evaluator. Instrument color measurement and crude fat percentage were used as control measurements. Image features (18 color features; 1 marbling feature; 88 texture features) were extracted from whole pork loin color images. An artificial intelligence prediction model (support vector machine) was established for pork color and marbling quality grades. The results showed that the CVS with support vector machine modeling reached the highest prediction accuracy of 92.5% for the measured pork color score and 75.0% for the measured pork marbling score. This research shows that the proposed artificial intelligence prediction model with CVS can provide an effective tool for predicting color and marbling in the pork industry at online speeds. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Comparison of the CATHENA model of Gentilly-2 end shield cooling system predictions to station data

    Energy Technology Data Exchange (ETDEWEB)

    Zagre, G.; Sabourin, G. [Candu Energy Inc., Montreal, Quebec (Canada); Chapados, S. [Hydro-Quebec, Montreal, Quebec (Canada)

    2012-07-01

    As part of the Gentilly-2 Refurbishment Project, Hydro-Quebec has elected to perform the End Shield Cooling Safety Analysis. A CATHENA model of the Gentilly-2 End Shield Cooling System was developed for this purpose. This model includes new elements compared to other CANDU6 End Shield Cooling models, such as a detailed heat exchanger and control logic model. In order to test the model's robustness and accuracy, the model predictions were compared with plant measurements. This paper summarizes this comparison between the model predictions and the station measurements. It is shown that the CATHENA model is flexible and accurate enough to predict station measurements for critical parameters, and the detailed heat exchanger model allows station transients to be reproduced. (author)

  12. Comparative studies of the ITU-T prediction model for radiofrequency radiation emission and real time measurements at some selected mobile base transceiver stations in Accra, Ghana

    International Nuclear Information System (INIS)

    Obeng, S. O

    2014-07-01

    Recent developments in the electronics industry have led to the widespread use of radiofrequency (RF) devices in various areas, including telecommunications. The increasing number of mobile base transceiver stations (BTS), as well as their proximity to residential areas, has been accompanied by public health concerns due to the radiation exposure. The main objective of this research was to compare and modify the ITU-T predictive model for radiofrequency radiation emission for BTS with measured data at some selected cell sites in Accra, Ghana. Theoretical and experimental assessments of radiofrequency exposures due to mobile base station antennas were analysed. The maximum and minimum average power densities measured from individual base stations in the town were 1.86 µW/m² and 0.00961 µW/m², respectively. The ITU-T predictive model power density ranged between 6.40 mW/m² and 0.344 W/m². The results showed a variation between measured power density levels and the ITU-T predictive model. The ITU-T model power density levels decrease with increasing radial distance, while real-time measurements do not, owing to fluctuations during measurement. The ITU-T model overestimated the power density levels by a factor of 10⁵ compared to real-time measurements. The ITU-T model was modified to reduce the level of overestimation. The results showed that radiation intensity varies from one base station to another, even at the same distance. The occupational exposure quotient ranged between 5.43E-10 and 1.89E-08, whilst the general public exposure quotient ranged between 2.72E-09 and 9.44E-08. The results show that the RF exposure levels in Accra from these mobile phone base station antennas are below the permitted RF exposure limit for the general public recommended by the International Commission on Non-Ionizing Radiation Protection. (au)
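
    A generic far-field estimate of the kind such predictive models build on; this simple free-space formula is for illustration only and is not the ITU-T procedure itself.

        import math

        def power_density(eirp_watts, distance_m):
            """Free-space power density S = EIRP / (4 pi d^2), in W/m^2."""
            return eirp_watts / (4 * math.pi * distance_m ** 2)

        # Hypothetical BTS sector: 200 W EIRP observed 100 m away
        print(power_density(eirp_watts=200.0, distance_m=100.0))   # ~1.6e-3 W/m^2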

  13. Our calibrated model has poor predictive value: An example from the petroleum industry

    Energy Technology Data Exchange (ETDEWEB)

    Carter, J.N. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom)]. E-mail: j.n.carter@ic.ac.uk; Ballester, P.J. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom); Tavassoli, Z. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom); King, P.R. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom)

    2006-10-15

    It is often assumed that once a model has been calibrated to measurements then it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability then the assumption is that the model needs to be improved in some way. Using an example from the petroleum industry, we show that cases can exist where calibrated models have limited predictive capability. This occurs even when there is no modelling error present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability. We have been unable to find ways of identifying which calibrated models will have some predictive capacity and which will not.

  14. Our calibrated model has poor predictive value: An example from the petroleum industry

    International Nuclear Information System (INIS)

    Carter, J.N.; Ballester, P.J.; Tavassoli, Z.; King, P.R.

    2006-01-01

    It is often assumed that once a model has been calibrated to measurements then it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability then the assumption is that the model needs to be improved in some way. Using an example from the petroleum industry, we show that cases can exist where calibrated models have limited predictive capability. This occurs even when there is no modelling error present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability. We have been unable to find ways of identifying which calibrated models will have some predictive capacity and which will not

  15. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  16. Characterisation of polycrystal deformation by numerical modelling and neutron diffraction measurements

    International Nuclear Information System (INIS)

    Clausen, B.

    1997-09-01

    The deformation of polycrystals is modelled using three micromechanical models: the Taylor model, the Sachs model and Hutchinson's self-consistent (SC) model. The predictions of the rigid-plastic Taylor and Sachs models are compared with the predictions of the SC model. As expected, the results of the SC model lie about half-way between those of the upper- and lower-bound models. The influence of the elastic anisotropy is investigated by comparing the SC predictions for aluminium, copper and a hypothetical material (Hybrid) with the elastic anisotropy of copper and the Young's modulus and hardening behaviour of aluminium. It is concluded that the effect of the elastic anisotropy is limited to the very early stages of plasticity, as the deformation pattern is almost identical for the three materials at higher strains. The predictions of the three models are evaluated by neutron diffraction measurements of elastic lattice strains in grain sub-sets within the polycrystal. The two rigid-plastic models do not include any material parameters, and the predictions of the SC model are accordingly more accurate and more detailed than those of the Taylor and Sachs models. The SC model is used to determine the most suitable reflection for technological applications of neutron diffraction, where the focus is on the volume-average stress state in engineering components. To successfully convert the measured elastic lattice strains for a specific reflection into overall volume-average stresses, there must be a linear relation between the lattice strain of the reflection and the overall stress. According to the model predictions, the 311 reflection is the most suitable, as it shows the smallest deviations from linearity and thereby also the smallest build-up of residual strains. The model predictions have pinpointed that the selection of the reflection is crucial for the validity of stresses calculated from the measured elastic lattice strains. (au) 14 tabs., 41

  17. g-2 and α(M_Z²): Status of the Standard Model predictions

    International Nuclear Information System (INIS)

    Teubner, T.; Hagiwara, K.; Liao, R.; Martin, A.D.; Nomura, D.

    2012-01-01

    We review the status of the Standard Model prediction of the anomalous magnetic moment of the muon and of the electromagnetic coupling at the scale M_Z. Recent progress in the evaluation of the hadronic contributions has consolidated the prediction of both quantities. For g-2, the discrepancy between the measurement from BNL and the Standard Model prediction stands at a level of more than three standard deviations.

  18. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Science.gov (United States)

    King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin

    2011-01-01

    Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees from April 2003 to February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedges' g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for the development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in the prevention of alcohol misuse.
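
    The c-index reported for predictAL is equivalent to the area under the ROC curve of a binary risk model. The sketch below illustrates fitting and scoring such a model, assuming scikit-learn is available; the feature layout mirrors the predictors named in the abstract, but the data are synthetic placeholders, not study data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-ins for the predictAL predictors (not real study data):
X = np.column_stack([
    rng.integers(0, 2, n),        # sex
    rng.uniform(18, 75, n),       # age
    rng.integers(0, 7, n),        # country code
    rng.integers(0, 8, n),        # baseline AUDIT score (below threshold)
    rng.integers(0, 2, n),        # panic syndrome
    rng.integers(0, 2, n),        # lifetime alcohol problem
])
# Synthetic outcome: hazardous drinking at follow-up
logit = -3.0 + 0.4 * X[:, 3] + 0.8 * X[:, 5] + rng.normal(0, 1, n)
y = (logit > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
c_index = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"apparent c-index: {c_index:.3f}")  # optimistic; external validation needed
```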

  19. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Directory of Open Access Journals (Sweden)

    Michael King

    Full Text Available Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees from April 2003 to February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedges' g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for the development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in the prevention of alcohol misuse.

  20. Validation and uncertainty analysis of a pre-treatment 2D dose prediction model

    Science.gov (United States)

    Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank

    2018-02-01

    Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed good agreement, with on average 90.8% and 90.5% of pixels, respectively, passing a (2%, 2 mm) global gamma analysis with a low-dose threshold of 10%. The maximum and overall uncertainties of the model depend on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.
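
    A global gamma analysis such as the (2%, 2 mm) criterion above scores each reference dose point by its closest match in combined dose-difference/distance-to-agreement space. The sketch below is a simplified brute-force 2D version, assuming NumPy and placeholder dose grids; production implementations use interpolation and optimised search.

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm=1.0, dd=0.02, dta_mm=2.0, low_thresh=0.10):
    """Global gamma pass rate (brute force): dose difference is normalised
    to the global maximum; points below low_thresh * max are excluded."""
    d_max = ref.max()
    ys, xs = np.indices(ref.shape)
    passed, total = 0, 0
    for i, j in zip(*np.where(ref >= low_thresh * d_max)):
        dist2 = ((ys - i) ** 2 + (xs - j) ** 2) * spacing_mm ** 2
        dose2 = ((ev - ref[i, j]) / (dd * d_max)) ** 2
        gamma = np.sqrt(dist2 / dta_mm ** 2 + dose2).min()
        passed += gamma <= 1.0
        total += 1
    return passed / total

rng = np.random.default_rng(0)
ref = rng.random((40, 40)) * 2.0                # reference dose (placeholder)
ev = ref + rng.normal(0.0, 0.02, ref.shape)     # evaluated dose
print(f"gamma pass rate: {gamma_pass_rate(ref, ev):.1%}")
```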

  1. Mathematical Model for Prediction of Flexural Strength of Mound ...

    African Journals Online (AJOL)

    The mound soil-cement blended proportions were mathematically optimized using Scheffe's approach, and an optimization model was developed. A computer program predicting the mix proportions for the model was written. The optimal proportions given by the program were used to prepare beam samples measuring 150 mm × 150 mm ...

  2. Measurement and prediction of residual stress in a bead-on-plate weld benchmark specimen

    International Nuclear Information System (INIS)

    Ficquet, X.; Smith, D.J.; Truman, C.E.; Kingston, E.J.; Dennis, R.J.

    2009-01-01

    This paper presents measurements and predictions of the residual stresses generated by laying a single weld bead on a flat, austenitic stainless steel plate. The residual stress field that is created is strongly three-dimensional and is considered representative of that found in a repair weld. Through-thickness measurements are made using the deep-hole drilling technique, and near-surface measurements are made using incremental centre-hole drilling. Measurements are compared to predictions at the same locations made using finite element analysis incorporating an advanced, non-linear kinematic hardening model. The work was conducted as part of a European round-robin exercise, coordinated as part of the NeT network. Overall, there was broad agreement between measurements and predictions, but there were notable differences

  3. Prediction model for oxide thickness on aluminum alloy cladding during irradiation

    International Nuclear Information System (INIS)

    Kim, Yeon Soo; Hofman, G.L.; Hanan, N.A.; Snelgrove, J.L.

    2003-01-01

    An empirical model predicting the oxide film thickness on aluminum alloy cladding during irradiation has been developed as a function of irradiation time, temperature, heat flux, pH, and coolant flow rate. The existing models in the literature are neither consistent among themselves nor fit the measured data very well. They also lack versatility for various reactor situations, such as a pH other than 5, high coolant flow rates, and fuel lives longer than ∼1200 hrs. In particular, they were not intended for use in irradiation situations. The newly developed model is applicable to these in-reactor situations as well as to ex-reactor tests, and has a more accurate prediction capability. The new model gave predictions consistent with the measured data of the UMUS and SIMONE fuel tests performed in the HFR, Petten, with test results from the ORR and IRIS tests from the OSIRIS, and with the data from out-of-pile tests available in the literature. (author)

  4. Prediction of objectively measured physical activity and sedentariness among blue-collar workers using survey questionnaires

    DEFF Research Database (Denmark)

    Gupta, Nidhi; Heiden, Marina; Mathiassen, Svend Erik

    2016-01-01

    responded to a questionnaire containing information about personal and work-related variables, available in most large epidemiological studies and surveys. Workers also wore accelerometers for 1-4 days, measuring time spent sedentary and in physical activity, defined as non-sedentary time. Least-squares linear regression models were developed, predicting objectively measured exposures from selected predictors in the questionnaire. RESULTS: A full prediction model based on age, gender, body mass index, job group, self-reported occupational physical activity (OPA), and self-reported occupational sedentary...

  5. Comparison of ICRP Publication 30 lung model-based predictions with measured bioassay data for airborne natural UO2 exposure

    International Nuclear Information System (INIS)

    Thind, K.S.

    1987-01-01

    In this paper, a comparison is made between the build-up of U thorax burdens and the predicted total lung (lung and lymph) burden, based on the lung model provided in ICRP Publication 30, for a group of 29 atomic radiation workers at a Canadian fuel fabrication facility. A similar comparison is made between the predicted ratio of the total lung burden to urinary excretion and the ratio obtained from bioassay data. The study period for the comparison is 5 y. The inhalation input for the lung model calculations was derived from air-sampling data, and the choice of particle size activity median aerodynamic diameter (AMAD) was guided by particle size measurements made at representative work locations. The pulmonary clearance half-times studied were 100, 250 and 500 d. For the purpose of this comparison, averaged exposure and averaged bioassay data for the group were used. This comparison indicates that for the conditions of this facility, the assumption of a 500-d pulmonary clearance half-time and a particle size of 1 micron (AMAD) may be too conservative. It is suggested that measurements of air concentrations and particle size used as input parameters for the ICRP Publication 30 lung model may be used to calculate bioassay parameters, which may then be tested against bioassay data obtained as part of an operational health physics program, thereby providing a useful step towards defining a derived air concentration value for U in the workplace

  6. Prediction Model for Relativistic Electrons at Geostationary Orbit

    Science.gov (United States)

    Khazanov, George V.; Lyatsky, Wladislaw

    2008-01-01

    We developed a new prediction model for forecasting relativistic (greater than 2 MeV) electrons, which provides a very high correlation between predicted and actually measured electron fluxes at geostationary orbit. This model implies multi-step particle acceleration and is based on numerically integrating two linked continuity equations for primarily accelerated particles and relativistic electrons. The model includes a source and losses, and uses solar wind data as its only input parameters. As the source, we used a coupling function that is a best-fit combination of solar wind/interplanetary magnetic field parameters responsible for the generation of geomagnetic activity. The loss function was derived from experimental data. We tested the model for the four-year period 2004-2007. The correlation coefficient between predicted and actual values of the electron fluxes for the whole four-year period, as well as for each of these years, is stable and high (about 0.9). The high and stable correlation between the computed and actual electron fluxes shows that reliable forecasting of these electrons at geostationary orbit is possible.
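
    The two linked continuity equations mentioned above can be caricatured as a seed electron population, driven by a solar-wind coupling function, feeding a relativistic population, each with its own loss time. The sketch below is a minimal forward-Euler illustration; the coefficients and the driver are hypothetical, not those of the published model.

```python
import numpy as np

# Two linked continuity equations (hypothetical coefficients):
#   dN1/dt = S(t)   - N1 / tau1     (seed electrons, solar-wind driven source)
#   dN2/dt = a * N1 - N2 / tau2     (relativistic electrons)
dt, days = 0.1, 30.0
t = np.arange(0.0, days, dt)
tau1, tau2, a = 1.0, 3.0, 0.5                 # loss times (days), acceleration rate

source = 1.0 + np.sin(2 * np.pi * t / 27.0) ** 2   # synthetic coupling function

n1 = np.zeros_like(t)
n2 = np.zeros_like(t)
for k in range(1, t.size):                    # forward-Euler integration
    n1[k] = n1[k - 1] + dt * (source[k - 1] - n1[k - 1] / tau1)
    n2[k] = n2[k - 1] + dt * (a * n1[k - 1] - n2[k - 1] / tau2)

print(f"final seed proxy: {n1[-1]:.3f}, relativistic flux proxy: {n2[-1]:.3f}")
```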

  7. Evaluation of burst pressure prediction models for line pipes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xian-Kui, E-mail: zhux@battelle.org [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States); Leis, Brian N. [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States)

    2012-01-15

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487-492). It is found that these models can be categorized into either a Tresca-family or a von Mises-family of solutions, except for those due to Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions including Barlow, Maximum shear stress, Turner, and the ASME boiler code provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. - Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.

  8. Evaluation of burst pressure prediction models for line pipes

    International Nuclear Information System (INIS)

    Zhu, Xian-Kui; Leis, Brian N.

    2012-01-01

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487–492). It is found that these models can be categorized into either a Tresca-family or a von Mises-family of solutions, except for those due to Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions including Barlow, Maximum shear stress, Turner, and the ASME boiler code provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. - Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.
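
    For a defect-free, end-capped, thin-walled pipe, the Tresca and von Mises strength solutions bracket the burst pressure, and the Zhu-Leis (ZL) solution lies between them, its flow factor being the average of the Tresca (1/2) and von Mises (1/√3) factors. The sketch below illustrates these commonly quoted forms with a strain-hardening exponent n; treat it as an approximation of the published formulas, and the pipe parameters as placeholders rather than entries from the paper's test database.

```python
import math

def burst_pressure(d_outer_mm, t_mm, sigma_uts_mpa, n_hard):
    """Approximate burst-pressure solutions for a defect-free thin-walled pipe.
    Flow factors: Tresca 1/2, von Mises 1/sqrt(3), Zhu-Leis their average."""
    geo = 4.0 * t_mm / d_outer_mm                  # thin-wall geometry factor
    factors = {
        "Tresca":    0.5,
        "von Mises": 1.0 / math.sqrt(3.0),
        "Zhu-Leis":  0.5 * (0.5 + 1.0 / math.sqrt(3.0)),
    }
    return {name: geo * f ** (n_hard + 1.0) * sigma_uts_mpa
            for name, f in factors.items()}

# Placeholder X65-like line pipe (not from the paper's database):
for name, p in burst_pressure(508.0, 12.7, 565.0, 0.1).items():
    print(f"{name:9s}: {p:6.2f} MPa")
```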

  9. Three-model ensemble wind prediction in southern Italy

    Science.gov (United States)

    Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo

    2016-03-01

    Quality of wind prediction is of great importance since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30 %, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecast) for the surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
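
    The added value of a multi-model ensemble can be demonstrated by averaging bias-corrected member forecasts and comparing RMSEs. The sketch below uses synthetic wind-speed series standing in for the three member models; it illustrates the technique, not the TME implementation itself.

```python
import numpy as np

rng = np.random.default_rng(1)
obs = 5.0 + 2.0 * rng.standard_normal(500)      # observed wind-speed proxy

# Three synthetic member forecasts with different biases and error levels:
members = [obs + b + s * rng.standard_normal(obs.size)
           for b, s in ((0.8, 1.5), (-0.5, 1.8), (0.3, 2.0))]

def rmse(f, o):
    return float(np.sqrt(np.mean((f - o) ** 2)))

# Bias-correct each member on a training half, evaluate on the held-out half:
half = obs.size // 2
corrected = [m - np.mean(m[:half] - obs[:half]) for m in members]
ensemble = np.mean(corrected, axis=0)

for i, m in enumerate(corrected, 1):
    print(f"member {i} RMSE: {rmse(m[half:], obs[half:]):.3f}")
print(f"ensemble RMSE: {rmse(ensemble[half:], obs[half:]):.3f}")
```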

  10. Modelling noninvasively measured cerebral signals during a hypoxemia challenge: steps towards individualised modelling.

    Directory of Open Access Journals (Sweden)

    Beth Jelfs

    Full Text Available Noninvasive approaches to measuring cerebral circulation and metabolism are crucial to furthering our understanding of brain function. These approaches also have considerable potential for clinical use "at the bedside". However, a highly nontrivial task and precondition if such methods are to be used routinely is the robust physiological interpretation of the data. In this paper, we explore the ability of a previously developed model of brain circulation and metabolism to explain and predict quantitatively the responses of physiological signals. The five signals, all noninvasively measured during hypoxemia in healthy volunteers, include four signals measured using near-infrared spectroscopy (NIRS) along with middle cerebral artery blood flow measured using transcranial Doppler flowmetry. We show that optimising the model using partial data from an individual can increase its predictive power, thus aiding the interpretation of NIRS signals in individuals. At the same time, such optimisation can also help refine model parametrisation and provide confidence intervals on model parameters. Discrepancies between model and data which persist despite model optimisation are used to flag up important questions concerning the underlying physiology, and the reliability and physiological meaning of the signals.

  11. Predictive modeling of mosquito abundance and dengue transmission in Kenya

    Science.gov (United States)

    Caldwell, J.; Krystosik, A.; Mutuku, F.; Ndenga, B.; LaBeaud, D.; Mordecai, E.

    2017-12-01

    Approximately 390 million people are exposed to dengue virus every year, and with no widely available treatments or vaccines, predictive models of disease risk are valuable tools for vector control and disease prevention. The aim of this study was to modify and improve climate-driven predictive models of dengue vector abundance (Aedes spp. mosquitoes) and viral transmission to people in Kenya. We simulated disease transmission using a temperature-driven mechanistic model and compared model predictions with vector trap data for larvae, pupae, and adult mosquitoes collected between 2014 and 2017 at four sites across urban and rural villages in Kenya. We tested predictive capacity of our models using four temperature measurements (minimum, maximum, range, and anomalies) across daily, weekly, and monthly time scales. Our results indicate seasonal temperature variation is a key driving factor of Aedes mosquito abundance and disease transmission. These models can help vector control programs target specific locations and times when vectors are likely to be present, and can be modified for other Aedes-transmitted diseases and arboviral endemic regions around the world.

  12. Predictive model for disinfection by-product in Alexandria drinking water, northern west of Egypt.

    Science.gov (United States)

    Abdullah, Ali M; Hussona, Salah El-dien

    2013-10-01

    Chlorine has been utilized as a disinfectant in the early stages of water treatment processes. Disinfection of drinking water reduces the risk of pathogenic infection but may pose a chemical threat to human health due to disinfection residues and their by-products (DBP) when organic and inorganic precursors are present in the water. In the last two decades, many modeling attempts have been made to predict the occurrence of DBP in drinking water. Models have been developed based on data generated in laboratory-scale and field-scale investigations. The objective of this paper is to develop a predictive model for DBP formation in the Alexandria governorate, located in the northern west of Egypt, based on field-scale investigations as well as laboratory-controlled experiments. The present study showed that the correlation coefficient between predicted and measured trihalomethanes (THM) was R² = 0.88; the minimum, maximum, and average deviations between predicted and measured THM were 0.8%, 89.3%, and 17.8%, respectively. The correlation coefficient between predicted and measured dichloroacetic acid (DCAA) was R² = 0.98, with minimum, maximum, and average deviations of 1.3%, 47.2%, and 16.6%, respectively. In addition, the correlation coefficient between predicted and measured trichloroacetic acid (TCAA) was R² = 0.98, with minimum, maximum, and average deviations of 4.9%, 43.0%, and 16.0%, respectively.

  13. Probabilistic application of a fugacity model to predict triclosan fate during wastewater treatment.

    Science.gov (United States)

    Bock, Michael; Lyndall, Jennifer; Barber, Timothy; Fuchsman, Phyllis; Perruchon, Elyse; Capdevielle, Marie

    2010-07-01

    The fate and partitioning of the antimicrobial compound, triclosan, in wastewater treatment plants (WWTPs) is evaluated using a probabilistic fugacity model to predict the range of triclosan concentrations in effluent and secondary biosolids. The WWTP model predicts 84% to 92% triclosan removal, which is within the range of measured removal efficiencies (typically 70% to 98%). Triclosan is predominantly removed by sorption and subsequent settling of organic particulates during primary treatment and by aerobic biodegradation during secondary treatment. Median modeled removal efficiency due to sorption is 40% for all treatment phases and 31% in the primary treatment phase. Median modeled removal efficiency due to biodegradation is 48% for all treatment phases and 44% in the secondary treatment phase. Important factors contributing to variation in predicted triclosan concentrations in effluent and biosolids include influent concentrations, solids concentrations in settling tanks, and factors related to solids retention time. Measured triclosan concentrations in biosolids and non-United States (US) effluent are consistent with model predictions. However, median concentrations in US effluent are over-predicted with this model, suggesting that differences in some aspect of treatment practices not incorporated in the model (e.g., disinfection methods) may affect triclosan removal from effluent. Model applications include predicting changes in environmental loadings associated with new triclosan applications and supporting risk analyses for biosolids-amended land and effluent receiving waters. (c) 2010 SETAC.
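
    The probabilistic element of such a model amounts to propagating parameter distributions through a mass-balance removal calculation. The sketch below is a toy Monte Carlo illustration, assuming a lognormal influent concentration and beta-distributed removal fractions; the distributions are placeholders, not the paper's calibrated fugacity-model inputs.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

influent = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=n)   # ug/L, placeholder
f_sorption = rng.beta(8, 12, size=n)     # fraction removed by sorption/settling
f_biodeg = rng.beta(9, 10, size=n)       # fraction of the remainder biodegraded

effluent = influent * (1 - f_sorption) * (1 - f_biodeg)
removal = 1 - effluent / influent

lo, med, hi = np.percentile(removal, [5, 50, 95])
print(f"removal efficiency: median {med:.1%}, 90% interval [{lo:.1%}, {hi:.1%}]")
print(f"median effluent: {np.median(effluent):.2f} ug/L")
```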

  14. Statistical models for expert judgement and wear prediction

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1994-01-01

    This thesis studies the statistical analysis of expert judgements and the prediction of wear. The point of view adopted is that of information theory and Bayesian statistics. A general Bayesian framework for analyzing both expert judgements and wear prediction is presented. Information-theoretic interpretations are given for some averaging techniques used in the determination of consensus distributions. Further, information-theoretic models are compared with a Bayesian model. The general Bayesian framework is then applied in analyzing expert judgements based on ordinal comparisons. In this context, the value of the information lost in the ordinal comparison process is analyzed by applying decision-theoretic concepts. As a generalization of the Bayesian framework, stochastic filtering models for wear prediction are formulated. These models utilize the information from condition monitoring measurements in updating the residual life distribution of mechanical components. Finally, the application of stochastic control models in optimizing operational strategies for inspected components is studied. Monte Carlo simulation methods, such as the Gibbs sampler and the stochastic quasi-gradient method, are applied in the determination of posterior distributions and in the solution of stochastic optimization problems. (orig.) (57 refs., 7 figs., 1 tab.)

  15. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have brought new integrated operations and methods to all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care are likewise influenced by new technologies for predicting different disease outcomes. However, existing predictive models still suffer from some limitations in terms of predictive performance. In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate the model's performance, the paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impact on human life. The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed and approved by neurologists to set its features. The experimental results show that the proposed model achieved significant results in terms of accuracy, sensitivity, and specificity.
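
    Accuracy, sensitivity, and specificity, the metrics cited above, all derive from the confusion matrix of a binary classifier. The sketch below shows the computation, assuming scikit-learn; the labels are synthetic stand-ins for TBI outcomes, not the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, 200)                # synthetic outcomes
y_pred = np.where(rng.random(200) < 0.85,       # ~85% agreement with truth
                  y_true, 1 - y_true)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)    # recall on the positive class
specificity = tn / (tn + fp)    # recall on the negative class
print(f"accuracy={accuracy:.3f}  sensitivity={sensitivity:.3f}  "
      f"specificity={specificity:.3f}")
```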

  16. Bayesian Calibration, Validation and Uncertainty Quantification for Predictive Modelling of Tumour Growth: A Tutorial.

    Science.gov (United States)

    Collis, Joe; Connor, Anthony J; Paczkowski, Marcin; Kannan, Pavitra; Pitt-Francis, Joe; Byrne, Helen M; Hubbard, Matthew E

    2017-04-01

    In this work, we present a pedagogical tumour growth example, in which we apply calibration and validation techniques to an uncertain, Gompertzian model of tumour spheroid growth. The key contribution of this article is the discussion and application of these methods (that are not commonly employed in the field of cancer modelling) in the context of a simple model, whose deterministic analogue is widely known within the community. In the course of the example, we calibrate the model against experimental data that are subject to measurement errors, and then validate the resulting uncertain model predictions. We then analyse the sensitivity of the model predictions to the underlying measurement model. Finally, we propose an elementary learning approach for tuning a threshold parameter in the validation procedure in order to maximize predictive accuracy of our validated model.
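
    The deterministic analogue of the tumour model above is the Gompertz growth law dV/dt = a·V·ln(K/V), whose closed-form solution is V(t) = K·(V0/K)^exp(−at). The sketch below calibrates the parameters against noisy synthetic measurements by least squares, assuming SciPy; a full Bayesian calibration would instead place priors on the parameters and sample their posterior.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, v0, a, k):
    """Closed-form Gompertz volume: V(t) = K * (V0/K)**exp(-a*t)."""
    return k * (v0 / k) ** np.exp(-a * t)

rng = np.random.default_rng(3)
t_obs = np.linspace(0.0, 20.0, 15)
true_v = gompertz(t_obs, v0=0.5, a=0.3, k=10.0)
v_obs = true_v * (1 + 0.05 * rng.standard_normal(t_obs.size))  # 5% noise

popt, pcov = curve_fit(gompertz, t_obs, v_obs, p0=(0.4, 0.2, 8.0))
v0, a, k = popt
print(f"calibrated: V0={v0:.2f}, a={a:.3f}, K={k:.2f}")
print(f"approximate parameter SDs: {np.sqrt(np.diag(pcov)).round(3)}")
```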

  17. Coupling of EIT with computational lung modeling for predicting patient-specific ventilatory responses.

    Science.gov (United States)

    Roth, Christian J; Becher, Tobias; Frerichs, Inéz; Weiler, Norbert; Wall, Wolfgang A

    2017-04-01

    Providing optimal personalized mechanical ventilation for patients with acute or chronic respiratory failure remains a case-by-case challenge in the clinical setting. In this article, we integrate electrical impedance tomography (EIT) monitoring into a powerful patient-specific computational lung model to create an approach for personalizing protective ventilatory treatment. The underlying computational lung model is based on a single computed tomography scan and is able to predict global airflow quantities, as well as local tissue aeration and strains, for any ventilation maneuver. For validation, a novel "virtual EIT" module is added to our computational lung model, allowing us to simulate EIT images based on the patient's thorax geometry and the results of our numerically predicted tissue aeration. Clinically measured EIT images are not used to calibrate the computational model; they thus provide an independent method to validate the computational predictions at high temporal resolution. The performance of this coupling approach has been tested in an example patient with acute respiratory distress syndrome. The method shows good agreement between computationally predicted and clinically measured airflow data and EIT images. These results imply that the proposed framework can be used for numerical prediction of patient-specific responses to certain therapeutic measures before applying them to an actual patient. In the long run, the definition of patient-specific optimal ventilation protocols might be assisted by computational modeling. NEW & NOTEWORTHY In this work, we present a patient-specific computational lung model that is able to predict global and local ventilatory quantities for a given patient and any selected ventilation protocol. For the first time, such a predictive lung model is equipped with a virtual electrical impedance tomography module allowing real-time validation of the computed results with the patient measurements. First promising results

  18. Number of Clusters and the Quality of Hybrid Predictive Models in Analytical CRM

    Directory of Open Access Journals (Sweden)

    Łapczyński Mariusz

    2014-08-01

    Full Text Available Making more accurate marketing decisions requires managers to build effective predictive models. Typically, these models specify the probability of a customer belonging to a particular category, group or segment. The analytical CRM categories refer to customers interested in starting cooperation with the company (acquisition models), customers who purchase additional products (cross- and up-sell models) or customers intending to resign from the cooperation (churn models). When building predictive models, researchers use analytical tools from various disciplines with an emphasis on their best performance. This article attempts to build a hybrid predictive model combining decision trees (the C&RT algorithm) and cluster analysis (k-means). During the experiments, five different cluster validity indices and eight datasets were used. The performance of the models was evaluated using popular measures such as accuracy, precision, recall, G-mean, F-measure, and lift in the first and second deciles. The authors tried to find a connection between the number of clusters and model quality.
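
    A hybrid of the kind described first segments the data with k-means and then fits a separate decision tree within each cluster. The sketch below is a minimal version, assuming scikit-learn, with CART standing in for the C&RT algorithm and synthetic churn-style data in place of the study's datasets.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(5)
X = rng.standard_normal((2000, 6))              # synthetic customer features
y = (X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.5, 2000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

k = 3                                           # number of clusters
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_tr)
trees = {c: DecisionTreeClassifier(max_depth=4, random_state=0)
             .fit(X_tr[km.labels_ == c], y_tr[km.labels_ == c])
         for c in range(k)}

# Route each test point to its cluster's tree, then score the combined output:
labels_te = km.predict(X_te)
y_hat = np.concatenate([trees[c].predict(X_te[labels_te == c]) for c in range(k)])
y_ref = np.concatenate([y_te[labels_te == c] for c in range(k)])
print(f"hybrid F-measure: {f1_score(y_ref, y_hat):.3f}")
```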

  19. A two-parameter model to predict fracture in the transition

    International Nuclear Information System (INIS)

    DeAquino, C.T.; Landes, J.D.; McCabe, D.E.

    1995-01-01

    A model is proposed that uses a numerical characterization of the crack-tip stress field, modified by the J-Q constraint theory, and a weak-link assumption to predict fracture behavior in the transition region for reactor vessel steels. This model predicts the toughness scatter band for a component from a toughness scatter band measured on a test specimen geometry. The model has previously been applied to two-dimensional through cracks. Many applications to actual component structures involve three-dimensional surface flaws; these cases require a more difficult level of analysis and need additional information. In this paper, both the current model for two-dimensional cracks and an approach needed to extend the model to the prediction of transition fracture behavior for three-dimensional surface flaws are discussed. Examples are presented to show how the model can be applied and, in some cases, to compare with other test results. (author). 13 refs., 7 figs

  20. Test of 1-D transport models, and their predictions for ITER

    International Nuclear Information System (INIS)

    Mikkelsen, D.; Bateman, G.; Boucher, D.

    2001-01-01

    A number of proposed tokamak thermal transport models are tested by comparing their predictions with measurements from several tokamaks. The necessary data have been provided for a total of 75 discharges from C-mod, DIII-D, JET, JT-60U, T10, and TFTR. A standard prediction methodology has been developed, and three codes have been benchmarked; these 'standard' codes have been relied on for testing most of the transport models. While a wide range of physical transport processes has been tested, no single model has emerged as clearly superior to all competitors for simulating H-mode discharges. In order to winnow the field, further tests of the effect of sheared flows and of the 'stiffness' of transport are planned. Several of the models have been used to predict ITER performance, with widely varying results. With some transport models ITER's predicted fusion power depends strongly on the 'pedestal' temperature, but ∼ 1GW (Q=10) is predicted for most models if the pedestal temperature is at least 4 keV. (author)

  1. Tests of 1-D transport models, and their predictions for ITER

    International Nuclear Information System (INIS)

    Mikkelsen, D.R.; Bateman, G.; Boucher, D.

    1999-01-01

    A number of proposed tokamak thermal transport models are tested by comparing their predictions with measurements from several tokamaks. The necessary data have been provided for a total of 75 discharges from C-mod, DIII-D, JET, JT-60U, T10, and TFTR. A standard prediction methodology has been developed, and three codes have been benchmarked; these 'standard' codes have been relied on for testing most of the transport models. While a wide range of physical transport processes has been tested, no single model has emerged as clearly superior to all competitors for simulating H-mode discharges. In order to winnow the field, further tests of the effect of sheared flows and of the 'stiffness' of transport are planned. Several of the models have been used to predict ITER performance, with widely varying results. With some transport models ITER's predicted fusion power depends strongly on the 'pedestal' temperature, but ∼ 1GW (Q=10) is predicted for most models if the pedestal temperature is at least 4 keV. (author)

  2. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  3. Micrometeorological measurement of hexachlorobenzene and polychlorinated biphenyl compound air-water gas exchange in Lake Superior and comparison to model predictions

    Directory of Open Access Journals (Sweden)

    M. D. Rowe

    2012-05-01

    Full Text Available Air-water exchange fluxes of persistent, bioaccumulative and toxic (PBT) substances are frequently estimated using the Whitman two-film (W2F) method, but micrometeorological flux measurements of these compounds over water are rarely attempted. We measured air-water exchange fluxes of hexachlorobenzene (HCB) and polychlorinated biphenyls (PCBs) on 14 July 2006 in Lake Superior using the modified Bowen ratio (MBR) method. Measured fluxes were compared to estimates using the W2F method, and to estimates from an Internal Boundary Layer Transport and Exchange (IBLTE) model that implements the NOAA COARE bulk flux algorithm and gas transfer model. We reveal an inaccuracy in the estimate of water vapor transfer velocity that is commonly used with the W2F method for PBT flux estimation, and demonstrate the effect of using an improved estimation method. Flux measurements were conducted at three stations with increasing fetch in offshore flow (15, 30, and 60 km) in southeastern Lake Superior. This sampling strategy enabled comparison of measured and predicted flux, as well as of the modification in near-surface atmospheric concentration with fetch, using the IBLTE model. Fluxes estimated using the W2F model were compared to fluxes measured by MBR. In five of seven cases in which the MBR flux was significantly greater than zero, concentration increased with fetch at 1-m height, which is qualitatively consistent with the measured volatilization flux. As far as we are aware, these are the first reported ship-based micrometeorological air-water exchange flux measurements of PCBs.

  4. Two-dimensional NMR measurement and point dipole model prediction of paramagnetic shift tensors in solids

    Energy Technology Data Exchange (ETDEWEB)

    Walder, Brennan J.; Davis, Michael C.; Grandinetti, Philip J. [Department of Chemistry, Ohio State University, 100 West 18th Avenue, Columbus, Ohio 43210 (United States); Dey, Krishna K. [Department of Physics, Dr. H. S. Gour University, Sagar, Madhya Pradesh 470003 (India); Baltisberger, Jay H. [Division of Natural Science, Mathematics, and Nursing, Berea College, Berea, Kentucky 40403 (United States)

    2015-01-07

    A new two-dimensional Nuclear Magnetic Resonance (NMR) experiment to separate and correlate the first-order quadrupolar and chemical/paramagnetic shift interactions is described. This experiment, which we call the shifting-d echo experiment, allows a more precise determination of tensor principal component values and their relative orientation. It is designed using the recently introduced symmetry pathway concept. A comparison of the shifting-d experiment with earlier proposed methods is presented and experimentally illustrated in the case of the ²H (I = 1) paramagnetic shift and quadrupolar tensors of CuCl₂·2D₂O. The benefits of the shifting-d echo experiment over other methods are a factor of two improvement in sensitivity and the suppression of major artifacts. From the 2D lineshape analysis of the shifting-d spectrum, the ²H quadrupolar coupling parameters are ⟨C_q⟩ = 118.1 kHz and ⟨η_q⟩ = 0.88, and the ²H paramagnetic shift tensor anisotropy parameters are ⟨ζ_P⟩ = −152.5 ppm and ⟨η_P⟩ = 0.91. The orientation of the quadrupolar coupling principal axis system (PAS) relative to the paramagnetic shift anisotropy principal axis system is given by (α, β, γ) = (π/2, π/2, 0). Using a simple ligand-hopping model, the tensor parameters in the absence of exchange are estimated. On the basis of this analysis, the instantaneous principal components and orientation of the quadrupolar coupling are found to be in excellent agreement with previous measurements. A new point dipole model for predicting the paramagnetic shift tensor is proposed, yielding significantly better agreement than previously used models. In the new model, the dipoles are displaced from the nuclei at positions associated with high electron density in the singly occupied molecular orbital predicted from ligand field theory.

  5. Predicting fiber refractive index from a measured preform index profile

    Science.gov (United States)

    Kiiveri, P.; Koponen, J.; Harra, J.; Novotny, S.; Husu, H.; Ihalainen, H.; Kokki, T.; Aallos, V.; Kimmelma, O.; Paul, J.

    2018-02-01

    When producing fiber lasers and amplifiers, silica glass compositions consisting of three to six different materials are needed. Due to the varying needs of different applications, a substantial number of different glass compositions are used in active fiber structures. Often it is not possible to find material parameters for theoretical models to estimate the thermal and mechanical properties of those glass compositions. This makes it challenging to predict fiber core refractive index values accurately, even if the preform index profile is measured. Usually the desired fiber refractive index value is achieved experimentally, which is expensive. To overcome this problem, we statistically analyzed the changes between the measured preform and fiber index values. We searched for correlations that would help to predict the Δn-value change from preform to fiber in a situation where the values of the glass material parameters that define the change are unknown. Our index change models were built using data collected from preforms and fibers made by the Direct Nanoparticle Deposition (DND) technology.

  6. Measurement and prediction of global solar ultraviolet radiation (0.295-0.385 μm) under clear and cloudless skies

    International Nuclear Information System (INIS)

    Wright, Jaime

    2008-01-01

    Values of global solar ultraviolet radiation were measured with an ultraviolet radiometer and also predicted with an atmospheric spectral model. The values obtained with the physically based atmospheric spectral model were analyzed and compared with experimental values measured in situ. Measurements were performed for different zenith angles under clear-sky conditions in Heredia, Costa Rica. The necessary input data include latitude, altitude, surface albedo and Earth-Sun distance, as well as atmospheric characteristics: atmospheric turbidity, precipitable water and atmospheric ozone. The comparison between measured and predicted values was successful. (author) [es]

  7. Characterisation of polycrystal deformation by numerical modelling and neutron diffraction measurements

    DEFF Research Database (Denmark)

    Clausen, Bjørn

    of calculated and measured lattice strains are made for three different materials: aluminium, copper and austenitic stainless steel. The predictions of the self-consistent model are more accurate and detailed than the predictions of the Taylor and Sachs models, though some discrepancies are noted for some... to the Sachs model. The influence of the elastic anisotropy is investigated by comparing the self-consistent predictions for aluminium, copper and a hypothetical material (hybrid) with the elastic anisotropy of copper and the Young's modulus and work hardening behaviour of aluminium. It is concluded that the effect of the elastic anisotropy is limited to the very early stages of plasticity (εp ...), as the deformation pattern is almost identical for the three materials at higher strains. The predictions of the three models are evaluated by neutron diffraction measurements of elastic lattice strains...

  8. Thigh muscle volume predicted by anthropometric measurements and correlated with physical function in the older adults.

    Science.gov (United States)

    Chen, B B; Shih, T T F; Hsu, C Y; Yu, C W; Wei, S Y; Chen, C Y; Wu, C H; Chen, C Y

    2011-06-01

    (1) To correlate thigh muscle volume measured by magnetic resonance imaging (MRI) with anthropometric measurements and physical function in elderly subjects; (2) to predict MRI-measured thigh muscle volume using anthropometric measurements and physical functional status in elderly subjects. Cross-sectional, nonrandomized study. Outpatient clinic in Taiwan. Sixty-nine elderly subjects (33 men and 36 women) aged 65 and older. The anthropometric data (including body height, body weight, waist size, and thigh circumference), physical activity and function (including grip strength, bilateral quadriceps muscle power, the up-and-go test, chair rise, and five-meter walk time) and bioelectrical impedance analysis data (including total body fat mass, fat-free mass, and predicted muscle size) were measured. MRI-measured muscle volume of both thighs was used as the reference standard. The MRI-measured thigh volume was positively correlated with all anthropometric data, quadriceps muscle power and the up-and-go test, as well as fat-free mass and predicted muscle mass, whereas it was negatively associated with age and walk time. In predicting thigh muscle volume, the variables of age, gender, body weight, and thigh circumference were significant predictors in the linear regression model: muscle volume (cm³) = 4226.3 − 42.5 × age (years) − 955.7 × gender (male = 1, female = 2) + 45.9 × body weight (kg) + 60.0 × thigh circumference (cm) (r² = 0.745, P ..., standard error of the estimate = 581.6 cm³). The current work provides evidence of a strong relationship between thigh muscle volume and physical function in the elderly. We also developed a prediction equation model using anthropometric measurements. This model is a simple and noninvasive method for everyday clinical practice and follow-up.
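
    The regression equation above can be applied directly. The sketch below implements it as a function; the example inputs are illustrative, not subjects from the study.

```python
def predict_thigh_muscle_volume(age_years, gender, weight_kg, thigh_circ_cm):
    """Thigh muscle volume (cm^3) from the abstract's regression equation.
    gender: male = 1, female = 2."""
    return (4226.3
            - 42.5 * age_years
            - 955.7 * gender
            + 45.9 * weight_kg
            + 60.0 * thigh_circ_cm)

# Illustrative example (not a study subject); the model's SEE is ~582 cm^3:
vol = predict_thigh_muscle_volume(age_years=70, gender=1,
                                  weight_kg=68, thigh_circ_cm=48)
print(f"predicted thigh muscle volume: {vol:.0f} cm^3")
```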

  9. A deep auto-encoder model for gene expression prediction.

    Science.gov (United States)

    Xie, Rui; Wen, Jia; Quitadamo, Andrew; Cheng, Jianlin; Shi, Xinghua

    2017-11-17

    Gene expression is a key intermediate level through which genotypes lead to a particular trait. Gene expression is affected by various factors, including the genotypes of genetic variants. With the aim of delineating the genetic impact on gene expression, we build a deep auto-encoder model to assess how well genetic variants contribute to gene expression changes. This new deep learning model is a regression-based predictive model based on the MultiLayer Perceptron and Stacked Denoising Auto-encoder (MLP-SAE). The model is trained using a stacked denoising auto-encoder for feature selection and a multilayer perceptron framework for backpropagation. We further improve the model by introducing dropout to prevent overfitting and improve performance. To demonstrate the usage of this model, we apply MLP-SAE to a real genomic dataset with genotypes and gene expression profiles measured in yeast. Our results show that the MLP-SAE model with dropout outperforms other models, including Lasso, Random Forests and the MLP-SAE model without dropout. Using the MLP-SAE model with dropout, we show that gene expression quantifications predicted by the model solely from genotypes align well with true gene expression patterns. We provide a deep auto-encoder model for predicting gene expression from SNP genotypes. This study demonstrates that deep learning is appropriate for tackling another genomic problem, i.e., building predictive models to understand genotypes' contribution to gene expression. With the emerging availability of richer genomic data, we anticipate that deep learning models will play a bigger role in modeling and interpreting genomics data.

  10. Longitudinal modeling to predict vital capacity in amyotrophic lateral sclerosis.

    Science.gov (United States)

    Jahandideh, Samad; Taylor, Albert A; Beaulieu, Danielle; Keymer, Mike; Meng, Lisa; Bian, Amy; Atassi, Nazem; Andrews, Jinsy; Ennist, David L

    2018-05-01

    Death in amyotrophic lateral sclerosis (ALS) patients is related to respiratory failure, which is assessed in clinical settings by measuring vital capacity. We developed ALS-VC, a modeling tool for longitudinal prediction of vital capacity in ALS patients. A gradient boosting machine (GBM) model was trained using the PRO-ACT (Pooled Resource Open-access ALS Clinical Trials) database of over 10,000 ALS patient records. We hypothesized that a reliable vital capacity predictive model could be developed using PRO-ACT. The model was used to compare FVC predictions with a 30-day run-in period to predictions made from just baseline. The internal root mean square deviations (RMSD) of the run-in and baseline models were 0.534 and 0.539, respectively, across the 7L FVC range captured in PRO-ACT. The RMSDs of the run-in and baseline models using an unrelated, contemporary external validation dataset (0.553 and 0.538, respectively) were comparable to the internal validation. The model was shown to have similar accuracy for predicting SVC (RMSD = 0.562). The most important features for both run-in and baseline models were "Baseline forced vital capacity" and "Days since baseline." We developed ALS-VC, a GBM model trained with the PRO-ACT ALS dataset that provides vital capacity predictions generalizable to external datasets. The ALS-VC model could be helpful in advising and counseling patients, and, in clinical trials, it could be used to generate virtual control arms against which observed outcomes could be compared, or used to stratify patients into slowly, average, and rapidly progressing subgroups.
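
    The general shape of such a gradient-boosting approach can be sketched with the two dominant features named above (baseline FVC and days since baseline). The code below assumes scikit-learn and uses synthetic longitudinal records; it is an illustration of the technique, not the ALS-VC model itself.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n = 5000
baseline_fvc = rng.uniform(1.0, 5.5, n)         # litres
days = rng.uniform(0, 540, n)                   # days since baseline
slope = rng.normal(-0.0015, 0.0008, n)          # per-record decline, L/day
fvc = baseline_fvc + slope * days + rng.normal(0, 0.2, n)

X = np.column_stack([baseline_fvc, days])
X_tr, X_te, y_tr, y_te = train_test_split(X, fvc, random_state=0)

gbm = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                learning_rate=0.05, random_state=0)
gbm.fit(X_tr, y_tr)

rmsd = float(np.sqrt(np.mean((gbm.predict(X_te) - y_te) ** 2)))
print(f"validation RMSD: {rmsd:.3f} L")
print(f"feature importances: {gbm.feature_importances_.round(3)}")
```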

  11. A neighborhood statistics model for predicting stream pathogen indicator levels.

    Science.gov (United States)

    Pandey, Pramod K; Pasternack, Gregory B; Majumder, Mahbubul; Soupir, Michelle L; Kaiser, Mark S

    2015-03-01

    Because elevated levels of water-borne Escherichia coli in streams are a leading cause of water quality impairments in the U.S., water-quality managers need tools for predicting aqueous E. coli levels. Presently, E. coli levels may be predicted using complex mechanistic models that have a high degree of unchecked uncertainty or simpler statistical models. To assess spatio-temporal patterns of instream E. coli levels, herein we measured E. coli, a pathogen indicator, at 16 sites (at four different times) within the Squaw Creek watershed, Iowa, and subsequently, the Markov Random Field model was exploited to develop a neighborhood statistics model for predicting instream E. coli levels. Two observed covariates, local water temperature (degrees Celsius) and mean cross-sectional depth (meters), were used as inputs to the model. Predictions of E. coli levels in the water column were compared with independent observational data collected from 16 in-stream locations. The results revealed that spatio-temporal averages of predicted and observed E. coli levels were extremely close. Approximately 66 % of individual predicted E. coli concentrations were within a factor of 2 of the observed values. In only one event, the difference between prediction and observation was beyond one order of magnitude. The mean of all predicted values at 16 locations was approximately 1 % higher than the mean of the observed values. The approach presented here will be useful while assessing instream contaminations such as pathogen/pathogen indicator levels at the watershed scale.

  12. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions obtained from such models therefore do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities with reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally generate a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, thus creating a larger ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods: 2 weeks, 1 month, 3 months and 6 months, using a PIT histogram of the percentiles of the observed flow volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model and the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been
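
    A PIT (probability integral transform) histogram checks ensemble calibration: for each verification time, one records where the observation falls within the sorted ensemble; a flat histogram indicates a reliable ensemble, while a U-shape signals under-dispersion. The sketch below is a minimal illustration with a deliberately under-dispersive synthetic ensemble.

```python
import numpy as np

rng = np.random.default_rng(2)
n_times, n_members = 2000, 20

obs = rng.normal(0.0, 1.0, n_times)             # observed flow proxy
# Ensemble centred on a noisy forecast with too little spread (under-dispersive):
centre = obs + rng.normal(0.0, 0.8, n_times)
ens = centre[:, None] + rng.normal(0.0, 0.3, (n_times, n_members))

# PIT value: fraction of ensemble members below the observation
pit = (ens < obs[:, None]).sum(axis=1) / n_members

counts, _ = np.histogram(pit, bins=10, range=(0.0, 1.0))
print("PIT histogram counts per decile:", counts)   # U-shape => under-dispersion
```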

  13. A stepwise model to predict monthly streamflow

    Science.gov (United States)

    Mahmood Al-Juboori, Anas; Guven, Aytac

    2016-12-01

    In this study, a stepwise model empowered with genetic programming is developed to predict the monthly flows of the Hurman River in Turkey and the Diyalah and Lesser Zab Rivers in Iraq. The model divides the monthly flow data into twelve intervals representing the number of months in a year. The flow of a month t is considered a function of the antecedent month's flow (t - 1), and it is predicted by multiplying the antecedent monthly flow by a constant value called K. The optimum value of K is obtained by a stepwise procedure which employs Gene Expression Programming (GEP) and Nonlinear Generalized Reduced Gradient Optimization (NGRGO) as alternatives to the traditional nonlinear regression technique. The coefficient of determination and root mean squared error are used to evaluate the performance of the proposed models. The results of the proposed model are compared with the conventional Markovian and Auto Regressive Integrated Moving Average (ARIMA) models based on observed monthly flow data. The comparison, based on five different statistical measures, shows that the proposed stepwise model performed better than the Markovian and ARIMA models. The R2 values of the proposed model range between 0.81 and 0.92 for the three rivers in this study.
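
    The core of the stepwise scheme, predicting each month's flow as K times the antecedent month's flow with a separate K per calendar month, is easy to sketch. In the illustration below, a closed-form least-squares fit stands in for the GEP/NGRGO search used in the paper, and the flow record is synthetic:

```python
import numpy as np

def fit_monthly_K(flows, months):
    """Fit Q_t = K_m * Q_{t-1} separately for each calendar month m.
    Closed-form least squares stands in for the GEP/NGRGO search."""
    K = np.zeros(12)
    for m in range(12):
        t = np.where(months[1:] == m)[0] + 1     # months is 0-based (0 = Jan)
        prev, cur = flows[t - 1], flows[t]
        K[m] = (prev @ cur) / (prev @ prev)      # argmin_K sum (cur - K*prev)^2
    return K

def predict(flows, months, K):
    """One-step-ahead prediction from the antecedent month's flow."""
    return K[months[1:]] * flows[:-1]

# Synthetic 20-year monthly flow record (arbitrary units).
rng = np.random.default_rng(1)
months = np.arange(240) % 12
flows = 20 + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2, 240)

K = fit_monthly_K(flows, months)
pred, obs = predict(flows, months, K), flows[1:]
rmse = np.sqrt(np.mean((obs - pred) ** 2))
r2 = 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
print(f"RMSE = {rmse:.2f}, R^2 = {r2:.2f}")
```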

  14. Predicting field weed emergence with empirical models and soft computing techniques

    Science.gov (United States)

    Seedling emergence is the most important phenological process that influences the success of weed species; therefore, predicting weed emergence timing plays a critical role in scheduling weed management measures. Important efforts have been made in the attempt to develop models to predict seedling emergence…

  15. Evaluation of Deep Learning Models for Predicting CO2 Flux

    Science.gov (United States)

    Halem, M.; Nguyen, P.; Frankel, D.

    2017-12-01

    Artificial neural networks have been employed to calculate surface flux measurements from station data because they are able to fit highly nonlinear relations between input and output variables without knowing the detailed relationships between them. However, the accuracy of neural net estimates of CO2 flux from observations of CO2 and other atmospheric variables is influenced by the architecture of the neural model, the availability of data, and the complexity of interactions between physical variables such as wind and temperature and indirect variables such as latent and sensible heat. We evaluate two deep learning models, a feed-forward and a recurrent neural network, to learn how each responds to the physical measurements and to the time dependency of measurements of CO2 concentration, humidity, pressure, temperature, wind speed, etc., when predicting CO2 flux. In this paper, we focus on a) building neural network models for estimating CO2 flux based on DOE data from tower Atmospheric Radiation Measurement data; b) evaluating the impact of the choice of surface variables and model hyper-parameters on the accuracy of surface flux predictions; c) assessing the applicability of the neural network models to estimating CO2 flux from OCO-2 satellite data; d) studying the efficiency of GPU acceleration for neural network training using IBM Power AI deep learning software and packages on the IBM Minsky system.
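
    As an illustration of the feed-forward variant, the sketch below regresses a synthetic flux target on five hypothetical meteorological inputs with a small multilayer perceptron; a recurrent version would instead consume the measurements as a time-ordered sequence. All data, layer sizes, and effects here are invented:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Hypothetical station records: CO2 concentration, humidity, pressure,
# temperature, wind speed -> CO2 flux (all synthetic).
rng = np.random.default_rng(6)
X = rng.normal(size=(5000, 5))
flux = 0.6 * X[:, 0] - 0.3 * X[:, 4] + 0.2 * X[:, 1] * X[:, 3] \
       + rng.normal(0.0, 0.1, 5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, flux, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.2f}")
```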

  16. Personalized prediction of chronic wound healing: an exponential mixed effects model using stereophotogrammetric measurement.

    Science.gov (United States)

    Xu, Yifan; Sun, Jiayang; Carter, Rebecca R; Bogie, Kath M

    2014-05-01

    Stereophotogrammetric digital imaging enables rapid and accurate detailed 3D wound monitoring. This rich data source was used to develop a statistically validated model to provide personalized predictive healing information for chronic wounds. 147 valid wound images were obtained from a sample of 13 category III/IV pressure ulcers from 10 individuals with spinal cord injury. Statistical comparison of several models indicated that the best fit for the clinical data was a personalized mixed-effects exponential model (pMEE), with initial wound size and time as predictors and observed wound size as the response variable. Random effects capture personalized differences. Other models are only valid when wound size constantly decreases. This is often not achieved for clinical wounds; our model accommodates this reality. Two criteria to determine effective healing time outcomes are proposed: the r-fold wound size reduction time, t(r-fold), is defined as the time when wound size reduces to 1/r of the initial size, and t(δ) is defined as the time when the rate of wound size change reduces to a predetermined threshold δ. The current model improves with each additional evaluation. Routine assessment of wounds using detailed stereophotogrammetric imaging can provide personalized predictions of wound healing time. Application of a valid model will help the clinical team to determine wound management care pathways. Published by Elsevier Ltd.
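
    Under the simplest exponential form w(t) = w0·exp(-k·t) (the population part of a pMEE-style model; the abstract does not give the exact formula, so this form is an assumption), both healing-time criteria have closed-form solutions:

```python
import numpy as np

def t_r_fold(k, r):
    """Time for wound size to shrink to 1/r of its initial size under the
    assumed exponential form w(t) = w0 * exp(-k * t)."""
    return np.log(r) / k

def t_delta(k, w0, delta):
    """Time when the healing rate |dw/dt| = k*w0*exp(-k*t) drops to delta."""
    return np.log(k * w0 / delta) / k

# Hypothetical personalized rate constant k (1/week) and a 4 cm^2 wound:
k, w0 = 0.15, 4.0
print(t_r_fold(k, r=2))            # ~4.6 weeks to reach half the initial size
print(t_delta(k, w0, delta=0.1))   # weeks until healing slows to 0.1 cm^2/week
```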

  17. Model Predictive Control of a Nonlinear System with Known Scheduling Variable

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    Model predictive control (MPC) of a class of nonlinear systems is considered in this paper. We will use a Linear Parameter Varying (LPV) model of the nonlinear system. By taking advantage of having future values of the scheduling variable, we will simplify state prediction. Consequently...... the control problem of the nonlinear system is simplified into a quadratic programming problem. A wind turbine is chosen as the case study, and we choose wind speed as the scheduling variable. Wind speed is measurable ahead of the turbine; therefore, the scheduling variable is known for the entire prediction horizon....

  18. Validation of theoretical models through measured pavement response

    DEFF Research Database (Denmark)

    Ullidtz, Per

    1999-01-01

    mechanics was quite different from the measured stress, the peak theoretical value being only half of the measured value. On an instrumented pavement structure in the Danish Road Testing Machine, deflections were measured at the surface of the pavement under FWD loading. Different analytical models were...... then used to derive the elastic parameters of the pavement layers that would produce deflections matching the measured deflections. Stresses and strains were then calculated at the position of the gauges and compared to the measured values. It was found that all analytical models would predict the tensile...

  19. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    Science.gov (United States)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics, and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICT-CP), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter were investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated, and we found that models based on the PQ non-linearity performed better.

  20. Application of a predictive Bayesian model to environmental accounting.

    Science.gov (United States)

    Anex, R P; Englehardt, J D

    2001-03-30

    Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB-containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporating environmental risk management into a company's overall risk management strategy is discussed.

  1. Dispersion Modeling Using Ensemble Forecasts Compared to ETEX Measurements.

    Science.gov (United States)

    Straume, Anne Grete; N'dri Koffi, Ernest; Nodop, Katrin

    1998-11-01

    Numerous numerical models have been developed to predict long-range transport of hazardous air pollution in connection with accidental releases. When evaluating and improving such a model, it is important to detect uncertainties connected to the meteorological input data. A Lagrangian dispersion model, the Severe Nuclear Accident Program, is used here to investigate the effect of errors in the meteorological input data due to analysis errors. An ensemble forecast, produced at the European Centre for Medium-Range Weather Forecasts, is used as model input. The ensemble forecast members are generated by perturbing the initial meteorological fields of the weather forecast. The perturbations are calculated from singular vectors meant to represent possible forecast developments generated by instabilities in the atmospheric flow during the early part of the forecast. The instabilities are generated by errors in the analyzed fields. Puff predictions from the dispersion model, using ensemble forecast input, are compared, and a large spread in the predicted puff evolutions is found. This shows that the quality of the meteorological input data is important for the success of the dispersion model. In order to evaluate the dispersion model, the calculations are compared with measurements from the European Tracer Experiment. The model manages to predict the measured puff evolution, in terms of shape and time of arrival, to a fairly high extent, up to 60 h after the start of the release. The modeled puff is, however, still too narrow in the advection direction.

  2. APPRAISAL OF THE SNAP MODEL FOR PREDICTING NITROGEN MINERALIZATION IN TROPICAL SOILS UNDER EUCALYPTUS

    Directory of Open Access Journals (Sweden)

    Philip James Smethurst

    2015-04-01

    Full Text Available The Soil Nitrogen Availability Predictor (SNAP) model predicts daily and annual rates of net N mineralization (NNM) based on daily weather measurements, daily predictions of soil water and soil temperature, and on temperature and moisture modifiers obtained during aerobic incubation (basal rate). The model was based on in situ measurements of NNM in Australian soils under a temperate climate. The purpose of this study was to assess this model for use in tropical soils under eucalyptus plantations in São Paulo State, Brazil. NNM rates were measured in field incubations (one month in every three) at 11 sites (0-20 cm layer) for 21 months. The basal rate was determined from in situ incubations during moist and warm periods (January to March). Predicted annual NNM rates of 150-350 kg ha-1 yr-1 were reasonably accurate (R2 = 0.84). In other periods, at lower moisture and temperature, NNM rates were overestimated. Therefore, if used carefully, the model can provide adequate predictions of annual NNM and may be useful in practical applications. For NNM predictions over periods shorter than a year or under suboptimal incubation conditions, the temperature and moisture modifiers need to be recalibrated for tropical conditions.

  3. Gambling and the Reasoned Action Model: Predicting Past Behavior, Intentions, and Future Behavior.

    Science.gov (United States)

    Dahl, Ethan; Tagler, Michael J; Hohman, Zachary P

    2018-03-01

    Gambling is a serious concern for society because it is highly addictive and is associated with a myriad of negative outcomes. The current study applied the Reasoned Action Model (RAM) to understand and predict gambling intentions and behavior. Although prior studies have taken a reasoned action approach to understand gambling, no prior study has fully applied the RAM or used the RAM to predict future gambling. Across two studies the RAM was used to predict intentions to gamble, past gambling behavior, and future gambling behavior. In study 1 the model significantly predicted intentions and past behavior in both a college student and Amazon Mechanical Turk sample. In study 2 the model predicted future gambling behavior, measured 2 weeks after initial measurement of the RAM constructs. This study stands as the first to show the utility of the RAM in predicting future gambling behavior. Across both studies, attitudes and perceived normative pressure were the strongest predictors of intentions to gamble. These findings provide increased understanding of gambling and inform the development of gambling interventions based on the RAM.

  4. Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.

    Science.gov (United States)

    Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna

    2017-11-01

    A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant correlation between the two models' predictions of P loss; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations with both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  5. Prediction of insulin resistance with anthropometric measures: lessons from a large adolescent population

    Directory of Open Access Journals (Sweden)

    Wedin WK

    2012-07-01

    Full Text Available William K Wedin,1 Lizmer Diaz-Gimenez,1 Antonio J Convit1,2; 1Department of Psychiatry, NYU School of Medicine, New York, NY, USA; 2Nathan Kline Institute, Orangeburg, NY, USA. Objective: The aim of this study was to describe the minimum number of anthropometric measures that will optimally predict insulin resistance (IR) and to characterize the utility of these measures among obese and nonobese adolescents. Research design and methods: Six anthropometric measures (selected from three categories: central adiposity, weight, and body composition) were measured from 1298 adolescents attending two New York City public high schools. Body composition was determined by bioelectric impedance analysis (BIA). The homeostatic model assessment of IR (HOMA-IR), based on fasting glucose and insulin concentrations, was used to estimate IR. Stepwise linear regression analyses were performed to predict HOMA-IR based on the six selected measures, while controlling for age. Results: The stepwise regression retained both waist circumference (WC) and percentage of body fat (BF%). Notably, BMI was not retained. WC was a stronger predictor of HOMA-IR than BMI was. A regression model using solely WC performed best among the obese II group, while a model using solely BF% performed best among the lean group. Receiver operator characteristic curves showed the WC and BF% model to be more sensitive in detecting IR than BMI, but with less specificity. Conclusion: WC combined with BF% was the best predictor of HOMA-IR. This finding can be attributed partly to the ability of BF% to model HOMA-IR among leaner participants and to the ability of WC to model HOMA-IR among participants who are more obese. BMI was comparatively weak in predicting IR, suggesting that assessments that are more comprehensive and include body composition analysis could increase detection of IR during adolescence, especially among those who are lean, yet insulin-resistant. Keywords: BMI, bioelectrical impedance

  6. Using the area under the curve to reduce measurement error in predicting young adult blood pressure from childhood measures.

    Science.gov (United States)

    Cook, Nancy R; Rosner, Bernard A; Chen, Wei; Srinivasan, Sathanur R; Berenson, Gerald S

    2004-11-30

    Tracking correlations of blood pressure, particularly childhood measures, may be attenuated by within-person variability. Combining multiple measurements can reduce this error substantially. The area under the curve (AUC) computed from longitudinal growth curve models can be used to improve the prediction of young adult blood pressure from childhood measures. Quadratic random-effects models over unequally spaced repeated measures were used to compute the area under the curve separately within the age periods 5-14 and 20-34 years in the Bogalusa Heart Study. This method adjusts for the uneven age distribution and captures the underlying or average blood pressure, leading to improved estimates of correlation and risk prediction. Tracking correlations were computed by race and gender, and were approximately 0.6 for systolic, 0.5-0.6 for K4 diastolic, and 0.4-0.6 for K5 diastolic blood pressure. The AUC can also be used to regress young adult blood pressure on childhood blood pressure and childhood and young adult body mass index (BMI). In these data, while childhood blood pressure and young adult BMI were generally directly predictive of young adult blood pressure, childhood BMI was negatively correlated with young adult blood pressure when childhood blood pressure was in the model. In addition, racial differences in young adult blood pressure were reduced, but not eliminated, after controlling for childhood blood pressure, childhood BMI, and young adult BMI, suggesting that other genetic or lifestyle factors contribute to this difference. 2004 John Wiley & Sons, Ltd.
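
    To make the AUC construction concrete, the sketch below fits a per-person quadratic to unevenly spaced childhood visits and integrates it analytically over the age window; a simple least-squares fit stands in for the quadratic random-effects model, and all visit data are hypothetical:

```python
import numpy as np

def quadratic_auc(ages, bps, a_lo, a_hi):
    """Fit BP = b0 + b1*age + b2*age^2 to one person's visits and return the
    area under the fitted curve over [a_lo, a_hi] divided by the interval
    length, i.e. the model-based average blood pressure."""
    b2, b1, b0 = np.polyfit(ages, bps, deg=2)      # highest degree first
    F = lambda a: b0 * a + b1 * a**2 / 2 + b2 * a**3 / 3   # antiderivative
    return (F(a_hi) - F(a_lo)) / (a_hi - a_lo)

# Hypothetical childhood visits at uneven ages:
ages = np.array([5.2, 7.0, 9.5, 11.1, 13.8])
sbp = np.array([98.0, 101.0, 104.0, 108.0, 111.0])
print(quadratic_auc(ages, sbp, 5, 14))   # average childhood SBP, mmHg
```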

  7. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers’ training events, and they are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure obtained by eliminating some of the predictors.
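
    A LASSO on linear plus quadratic terms, scored by leave-one-out cross-validation, can be sketched in a few lines; the training-load matrix and race times below are synthetic, and the regularization strength is arbitrary rather than tuned as in the paper:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Synthetic data: 122 training plans x 5 load variables,
# target = 3 km race-walk time in seconds.
rng = np.random.default_rng(2)
X = rng.normal(size=(122, 5))
y = 760 + X @ np.array([-8, 5, 0, 3, 0]) + 2 * X[:, 0] ** 2 \
    + rng.normal(0, 4, 122)

# LASSO over linear + quadratic terms, mirroring the retained model form.
model = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),
    StandardScaler(),
    Lasso(alpha=0.1),
)
scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                         scoring="neg_mean_absolute_error")
print(f"LOO mean absolute error: {-scores.mean():.1f} s")
```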

  8. Modeling and prediction of retardance in citric acid coated ferrofluid using artificial neural network

    International Nuclear Information System (INIS)

    Lin, Jing-Fung; Sheu, Jer-Jia

    2016-01-01

    Citric acid coated (citrate-stabilized) magnetite (Fe3O4) magnetic nanoparticles have been prepared and applied in the biomedical field. Using Taguchi-based measured retardances as the training data, an artificial neural network (ANN) model was developed for the prediction of retardance in citric acid (CA) coated ferrofluid (FF). According to the ANN simulation results in the training stage, the correlation coefficient between predicted and measured retardances was found to be as high as 0.9999998. Based on the well-trained ANN model, the predicted retardance at the excellent (optimal) program from the Taguchi method showed a small error of 2.17% compared with a statistically significant multiple regression (MR) analysis. Meanwhile, parameter analysis at the excellent program by the ANN model provided guidance for finding a possible program for the maximum retardance. It was concluded that the proposed ANN model had high ability for the prediction of retardance in CA coated FF. - Highlights: • The feedforward ANN is applied for modeling of retardance in CA coated FFs. • The ANN can predict the retardance at the excellent program with error acceptable relative to MR. • The proposed ANN has high ability for the prediction of retardance.

  9. Evaluation of probabilistic flow predictions in sewer systems using grey box models and a skill score criterion

    DEFF Research Database (Denmark)

    Thordarson, Fannar Ørn; Breinholt, Anders; Møller, Jan Kloppenborg

    2012-01-01

    term and a diffusion term, respectively accounting for the deterministic and stochastic parts of the models. Furthermore, a distinction is made between the process noise and the observation noise. We compare five different model candidates’ predictive performances, which solely differ with respect...... to the diffusion term description, up to a 4 h prediction horizon by adopting the prediction performance measures reliability, sharpness and skill score to pinpoint the preferred model. The prediction performance of a model is reliable if the observed coverage of the prediction intervals corresponds to the nominal...... coverage of the prediction intervals, i.e. the bias between these coverages should ideally be zero. The sharpness is a measure of the distance between the lower and upper prediction limits, and the skill score criterion makes it possible to pinpoint the preferred model by taking into account both reliability...
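
    The two ingredients named above, coverage reliability and interval sharpness, are simple to compute once a model emits prediction intervals. The sketch below evaluates hypothetical 90% flow intervals; the skill score combining both criteria is omitted, and all data are synthetic:

```python
import numpy as np

def reliability_and_sharpness(lower, upper, obs, nominal=0.90):
    """Reliability: bias between observed and nominal interval coverage
    (ideally zero). Sharpness: mean width of the prediction intervals."""
    coverage = np.mean((obs >= lower) & (obs <= upper))
    return coverage - nominal, np.mean(upper - lower)

# Hypothetical 90% flow prediction intervals from one model candidate:
rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 10.0, size=1000)          # observed flows
pred = obs + rng.normal(0.0, 5.0, size=1000)   # point predictions
half_width = 1.645 * 5.0                       # 90% band for N(0, 5^2) errors
bias, sharp = reliability_and_sharpness(pred - half_width,
                                        pred + half_width, obs)
print(f"coverage bias: {bias:+.3f}, sharpness: {sharp:.1f}")
```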

  10. Predicting story goodness performance from cognitive measures following traumatic brain injury.

    Science.gov (United States)

    Lê, Karen; Coelho, Carl; Mozeiko, Jennifer; Krueger, Frank; Grafman, Jordan

    2012-05-01

    This study examined the prediction of performance on measures of the Story Goodness Index (SGI; Lê, Coelho, Mozeiko, & Grafman, 2011) from executive function (EF) and memory measures following traumatic brain injury (TBI). It was hypothesized that EF and memory measures would significantly predict SGI outcomes. One hundred sixty-seven individuals with TBI participated in the study. Story retellings were analyzed using the SGI protocol. Three cognitive measures--Delis-Kaplan Executive Function System (D-KEFS; Delis, Kaplan, & Kramer, 2001) Sorting Test, Wechsler Memory Scale--Third Edition (WMS-III; Wechsler, 1997) Working Memory Primary Index (WMI), and WMS-III Immediate Memory Primary Index (IMI)--were entered into a multiple linear regression model for each discourse measure. Two sets of regression analyses were performed, the first with the Sorting Test as the first predictor and the second with it as the last. The first set of regression analyses identified the Sorting Test and IMI as the only significant predictors of performance on measures of the SGI. The second set identified all measures as significant predictors when evaluating each step of the regression function. The cognitive variables predicted performance on the SGI measures, although there were differences in the amount of explained variance. The results (a) suggest that storytelling ability draws on a number of underlying skills and (b) underscore the importance of using discrete cognitive tasks rather than broad cognitive indices to investigate the cognitive substrates of discourse.

  11. A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments

    Directory of Open Access Journals (Sweden)

    Jing Mi

    2016-09-01

    Full Text Available Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal, and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model.

  13. [Application of predictive model to estimate concentrations of chemical substances in the work environment].

    Science.gov (United States)

    Kupczewska-Dobecka, Małgorzata; Czerczak, Sławomir; Jakubowski, Marek; Maciaszek, Piotr; Janasik, Beata

    2010-01-01

    Based on the Estimation and Assessment of Substance Exposure (EASE) predictive model implemented in the European Union System for the Evaluation of Substances (EUSES 2.1), exposure to three chosen organic solvents, toluene, ethyl acetate and acetone, was estimated and compared with the results of measurements in workplaces. Prior to validation, the EASE model was pretested using three exposure scenarios. The scenarios differed in the decision tree of pattern of use. Five substances were chosen for the test: 1,4-dioxane, methyl tert-butyl ether, diethylamine, 1,1,1-trichloroethane and bisphenol A. After testing the EASE model, the next step was validation, by estimating the exposure level and comparing it with the results of measurements in the workplace. We used the results of measurements of toluene, ethyl acetate and acetone concentrations in the work environment of a paint and lacquer factory, a shoe factory and a refinery. Three types of exposure scenarios, adaptable to the description of working conditions, were chosen to estimate inhalation exposure. Comparison of the calculated exposure to toluene, ethyl acetate and acetone with workplace measurements showed that the model predictions are comparable with the measurement results. Only for low concentration ranges were the measured concentrations higher than those predicted. EASE is a clear, consistent system, which can be successfully used as an additional component of inhalation exposure estimation. If measurement data are available, they should be preferred to values estimated from models. In addition to inhalation exposure estimation, the EASE model makes it possible not only to assess exposure-related risk but also to predict workers' dermal exposure.

  14. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Measurement and prediction of dabigatran etexilate mesylate Form II solubility in mono-solvents and mixed solvents

    International Nuclear Information System (INIS)

    Xiao, Yan; Wang, Jingkang; Wang, Ting; Ouyang, Jinbo; Huang, Xin; Hao, Hongxun; Bao, Ying; Fang, Wen; Yin, Qiuxiang

    2016-01-01

    Highlights: • Solubility of DEM Form II in mono-solvents and binary solvent mixtures was measured. • A regressed UNIFAC model was used to predict the solubility in solvent mixtures. • The experimental solubility data were correlated by different models. - Abstract: A UV spectrometric method was used to measure the solubility of dabigatran etexilate mesylate (DEM) Form II in five mono-solvents (methanol, ethanol, ethane-1,2-diol, DMF, DMAC) and in binary solvent mixtures of methanol and ethanol over the temperature range from 287.37 K to 323.39 K. The experimental solubility data in mono-solvents were correlated with the modified Apelblat equation, the van’t Hoff equation and the λh equation. The GSM model and the modified Jouyban-Acree model were employed to correlate the solubility data in mixed solvent systems, and a regressed UNIFAC model was used to predict the solubility of DEM Form II in the binary solvent mixtures. Results showed that the predicted data were consistent with the experimental data.
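
    Correlating mono-solvent solubility with the modified Apelblat equation, ln x = A + B/T + C·ln T, is a small curve-fitting exercise. The sketch below fits invented solubility data; the paper's measured values are not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def ln_x_apelblat(T, A, B, C):
    """Modified Apelblat equation: ln x = A + B/T + C*ln(T)."""
    return A + B / T + C * np.log(T)

# Invented mole-fraction solubilities over the studied temperature range:
T = np.array([288.2, 293.2, 298.2, 303.2, 308.2, 313.2, 318.2, 323.2])
x = np.array([1.1e-3, 1.4e-3, 1.8e-3, 2.3e-3, 2.9e-3, 3.7e-3, 4.6e-3, 5.8e-3])

params, _ = curve_fit(ln_x_apelblat, T, np.log(x))
A, B, C = params
x_fit = np.exp(ln_x_apelblat(T, A, B, C))
ard = np.mean(np.abs(x_fit - x) / x)        # average relative deviation
print(f"A={A:.2f}, B={B:.0f}, C={C:.3f}, ARD={100 * ard:.2f}%")
```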

  16. Predicting Forearm Physical Exposures During Computer Work Using Self-Reports, Software-Recorded Computer Usage Patterns, and Anthropometric and Workstation Measurements.

    Science.gov (United States)

    Huysmans, Maaike A; Eijckelhof, Belinda H W; Garza, Jennifer L Bruno; Coenen, Pieter; Blatter, Birgitte M; Johnson, Peter W; van Dieën, Jaap H; van der Beek, Allard J; Dennerlein, Jack T

    2017-12-15

    Alternative techniques to assess physical exposures, such as prediction models, could facilitate more efficient epidemiological assessments in future large cohort studies examining physical exposures in relation to work-related musculoskeletal symptoms. The aim of this study was to evaluate two types of models that predict arm-wrist-hand physical exposures (i.e. muscle activity, wrist postures and kinematics, and keyboard and mouse forces) during computer use, which only differed with respect to the candidate predicting variables: (i) a full set of predicting variables, including self-reported factors, software-recorded computer usage patterns, and worksite measurements of anthropometrics and workstation set-up (full models); and (ii) a practical set of predicting variables, only including the self-reported factors and software-recorded computer usage patterns, which are relatively easy to assess (practical models). Prediction models were built using data from a field study among 117 office workers who were symptom-free at the time of measurement. Arm-wrist-hand physical exposures were measured for approximately two hours while workers performed their own computer work. Each worker's anthropometry and workstation set-up were measured by an experimenter, computer usage patterns were recorded using software, and self-reported factors (including individual factors, job characteristics, computer work behaviours, psychosocial factors, workstation set-up characteristics, and leisure-time activities) were collected by an online questionnaire. We determined the predictive quality of the models in terms of R2 and root mean squared (RMS) values and exposure classification agreement to low-, medium-, and high-exposure categories (in the practical model only). The full models had R2 values that ranged from 0.16 to 0.80, whereas for the practical models values ranged from 0.05 to 0.43. Interquartile ranges were not that different for the two models, indicating that only for some

  17. Erratum: Probabilistic application of a fugacity model to predict triclosan fate during wastewater treatment.

    Science.gov (United States)

    Bock, Michael; Lyndall, Jennifer; Barber, Timothy; Fuchsman, Phyllis; Perruchon, Elyse; Capdevielle, Marie

    2010-10-01

    The fate and partitioning of the antimicrobial compound, triclosan, in wastewater treatment plants (WWTPs) is evaluated using a probabilistic fugacity model to predict the range of triclosan concentrations in effluent and secondary biosolids. The WWTP model predicts 84% to 92% triclosan removal, which is within the range of measured removal efficiencies (typically 70% to 98%). Triclosan is predominantly removed by sorption and subsequent settling of organic particulates during primary treatment and by aerobic biodegradation during secondary treatment. Median modeled removal efficiency due to sorption is 40% for all treatment phases and 31% in the primary treatment phase. Median modeled removal efficiency due to biodegradation is 48% for all treatment phases and 44% in the secondary treatment phase. Important factors contributing to variation in predicted triclosan concentrations in effluent and biosolids include influent concentrations, solids concentrations in settling tanks, and factors related to solids retention time. Measured triclosan concentrations in biosolids and non-United States (US) effluent are consistent with model predictions. However, median concentrations in US effluent are over-predicted with this model, suggesting that differences in some aspect of treatment practices not incorporated in the model (e.g., disinfection methods) may affect triclosan removal from effluent. Model applications include predicting changes in environmental loadings associated with new triclosan applications and supporting risk analyses for biosolids-amended land and effluent receiving waters. © 2010 SETAC.

  18. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of methods from nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics under different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
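
    The essence of such local models, embedding the scalar series in a delay-coordinate phase space, finding the nearest dynamical neighbors of the current state, and averaging their futures, fits in a short sketch. This zeroth-order version uses a synthetic signal and arbitrary embedding parameters, whereas the paper optimizes these by exhaustive search:

```python
import numpy as np

def embed(series, dim, tau):
    """Time-delay embedding: reconstruct the phase space of a scalar series."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

def local_model_predict(series, dim=3, tau=1, k=10, steps=1):
    """Adaptive local model: average the futures of the k dynamical
    neighbors of the current state (zeroth-order local prediction)."""
    X = embed(series, dim, tau)
    current, history = X[-1], X[:-steps]           # states with a known future
    d = np.linalg.norm(history - current, axis=1)
    nn = np.argsort(d)[:k]                         # nearest dynamical neighbors
    future_idx = nn + (dim - 1) * tau + steps      # map each state to its future
    return series[future_idx].mean()

# Synthetic "surge" record: a noisy deterministic signal.
t = np.linspace(0, 200, 4000)
rng = np.random.default_rng(4)
surge = np.sin(t) * np.sin(0.31 * t) + 0.02 * rng.normal(size=4000)
print(local_model_predict(surge, steps=5))         # 5-step-ahead prediction
```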

  19. Software Code Smell Prediction Model Using Shannon, Rényi and Tsallis Entropies

    Directory of Open Access Journals (Sweden)

    Aakanshi Gupta

    2018-05-01

    Full Text Available The current era demands high quality software in a limited time period to achieve new goals and heights. To meet user requirements, source code undergoes frequent modifications which can generate bad smells in software that deteriorate its quality and reliability. Source code of open source software is easily accessible by any developer, and is thus frequently modified. In this paper, we have proposed a mathematical model to predict bad smells using the concept of entropy as defined by information theory. The open-source software Apache Abdera is taken into consideration for calculating the bad smells. Bad smells are collected using a detection tool from subcomponents of the Apache Abdera project, along with different measures of entropy (Shannon, Rényi and Tsallis). By applying non-linear regression techniques, the bad smells that can arise in future versions of the software are predicted based on the observed bad smells and entropy measures. The proposed model has been validated using goodness-of-fit parameters (prediction error, bias, variation, and Root Mean Squared Prediction Error (RMSPE)). The values of model performance statistics (R2, adjusted R2, Mean Square Error (MSE) and standard error) also justify the proposed model. We have compared the results of the prediction model with the observed results on real data. The results of the model might be helpful for software development industries and future researchers.
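
    The three entropy families differ only in how they weight the probability mass; Rényi and Tsallis both reduce to Shannon as their parameters approach 1. A minimal sketch, computing each over a hypothetical distribution of code changes across subcomponents:

```python
import numpy as np

def shannon(p):
    """Shannon entropy, H = -sum p*log2(p)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def renyi(p, alpha):
    """Rényi entropy of order alpha (alpha != 1)."""
    return np.log2(np.sum(p ** alpha)) / (1 - alpha)

def tsallis(p, q):
    """Tsallis entropy with entropic index q (q != 1)."""
    return (1 - np.sum(p ** q)) / (q - 1)

# Hypothetical distribution of code changes across 8 subcomponents:
changes = np.array([40, 22, 13, 9, 7, 5, 3, 1], dtype=float)
p = changes / changes.sum()
print(shannon(p), renyi(p, alpha=2), tsallis(p, q=2))
```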

  20. Dynamic VaR Measurement of Gold Market with SV-T-MN Model

    Directory of Open Access Journals (Sweden)

    Fenglan Li

    2017-01-01

    Full Text Available VaR (Value at Risk) in the gold market was measured and predicted by combining a stochastic volatility (SV) model with extreme value theory. Firstly, to capture the fat tails and volatility persistence of the gold market return series, gold price return volatility was modeled by the SV-T-MN (SV-T with mixture-of-normal distribution) model based on a state space form. Secondly, out-of-sample volatility prediction was realized by using an approximate filtering algorithm. Finally, extreme value theory based on the generalized Pareto distribution was applied to measure the dynamic risk value (VaR) of gold market returns. An empirical analysis of gold prices using the proposed model shows that the combined model can measure and predict the Value at Risk of the gold market reasonably and effectively, enabling investors to better understand the extreme risk of the gold market and to take coping strategies actively.
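
    The extreme-value leg of this approach, fitting a generalized Pareto distribution to losses above a threshold and inverting the tail for a quantile, can be sketched as below. This static peaks-over-threshold version omits the SV-driven dynamics, and the "gold returns" are simulated fat-tailed data:

```python
import numpy as np
from scipy.stats import genpareto

def gpd_var(losses, threshold, alpha=0.99):
    """Peaks-over-threshold VaR: fit a generalized Pareto distribution to
    exceedances over `threshold` and invert the tail for the alpha-quantile."""
    exceed = losses[losses > threshold] - threshold
    xi, _, beta = genpareto.fit(exceed, floc=0)    # shape, loc, scale
    n, n_u = len(losses), len(exceed)
    # Standard POT formula: u + (beta/xi) * (((n/n_u)*(1-alpha))**(-xi) - 1)
    return threshold + beta / xi * ((n / n_u * (1 - alpha)) ** (-xi) - 1)

# Simulated daily gold-return losses (%, positive = loss), fat-tailed:
rng = np.random.default_rng(5)
losses = rng.standard_t(df=4, size=2500)
u = np.quantile(losses, 0.95)                      # threshold at 95th percentile
print(f"99% VaR: {gpd_var(losses, u):.2f}%")
```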

  1. A measurement-based method for predicting margins and uncertainties for unprotected accidents in the Integral Fast Reactor concept

    International Nuclear Information System (INIS)

    Vilim, R.B.

    1990-01-01

    A measurement-based method for predicting the response of an LMR core to unprotected accidents has been developed. The method processes plant measurements taken during normal operation to generate a stochastic model of the core dynamics. This model can be used to predict three-sigma confidence intervals for the core temperature and power response. Preliminary numerical simulations performed for EBR-2 appear promising. 6 refs., 2 figs

  2. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.
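
    A quick way to see sloppiness is to examine the eigenvalue spectrum of the Gauss-Newton Hessian J^T J of a fit; for the classic sum-of-exponentials test problem the eigenvalues spread over many decades. A small self-contained illustration (the model and data below are invented, not taken from the paper):

```python
import numpy as np

def jacobian(residual_fn, theta, eps=1e-7):
    """Forward-difference Jacobian of the residual vector w.r.t. parameters."""
    r0 = residual_fn(theta)
    J = np.empty((r0.size, theta.size))
    for j in range(theta.size):
        tp = theta.copy()
        tp[j] += eps
        J[:, j] = (residual_fn(tp) - r0) / eps
    return J

# Classic sloppy test problem: fitting a sum of decaying exponentials.
t = np.linspace(0.0, 5.0, 100)
rates = np.array([0.3, 1.0, 2.2, 4.0])
data = np.exp(-np.outer(t, rates)).sum(axis=1)

def residuals(theta):
    return np.exp(-np.outer(t, theta)).sum(axis=1) - data

J = jacobian(residuals, rates)
H = J.T @ J                      # Gauss-Newton approximation to the Hessian
print(np.linalg.eigvalsh(H))     # eigenvalues spanning many decades: sloppiness
```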

  3. A prediction model for the grade of liver fibrosis using magnetic resonance elastography.

    Science.gov (United States)

    Mitsuka, Yusuke; Midorikawa, Yutaka; Abe, Hayato; Matsumoto, Naoki; Moriyama, Mitsuhiko; Haradome, Hiroki; Sugitani, Masahiko; Tsuji, Shingo; Takayama, Tadatoshi

    2017-11-28

    Liver stiffness measurement (LSM) has recently become available for the assessment of liver fibrosis. We aimed to develop a prediction model for liver fibrosis using clinical variables, including LSM. We performed a prospective study to compare liver fibrosis grade with fibrosis score. LSM was measured using magnetic resonance elastography in 184 patients who underwent liver resection, and liver fibrosis grade was diagnosed histologically after surgery. Using the prediction model established in the training group, we validated the classification accuracy in an independent test group. First, we determined a cut-off value for stratifying fibrosis grade using LSM in 122 patients in the training group, and correctly diagnosed the fibrosis grades of 62 patients in the test group with a total accuracy of 69.3%. Next, on least absolute shrinkage and selection operator (LASSO) analysis in the training group, LSM (r = 0.687), ICGR15, and platelet count were selected for the prediction model. This prediction model applied to the test group correctly diagnosed 32 of 36 (88.8%) Grade I (F0 and F1) patients, 13 of 18 (72.2%) Grade II (F2 and F3) patients, and 7 of 8 (87.5%) Grade III (F4) patients, with a total accuracy of 83.8%. The prediction model based on LSM, ICGR15, and platelet count can accurately and reproducibly predict liver fibrosis grade.

  4. Can foot anthropometric measurements predict dynamic plantar surface contact area?

    Directory of Open Access Journals (Sweden)

    Collins Natalie

    2009-10-01

    Full Text Available Background: Previous studies have suggested that increased plantar surface area, associated with pes planus, is a risk factor for the development of lower extremity overuse injuries. The intent of this study was to determine if a single foot anthropometric measure, or a combination of such measures, could be used to predict plantar surface area. Methods: Six foot measurements were collected on 155 subjects (97 females, 58 males; mean age 24.5 ± 3.5 years). The measurements, as well as one ratio, were entered into a stepwise regression analysis to determine the optimal set of measurements associated with total plantar contact area, either including or excluding the toe region. The predicted values were used to calculate plantar surface area and were compared to the actual values obtained dynamically using a pressure sensor platform. Results: A three-variable model was found to describe the relationship between the foot measures/ratio and total plantar contact area including the toe region (R2 = 0.77), and a similar model described the contact area excluding the toe region (R2 = 0.76). Conclusion: The results of this study indicate that the clinician can use a combination of simple, reliable, and time-efficient foot anthropometric measurements to explain over 75% of the plantar surface contact area, either including or excluding the toe region.

  5. External validation of the Cairns Prediction Model (CPM) to predict conversion from laparoscopic to open cholecystectomy.

    Science.gov (United States)

    Hu, Alan Shiun Yew; O'Donohue, Peter; Gunnarsson, Ronny K; de Costa, Alan

    2018-03-14

    Valid and user-friendly prediction models for conversion to open cholecystectomy allow for proper planning prior to surgery. The Cairns Prediction Model (CPM) has been in use clinically at the original study site for the past three years, but has not been tested at other sites. A retrospective, single-centred study collected ultrasonic measurements and clinical variables, along with conversion status, from consecutive patients who underwent laparoscopic cholecystectomy from 2013 to 2016 in The Townsville Hospital, North Queensland, Australia. An area under the curve (AUC) was calculated to externally validate the CPM. Conversion was necessary in 43 (4.2%) of 1035 patients. External validation showed an area under the curve of 0.87 (95% CI 0.82-0.93, p = 1.1 × 10⁻¹⁴). In comparison with most previously published models, which have an AUC of approximately 0.80 or less, the CPM has the highest AUC of all published prediction models for both internal and external validation. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.

  6. Fourier and non-Fourier bio-heat transfer models to predict ex vivo temperature response to focused ultrasound heating

    Science.gov (United States)

    Li, Chenghai; Miao, Jiaming; Yang, Kexin; Guo, Xiasheng; Tu, Juan; Huang, Pintong; Zhang, Dong

    2018-05-01

    Although predicting temperature variation is important for designing treatment plans for thermal therapies, research in this area is yet to investigate the applicability of prevalent thermal conduction models, such as the Pennes equation, the thermal wave model of bio-heat transfer, and the dual phase lag (DPL) model. To address this shortcoming, we heated a tissue phantom and ex vivo bovine liver tissues with focused ultrasound (FU), measured the temperature response, and compared the results with those predicted by these models. The findings show that, for a homogeneous-tissue phantom, the initial temperature increase is accurately predicted by the Pennes equation at the onset of FU irradiation, although the prediction deviates from the measured temperature with increasing FU irradiation time. For heterogeneous liver tissues, the predicted response is closer to the measured temperature for the non-Fourier models, especially the DPL model. Furthermore, the DPL model accurately predicts the temperature response in biological tissues because it increases the phase lag, which characterizes microstructural thermal interactions. These findings should help to establish more precise clinical treatment plans for thermal therapies.
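
    As a feel for the baseline model, the sketch below integrates the 1-D Pennes equation, ρc·∂T/∂t = k·∂²T/∂x² + w_b·c_b·(T_a − T) + Q, with an explicit finite-difference scheme and a Gaussian heat source standing in for the FU focus. All tissue constants are textbook-order values, not the paper's, and the TWMBT and DPL variants would add relaxation (phase-lag) terms:

```python
import numpy as np

# Explicit finite differences for the 1-D Pennes bio-heat equation with a
# Gaussian focal source; boundaries held at body temperature each step.
nx, dx, dt = 101, 1e-3, 0.05                  # 10 cm domain, 50 ms time steps
k, rho, c = 0.5, 1050.0, 3600.0               # conductivity, density, heat cap.
wb, cb, Ta = 0.5, 3800.0, 37.0                # perfusion, blood heat cap., T_a
x = np.arange(nx) * dx
Q = 5e5 * np.exp(-((x - 0.05) / 0.004) ** 2)  # focal heating term, W/m^3
T = np.full(nx, 37.0)

for _ in range(int(60 / dt)):                 # 60 s of sonication
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    T += dt / (rho * c) * (k * lap + wb * cb * (Ta - T) + Q)
    T[0] = T[-1] = 37.0                       # fixed-temperature boundaries

print(f"peak focal temperature after 60 s: {T.max():.1f} C")
```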

  7. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  8. Chemical Thermodynamics of Aqueous Atmospheric Aerosols: Modeling and Microfluidic Measurements

    Science.gov (United States)

    Nandy, L.; Dutcher, C. S.

    2017-12-01

    Accurate prediction of the gas-liquid-solid equilibrium phase partitioning of atmospheric aerosols, by thermodynamic modeling and by measurement, is critical for determining particle composition and internal structure at conditions relevant to the atmosphere. Organic acids originating from biomass burning and direct biogenic emissions make up a significant fraction of the organic mass in atmospheric aerosol particles. In addition, inorganic compounds such as ammonium sulfate and sea salt also exist in atmospheric aerosols, resulting in mixtures of singly, doubly, or triply charged ions together with non-dissociated and partially dissociated organic acids. Statistical mechanics based on a multilayer adsorption isotherm model can be applied to these complex aqueous environments for the prediction of thermodynamic properties. In this work, analytic predictive thermodynamic models are developed for multicomponent aqueous solutions (consisting of partially dissociating organic and inorganic acids, fully dissociating symmetric and asymmetric electrolytes, and neutral organic compounds) over the entire relative humidity range, representing a significant advance towards a fully predictive model. The model is also developed at varied temperatures for those electrolytes and organic compounds for which data are available at different temperatures. In addition to the modeling approach, the water loss of multicomponent aerosol particles is measured in microfluidic experiments to parameterize and validate the model. In these measurements, atmospheric aerosol droplet chemical mimics (organic acids and secondary organic aerosol (SOA) samples) are generated in microfluidic channels, then stored and imaged in passive traps until dehydration, to study the influence of relative humidity and water loss on phase behavior.

  9. Biodynamic modelling and the prediction of accumulated trace metal concentrations in the polychaete Arenicola marina

    International Nuclear Information System (INIS)

    Casado-Martinez, M. Carmen; Smith, Brian D.; DelValls, T. Angel; Luoma, Samuel N.; Rainbow, Philip S.

    2009-01-01

    The use of biodynamic models to understand metal uptake directly from sediments by deposit-feeding organisms still represents a special challenge. In this study, accumulated concentrations of Cd, Zn and Ag predicted by biodynamic modelling in the lugworm Arenicola marina were compared to measured concentrations in field populations in several UK estuaries. The biodynamic model predicted accumulated field Cd concentrations remarkably accurately, and predicted bioaccumulated Ag concentrations were in the range of those measured in lugworms collected from the field. For Zn the model showed lower but still good agreement, accurately predicting Zn bioaccumulation in A. marina at high sediment concentrations but underestimating accumulated Zn in worms from sites with low and intermediate levels of Zn sediment contamination. It therefore appears that the physiological parameters experimentally derived for A. marina are applicable to the conditions encountered in these environments and that the assumptions made in the model are plausible. - Biodynamic modelling predicts accumulated field concentrations of Ag, Cd and Zn in the deposit-feeding polychaete Arenicola marina.

  10. Modeling and predicting low-speed vehicle emissions as a function of driving kinematics.

    Science.gov (United States)

    Hao, Lijun; Chen, Wei; Li, Lei; Tan, Jianwei; Wang, Xin; Yin, Hang; Ding, Yan; Ge, Yunshan

    2017-05-01

    An instantaneous emission model was developed to predict the real driving emissions of low-speed vehicles. The emission database used in the model was measured using a portable emission measurement system (PEMS) under actual traffic conditions in a rural area, and the characteristics of the emission data were determined in relation to the driving kinematics (speed and acceleration) of the low-speed vehicle. The input to the emission model is a driving cycle: the model requires instantaneous vehicle speed and acceleration as input variables and uses them to interpolate the pollutant emission rate maps, yielding transient pollutant emission rates that are accumulated to give the total emissions released over the whole driving cycle. Vehicle fuel consumption was determined through the carbon balance method. The model's predicted emissions and fuel consumption for an in-use low-speed vehicle agreed well with the measured data. Copyright © 2016. Published by Elsevier B.V.
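
    The lookup step, interpolating a speed-acceleration emission-rate map along a driving cycle and accumulating the result, can be sketched as follows; the map values, pollutant, and cycle here are all invented for illustration:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical PEMS-derived emission-rate map: rows = speed (km/h),
# columns = acceleration (m/s^2), values = CO emission rate (g/s).
speeds = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
accels = np.array([-1.0, 0.0, 1.0])
rate_map = np.array([[0.01, 0.02, 0.05],
                     [0.02, 0.03, 0.08],
                     [0.02, 0.04, 0.12],
                     [0.03, 0.05, 0.15],
                     [0.03, 0.06, 0.20]])
lookup = RegularGridInterpolator((speeds, accels), rate_map,
                                 bounds_error=False, fill_value=None)
# fill_value=None lets the interpolator extrapolate just outside the grid.

# Driving cycle: a 1 Hz speed trace -> acceleration -> summed emissions.
v = np.array([0, 5, 12, 18, 22, 25, 24, 20, 15, 8, 0], dtype=float)  # km/h
a = np.gradient(v / 3.6)                     # m/s^2 at 1 Hz sampling
rates = lookup(np.column_stack([v, a]))      # g/s at each second
print(f"cycle CO emissions: {rates.sum() * 1.0:.3f} g")   # 1 s per sample
```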

  11. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  12. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    It is extremely important to predict logistics requirements in a scientific and rational way. In recent years, however, improvements in prediction methods have been limited, and traditional statistical methods suffer from low precision and poor interpretability: they can neither theoretically guarantee the generalization ability of the prediction model nor explain the models effectively. Therefore, in combination with theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that generate large cargo volumes and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, the study establishes a logistics requirements potential model based on spatial economic principles, expanding logistics requirements prediction from purely statistical approaches to the new area of spatial and regional economics.

  13. Development of a prognostic model for predicting spontaneous singleton preterm birth.

    Science.gov (United States)

    Schaaf, Jelle M; Ravelli, Anita C J; Mol, Ben Willem J; Abu-Hanna, Ameen

    2012-10-01

    To develop and validate a prognostic model for prediction of spontaneous preterm birth. Prospective cohort study using data of the nationwide perinatal registry in The Netherlands. We studied 1,524,058 singleton pregnancies between 1999 and 2007. We developed a multiple logistic regression model to estimate the risk of spontaneous preterm birth based on maternal and pregnancy characteristics. We used bootstrapping techniques to internally validate our model. Discrimination (AUC), accuracy (Brier score) and calibration (calibration graphs and Hosmer-Lemeshow C-statistic) were used to assess the model's predictive performance. Our primary outcome measure was spontaneous preterm birth; the model included 13 variables for predicting preterm birth. The predicted probabilities ranged from 0.01 to 0.71 (IQR 0.02-0.04). The model had an area under the receiver operating characteristic curve (AUC) of 0.63 (95% CI 0.63-0.63), the Brier score was 0.04 (95% CI 0.04-0.04), and the Hosmer-Lemeshow C-statistic was significant. The positive predictive value was 26% (95% CI 20-33%) at the 0.4 predicted-probability cut-off point. The model's discrimination was fair and it had modest calibration. Previous preterm birth, drug abuse and vaginal bleeding in the first half of pregnancy were the most important predictors of spontaneous preterm birth. Although not yet applicable in clinical practice, this model is a next step towards early prediction of spontaneous preterm birth that enables caregivers to start preventive therapy in women at higher risk. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
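    The develop-and-internally-validate workflow (multiple logistic regression, bootstrap validation, AUC and Brier score) follows a standard pattern; a sketch on synthetic data, in which the 13 predictors and the outcome are placeholders rather than the registry variables:

    ```python
    # Sketch: logistic regression risk model with bootstrap internal validation.
    # Synthetic data; not the perinatal registry variables.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, brier_score_loss

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 13))                        # 13 candidate predictors
    y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 2))))  # rare-ish outcome

    model = LogisticRegression(max_iter=1000).fit(X, y)
    p = model.predict_proba(X)[:, 1]
    print("apparent AUC:", roc_auc_score(y, p),
          "Brier:", brier_score_loss(y, p))

    # Bootstrap optimism estimate: refit on resamples and compare performance
    # on the resample against performance on the original data.
    optimism = []
    for _ in range(50):
        idx = rng.integers(0, len(y), len(y))
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)
    print("optimism-corrected AUC:", roc_auc_score(y, p) - np.mean(optimism))
    ```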

  14. Measurements and predictions of strain pole figures for uniaxially compressed stainless steel

    International Nuclear Information System (INIS)

    Larsson, C.; Clausen, B.; Holden, T.M.; Bourke, M.A.M.

    2004-01-01

    Strain pole figures representative of residual intergranular strains were determined from a -2.98% uniaxially compressed austenitic stainless steel sample. The measurements were made using neutron diffraction on the recently commissioned Spectrometer for Materials Research at Temperature and Stress (SMARTS) at Los Alamos National Laboratory, USA. The measurements were compared with predictions from an elasto-plastic self-consistent model and found to be in good agreement.

  15. Measurements and predictions of strain pole figures for uniaxially compressed stainless steel

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, C. [Division of Engineering Materials, Department of Mechanical Engineering, Linkoeping University, 58183 Linkoeping (Sweden)]. E-mail: clarsson@cfl.rr.com; Clausen, B. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Holden, T.M. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Bourke, M.A.M. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

    2004-09-15

    Strain pole figures representative of residual intergranular strains were determined from a -2.98% uniaxially compressed austenitic stainless steel sample. The measurements were made using neutron diffraction on the recently commissioned Spectrometer for Materials Research at Temperature and Stress (SMARTS) at Los Alamos National Laboratory, USA. The measurements were compared with predictions from an elasto-plastic self-consistent model and found to be in good agreement.

  16. Noise model for serrated trailing edges compared to wind tunnel measurements

    DEFF Research Database (Denmark)

    Fischer, Andreas; Bertagnolio, Franck; Shen, Wen Zhong

    2016-01-01

    A new CFD RANS based method to predict the far field sound pressure emitted from an aerofoil with serrated trailing edge has been developed. The model was validated by comparison to measurements conducted in the Virginia Tech Stability Wind Tunnel. The model predicted 3 dB lower sound pressure levels, but the tendencies for the different configurations were predicted correctly. Therefore the model can be used to optimise the serration geometry. A disadvantage of the new model is that the computational costs are significantly higher than for the Amiet model for a straight trailing edge. However...

  17. Model-based prediction of myelosuppression and recovery based on frequent neutrophil monitoring.

    Science.gov (United States)

    Netterberg, Ida; Nielsen, Elisabet I; Friberg, Lena E; Karlsson, Mats O

    2017-08-01

    To investigate whether a more frequent monitoring of the absolute neutrophil counts (ANC) during myelosuppressive chemotherapy, together with model-based predictions, can improve therapy management, compared to the limited clinical monitoring typically applied today. Daily ANC in chemotherapy-treated cancer patients were simulated from a previously published population model describing docetaxel-induced myelosuppression. The simulated values were used to generate predictions of the individual ANC time-courses, given the myelosuppression model. The accuracy of the predicted ANC was evaluated under a range of conditions with reduced amount of ANC measurements. The predictions were most accurate when more data were available for generating the predictions and when making short forecasts. The inaccuracy of ANC predictions was highest around nadir, although a high sensitivity (≥90%) was demonstrated to forecast Grade 4 neutropenia before it occurred. The time for a patient to recover to baseline could be well forecasted 6 days (±1 day) before the typical value occurred on day 17. Daily monitoring of the ANC, together with model-based predictions, could improve anticancer drug treatment by identifying patients at risk for severe neutropenia and predicting when the next cycle could be initiated.

  18. Long-Term Deflection Prediction from Computer Vision-Measured Data History for High-Speed Railway Bridges

    Directory of Open Access Journals (Sweden)

    Jaebeom Lee

    2018-05-01

    Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor in guaranteeing traffic safety and passenger comfort. There have therefore been efforts to predict the vertical deflection of a railway bridge using physics-based models representing the various factors that influence it, such as concrete creep and shrinkage. However, this is not an easy task, because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge from actual vision-based measurements and temperature. To deal with the sources of uncertainty that may cause prediction errors, the Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean for the vertical deflection of the bridge. The proposed method is applied to an arch bridge in operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance.

  19. Long-Term Deflection Prediction from Computer Vision-Measured Data History for High-Speed Railway Bridges.

    Science.gov (United States)

    Lee, Jaebeom; Lee, Kyoung-Chan; Lee, Young-Joo

    2018-05-09

    Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor in guaranteeing traffic safety and passenger comfort. There have therefore been efforts to predict the vertical deflection of a railway bridge using physics-based models representing the various factors that influence it, such as concrete creep and shrinkage. However, this is not an easy task, because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge from actual vision-based measurements and temperature. To deal with the sources of uncertainty that may cause prediction errors, the Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean for the vertical deflection of the bridge. The proposed method is applied to an arch bridge in operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance.
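    A minimal sketch of the Gaussian-process construction described in the abstract: a sum of kernels (here an assumed trend + seasonal + noise decomposition, not necessarily the authors' choice) fitted to synthetic deflection data, returning a predictive mean and a 95% prediction interval:

    ```python
    # Sketch: Gaussian process regression with multiple kernels and a 95%
    # prediction interval. Kernel choice and data are illustrative assumptions.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

    t = np.linspace(0, 365, 200)[:, None]               # days in service
    y = 0.01 * t.ravel() + 2.0 * np.sin(2 * np.pi * t.ravel() / 365) \
        + np.random.default_rng(1).normal(0, 0.3, 200)  # mm, synthetic deflection

    # Long-term trend + seasonal (temperature-driven) component + noise.
    kernel = RBF(length_scale=100.0) \
        + ExpSineSquared(length_scale=1.0, periodicity=365.0) \
        + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

    t_new = np.linspace(365, 400, 36)[:, None]
    mean, std = gp.predict(t_new, return_std=True)
    lower, upper = mean - 1.96 * std, mean + 1.96 * std  # 95% interval
    ```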

  20. A new lifetime estimation model for a quicker LED reliability prediction

    Science.gov (United States)

    Hamon, B. H.; Mendizabal, L.; Feuillet, G.; Gasse, A.; Bataillou, B.

    2014-09-01

    LED reliability and lifetime prediction is a key point for solid state lighting adoption. For this purpose, one hundred and fifty LEDs were aged for a reliability analysis. The LEDs were grouped into nine current-temperature stress conditions, with stress driving currents between 350 mA and 1 A and ambient temperatures between 85 °C and 120 °C. Using integrating-sphere and I(V) measurements, a cross-study of the evolution of electrical and optical characteristics was carried out. The results show two main failure mechanisms regarding lumen maintenance. The first is the typically observed lumen depreciation; the second is a much quicker depreciation related to an increase in the leakage and non-radiative currents. Models of the typical lumen depreciation and of the leakage resistance depreciation were built using the electrical and optical measurements taken during the aging tests. The combination of these models enables a new method for quicker LED lifetime prediction, and the two models have been used for lifetime predictions for LEDs.
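    The "typical" lumen-depreciation mechanism is commonly modelled with an exponential decay, Φ(t) = B·exp(-αt), the form used in TM-21-style projections; the sketch below fits that assumed generic form to synthetic aging data and projects an L70 lifetime (it is not the authors' combined model):

    ```python
    # Sketch: fit an exponential lumen-maintenance model and project L70 life.
    # Synthetic data; the decay constants are illustrative assumptions.
    import numpy as np
    from scipy.optimize import curve_fit

    hours = np.linspace(0, 6000, 25)
    flux = 1.0 * np.exp(-2e-5 * hours) \
        + np.random.default_rng(2).normal(0, 0.004, 25)

    def lumen_maintenance(t, B, alpha):
        """Normalized luminous flux as exponential decay."""
        return B * np.exp(-alpha * t)

    (B, alpha), _ = curve_fit(lumen_maintenance, hours, flux, p0=(1.0, 1e-5))
    L70 = np.log(B / 0.70) / alpha  # time at which output falls to 70%
    print(f"projected L70 ~ {L70:.0f} h")
    ```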

  1. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  2. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The parameters most commonly used in attenuation relations are peak ground acceleration and spectral acceleration, because they provide the information needed for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios. However, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
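    For concreteness, a generic functional form often used as a template for such PGA/SA attenuation relations (an illustrative form, not the coefficients or exact parameterization fitted for the Caucasus) is:

    ```latex
    % Generic GMPE template (illustrative, not the fitted Caucasus model):
    \ln Y = c_1 + c_2\,M + c_3 \ln\!\sqrt{R^2 + h^2} + c_4\,S + \varepsilon
    % Y: PGA or SA; M: magnitude; R: source-to-site distance; h: pseudo-depth;
    % S: site-condition term; \varepsilon: inter-/intra-event residual.
    ```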

  3. Comparison of Model Prediction with Measurements of Galactic Background Noise at L-Band

    Science.gov (United States)

    LeVine, David M.; Abraham, Saji; Kerr, Yann H.; Wilson, Willam J.; Skou, Niels; Sobjaerg, S.

    2004-01-01

    The spectral window at L-band (1.413 GHz) is important for passive remote sensing of surface parameters such as soil moisture and sea surface salinity that are needed to understand the hydrological cycle and ocean circulation. Radiation from celestial (mostly galactic) sources is strong in this window, and an accurate accounting for this background radiation is often needed for calibration. Modern radio astronomy measurements in this spectral window have been converted into a brightness temperature map of the celestial sky at L-band suitable for use in correcting passive measurements. This paper presents a comparison of the background radiation predicted by this map with measurements made with several modern L-band remote sensing radiometers. The agreement validates the map and the procedure for locating the source of down-welling radiation.

  4. Validation of Energy Expenditure Prediction Models Using Real-Time Shoe-Based Motion Detectors.

    Science.gov (United States)

    Lin, Shih-Yun; Lai, Ying-Chih; Hsia, Chi-Chun; Su, Pei-Fang; Chang, Chih-Han

    2017-09-01

    This study aimed to verify and compare the accuracy of energy expenditure (EE) prediction models using shoe-based motion detectors with embedded accelerometers. Three physical activity (PA) datasets (unclassified, recognition, and intensity segmentation) were used to develop three prediction models. A multiple classification flow and these models were used to estimate EE. The "unclassified" dataset was defined as the data without PA recognition, the "recognition" dataset as the data classified with PA recognition, and the "intensity segmentation" dataset as the data with intensity segmentation. The three datasets contained accelerometer signals (quantified as signal magnitude area, SMA) and net heart rate (HRnet). The accuracy of the models was assessed from the deviation between physically measured EE and model-estimated EE. The variance between physically measured EE and model-estimated EE explained by simple linear regression increased by 63% and 13% when using SMA and HRnet, respectively. The accuracy of the EE predicted from accelerometer signals is influenced by the different activities exhibiting different count-EE relationships within the same prediction model. The recognition model provides a better estimation and lower variability of EE compared with the unclassified and intensity segmentation models. The proposed shoe-based motion detectors can improve the accuracy of EE estimation and have great potential for managing everyday exercise in real time.
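    The signal magnitude area used to quantify the accelerometer signals is conventionally defined as the time-normalized integral of the summed absolute accelerations over a window; a minimal sketch, assuming a three-axis signal at a fixed sampling rate:

    ```python
    # Sketch: signal magnitude area (SMA) of a 3-axis accelerometer window.
    import numpy as np

    def signal_magnitude_area(ax, ay, az, fs):
        """SMA over a window: time-normalized integral of |ax|+|ay|+|az|."""
        T = len(ax) / fs
        return np.trapz(np.abs(ax) + np.abs(ay) + np.abs(az), dx=1.0 / fs) / T

    rng = np.random.default_rng(3)
    print(signal_magnitude_area(rng.normal(size=256), rng.normal(size=256),
                                rng.normal(size=256), fs=32.0))
    ```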

  5. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    We propose a weather prediction model based on a neural network and fuzzy inference system (NFIS-WPM), and apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; the second is the "neural fuzzy inference system", which builds on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering its benefits. However, the excessive pursuit of accuracy in weather prediction makes some "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we obtain more accurate precipitation predictions with simpler methods than the complex numerical forecasting models, which occupy large computational resources, are time-consuming, and have low predictive accuracy. Accordingly, we achieve more accurate predictive precipitation results than with traditional artificial neural networks that have low predictive accuracy.

  6. A Long-Term Prediction Model of Beijing Haze Episodes Using Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Xiaoping Yang

    2016-01-01

    Rapid industrial development has led to intermittent outbreaks of PM2.5 pollution, or haze, in developing countries, bringing about serious environmental issues, especially in big cities such as Beijing and New Delhi. We investigated the factors and mechanisms of haze change and present a long-term prediction model of Beijing haze episodes using time series analysis. We construct a dynamic structural measurement model of the daily haze increment and reduce the model to a vector autoregressive model. Typical case studies on 886 continuous days indicate that our model performs very well on next-day Air Quality Index (AQI) prediction, and in severely polluted cases (AQI ≥ 300) the accuracy rate of AQI prediction reaches 87.8%. A one-week prediction experiment shows that our model has excellent sensitivity when a sudden haze burst or dissipation happens, resulting in good long-term stability in the accuracy of the next 3–7 days' AQI prediction.
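    A minimal sketch of fitting a vector autoregressive model to daily series and producing a one-week forecast, using statsmodels; the columns are placeholders for the AQI and its meteorological drivers, not the study's actual variables:

    ```python
    # Sketch: fit a VAR to daily series and forecast 7 days ahead.
    # Columns are placeholders for AQI and meteorological drivers.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(4)
    data = pd.DataFrame({
        "aqi": rng.normal(150, 60, 886),
        "wind": rng.normal(2, 1, 886),
        "humidity": rng.normal(50, 10, 886),
    })

    results = VAR(data).fit(7)  # fixed lag order of 7 days for illustration
    forecast = results.forecast(data.values[-results.k_ar:], steps=7)
    print(forecast[:, 0])       # next 7 days of the AQI column
    ```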

  7. Predictive Models of the Hydrological Regime of Unregulated Streams in Arizona

    Science.gov (United States)

    Anning, David W.; Parker, John T.C.

    2009-01-01

    , subsequent decisions were made according to the classification tree and explanatory variables to determine the hydrological regime of the reach as being perennial, nearly perennial, weakly perennial, or nonperennial. Using model calibration data, misclassification rates for each model were 17 percent for the Plateau Uplands, 15 percent for the Central Highlands, and 14 percent for the Basin and Range Lowlands models. The actual misclassification rate may be higher; however, the model has not been field verified for a full error assessment. The calibrated models were used to classify stream reaches for which the Arizona Department of Environmental Quality had collected miscellaneous discharge measurements. A total of 5,080 measurements at 696 sites were routed through the appropriate classification tree to predict the hydrological regime of the reaches in which the measurements were made. The predictions resulted in classification of all stream reaches as perennial or nonperennial; no reaches were predicted as nearly perennial or weakly perennial. The percentages of sites predicted as being perennial and nonperennial, respectively, were 77 and 23 for the Plateau Uplands, 87 and 13 for the Central Highlands, and 76 and 24 for the Basin and Range Lowlands.

  8. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies downward of 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  9. Measures of Microbial Biomass for Soil Carbon Decomposition Models

    Science.gov (United States)

    Mayes, M. A.; Dabbs, J.; Steinweg, J. M.; Schadt, C. W.; Kluber, L. A.; Wang, G.; Jagadamma, S.

    2014-12-01

    Explicit parameterization of the decomposition of plant inputs and soil organic matter by microbes is becoming more widely accepted in models of various complexity, ranging from detailed process models to global-scale earth system models. While there are multiple ways to measure microbial biomass, chloroform fumigation-extraction (CFE) is commonly used to parameterize models. However, CFE is labor- and time-intensive, requires toxic chemicals, and provides no specific information about the composition or function of the microbial community. We investigated correlations between measures of CFE, DNA extraction yield, qPCR gene copy numbers for Bacteria, Fungi and Archaea, phospholipid fatty acid analysis, and direct cell counts to determine their potential for use as proxies for microbial biomass. As our ultimate goal is to develop reliable, more informative, and faster methods to predict microbial biomass for use in models, we also examined basic soil physiochemical characteristics including texture, organic matter content, pH, etc., to identify multi-factor predictive correlations with one or more measures of the microbial community. Our work will have application both to microbial ecology studies and to the next generation of process and earth system models.

  10. A consensus approach for estimating the predictive accuracy of dynamic models in biology.

    Science.gov (United States)

    Villaverde, Alejandro F; Bongard, Sophia; Mauch, Klaus; Müller, Dirk; Balsa-Canto, Eva; Schmid, Joachim; Banga, Julio R

    2015-04-01

    Mathematical models that predict the complex dynamic behaviour of cellular networks are fundamental in systems biology, and provide an important basis for biomedical and biotechnological applications. However, obtaining reliable predictions from large-scale dynamic models is commonly a challenging task due to lack of identifiability. The present work addresses this challenge by presenting a methodology for obtaining high-confidence predictions from dynamic models using time-series data. First, to preserve the complex behaviour of the network while reducing the number of estimated parameters, model parameters are combined in sets of meta-parameters, which are obtained from correlations between biochemical reaction rates and between concentrations of the chemical species. Next, an ensemble of models with different parameterizations is constructed and calibrated. Finally, the ensemble is used for assessing the reliability of model predictions by defining a measure of convergence of model outputs (consensus) that is used as an indicator of confidence. We report results of computational tests carried out on a metabolic model of Chinese Hamster Ovary (CHO) cells, which are used for recombinant protein production. Using noisy simulated data, we find that the aggregated ensemble predictions are on average more accurate than the predictions of individual ensemble models. Furthermore, ensemble predictions with high consensus are statistically more accurate than ensemble predictions with large variance. The procedure provides quantitative estimates of the confidence in model predictions and enables the analysis of sufficiently complex networks as required for practical applications. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
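    In spirit, the consensus indicator can be sketched as follows: an ensemble of differently parameterized models is simulated, and low variance across ensemble outputs flags high-confidence predictions. The toy model below is a stand-in for a calibrated network simulation:

    ```python
    # Sketch: ensemble consensus as an indicator of prediction confidence.
    # The "model" is a stand-in; real ensembles come from repeated calibration.
    import numpy as np

    def model_output(params, t):
        # Toy dynamic output standing in for a calibrated network simulation.
        return params[0] * np.exp(-params[1] * t)

    rng = np.random.default_rng(5)
    t = np.linspace(0, 10, 50)
    ensemble = np.array([
        model_output(np.array([2.0, 0.3]) * rng.lognormal(0, 0.1, 2), t)
        for _ in range(100)
    ])

    mean_pred = ensemble.mean(axis=0)       # aggregated ensemble prediction
    consensus = 1.0 / ensemble.var(axis=0)  # high consensus = low spread
    # Time points where `consensus` is large are, statistically, the more
    # trustworthy predictions -- mirroring the paper's confidence indicator.
    ```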

  11. Identified state-space prediction model for aero-optical wavefronts

    Science.gov (United States)

    Faghihi, Azin; Tesch, Jonathan; Gibson, Steve

    2013-07-01

    A state-space disturbance model and associated prediction filter for aero-optical wavefronts are described. The model is computed by system identification from a sequence of wavefronts measured in an airborne laboratory. Estimates of the statistics and flow velocity of the wavefront data are shown and can be computed from the matrices in the state-space model without returning to the original data. Numerical results compare velocity values and power spectra computed from the identified state-space model with those computed from the aero-optical data.
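    A one-step-ahead prediction filter for an identified state-space disturbance model takes the familiar observer form sketched below; the matrices A, C and the gain L are illustrative placeholders for those produced by system identification:

    ```python
    # Sketch: one-step-ahead wavefront prediction from an identified
    # state-space model x[k+1] = A x[k] + w, y[k] = C x[k] + v.
    # A, C, L are placeholders for the identified matrices and observer gain.
    import numpy as np

    A = np.array([[0.95, 0.10], [0.0, 0.90]])
    C = np.array([[1.0, 0.0]])
    L = np.array([[0.5], [0.2]])  # estimator (Kalman-like) gain

    x_hat = np.zeros((2, 1))
    for y_k in np.sin(np.linspace(0, 6, 60)).reshape(-1, 1, 1):
        y_pred = C @ x_hat                      # predicted measurement
        x_hat = A @ x_hat + L @ (y_k - y_pred)  # predictor-corrector update
    # The next wavefront sample is then predicted as C @ A @ x_hat.
    ```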

  12. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    Science.gov (United States)

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.

  13. Model for prediction of strip temperature in hot strip steel mill

    International Nuclear Information System (INIS)

    Panjkovic, Vladimir

    2007-01-01

    Proper functioning of set-up models in a hot strip steel mill requires reliable prediction of strip temperature. Temperature prediction is particularly important for accurate calculation of rolling force because of the strong dependence of yield stress and strip microstructure on temperature. A comprehensive model was developed to replace an obsolete model in the Western Port hot strip mill of BlueScope Steel. The new model predicts the strip temperature evolution from the roughing mill exit to the finishing mill exit. It takes into account the radiative and convective heat losses, forced-flow boiling and film boiling of water at the strip surface, deformation heat in the roll gap, frictional sliding heat, the heat of scale formation, and the heat transfer between strip and work rolls through an oxide layer. The significance of phase transformation was also investigated. The model was tested against plant measurements and benchmarked against other models in the literature, and its performance was very good.

  14. Model for prediction of strip temperature in hot strip steel mill

    Energy Technology Data Exchange (ETDEWEB)

    Panjkovic, Vladimir [BlueScope Steel, TEOB, 1 Bayview Road, Hastings Vic. 3915 (Australia)]. E-mail: Vladimir.Panjkovic@BlueScopeSteel.com

    2007-10-15

    Proper functioning of set-up models in a hot strip steel mill requires reliable prediction of strip temperature. Temperature prediction is particularly important for accurate calculation of rolling force because of the strong dependence of yield stress and strip microstructure on temperature. A comprehensive model was developed to replace an obsolete model in the Western Port hot strip mill of BlueScope Steel. The new model predicts the strip temperature evolution from the roughing mill exit to the finishing mill exit. It takes into account the radiative and convective heat losses, forced-flow boiling and film boiling of water at the strip surface, deformation heat in the roll gap, frictional sliding heat, the heat of scale formation, and the heat transfer between strip and work rolls through an oxide layer. The significance of phase transformation was also investigated. The model was tested against plant measurements and benchmarked against other models in the literature, and its performance was very good.
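    The dominant loss terms in such a model can be sketched per unit of strip surface area as radiation plus convection; the emissivity, convection coefficient, and strip properties below are generic assumptions, not the mill-calibrated values:

    ```python
    # Sketch: radiative + convective heat loss of a hot strip surface and the
    # resulting temperature drop over a time step. Coefficients are generic.
    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

    def surface_heat_flux(T_strip, T_amb, emissivity=0.8, h_conv=30.0):
        """Heat flux (W/m^2) lost by radiation and convection."""
        radiative = emissivity * SIGMA * (T_strip**4 - T_amb**4)
        convective = h_conv * (T_strip - T_amb)
        return radiative + convective

    # Explicit temperature update for a thin strip (lumped through thickness).
    T, T_amb = 1273.0, 300.0                   # K
    rho, cp, thickness = 7600.0, 650.0, 0.03   # steel properties, SI units
    dt = 0.1                                   # s
    for _ in range(100):
        q = surface_heat_flux(T, T_amb)        # both faces lose heat
        T -= 2 * q * dt / (rho * cp * thickness)
    print(f"strip temperature after 10 s: {T:.0f} K")
    ```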

  15. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  16. Development of Models to Predict the Redox State of Nuclear Waste Containment Glass

    Energy Technology Data Exchange (ETDEWEB)

    Pinet, O.; Guirat, R.; Advocat, T. [Commissariat a l' Energie Atomique (CEA), Departement de Traitement et de Conditionnement des Dechets, Marcoule, BP 71171, 30207 Bagnols-sur-Ceze Cedex (France); Phalippou, J. [Universite de Montpellier II, Laboratoire des Colloides, Verres et Nanomateriaux, 34095 Montpellier Cedex 5 (France)

    2008-07-01

    Vitrification is one of the recommended immobilization routes for nuclear waste, and is currently implemented at industrial scale in several countries, notably for high-level waste. To optimize nuclear waste vitrification, research is conducted to specify suitable glass formulations and develop more effective processes. This research is based not only on experiments at laboratory or technological scale, but also on computer models. Vitrified nuclear waste often contains several multi-valent species whose oxidation state can impact the properties of the melt and of the final glass; these include iron, cerium, ruthenium, manganese, chromium and nickel. CEA is therefore also developing models to predict the final glass redox state. Given the raw materials and production conditions, the model predicts the oxygen fugacity at equilibrium in the melt. It can also estimate the ratios between the oxidation states of the multi-valent species contained in the molten glass. The oxidizing or reducing nature of the atmosphere above the glass melt is also taken into account. Unlike the models used in the conventional glass industry, which are based on empirical methods with a limited range of application, the proposed models are based on the thermodynamic properties of the redox species contained in the waste vitrification feed stream. The thermodynamic data on which the model is based concern the relationship between the glass redox state and the oxygen fugacity in the molten glass. The model predictions were compared with oxygen fugacity measurements for some fifty glasses, from experiments carried out at laboratory scale and at industrial scale with a cold crucible melter. The oxygen fugacity of the glass samples was measured by electrochemical methods and compared with the predicted values. The differences between predicted and measured oxygen fugacity were generally less than 0.5 log units. (authors)

  17. Finite element model for predicting residual stresses in shielded

    African Journals Online (AJOL)

    eobe

    This paper investigates the prediction of residual stresses developed ... steel plates through Finite Element Model simulation and experiments. ... The experimental values as measured by the X-Ray diffractometer were of ... Based on this, it can be concluded that Finite Element .... Comparison of Residual Stresses from X.

  18. Predictive modelling of Fe(III) precipitation in iron removal process for bioleaching circuits.

    Science.gov (United States)

    Nurmi, Pauliina; Ozkaya, Bestamin; Kaksonen, Anna H; Tuovinen, Olli H; Puhakka, Jaakko A

    2010-05-01

    In this study, the applicability of three modelling approaches was determined in an effort to describe complex relationships between process parameters and to predict the performance of an integrated process, which consisted of a fluidized bed bioreactor for Fe(III) regeneration and a gravity settler for precipitative iron removal. Self-organizing maps were used to visually evaluate the associations between variables prior to the comparison of two different modelling methods, multiple regression modelling and artificial neural network (ANN) modelling, for predicting Fe(III) precipitation. With the ANN model, an excellent match between the predicted and measured data was obtained (R² = 0.97). The best-fitting regression model also gave a good fit (R² = 0.87). This study demonstrates that ANNs and regression models are robust tools for predicting iron precipitation in the integrated process and can thus be used in the management of such systems.

  19. Prediction and repeatability of milk coagulation properties and curd-firming modeling parameters of ovine milk using Fourier-transform infrared spectroscopy and Bayesian models.

    Science.gov (United States)

    Ferragina, A; Cipolat-Gotet, C; Cecchinato, A; Pazzola, M; Dettori, M L; Vacca, G M; Bittante, G

    2017-05-01

    The aim of this study was to apply Bayesian models to the Fourier-transform infrared spectroscopy spectra of individual sheep milk samples to derive calibration equations to predict traditional and modeled milk coagulation properties (MCP), and to assess the repeatability of MCP measures and their predictions. Data consisted of 1,002 individual milk samples collected from Sarda ewes reared on 22 farms in the region of Sardinia (Italy), for which MCP and modeled curd-firming parameters were available. Two milk samples were taken from 87 ewes and analyzed with the aim of estimating repeatability, whereas a single sample was taken from the other 915 ewes; a total of 1,089 analyses were therefore performed. For each sample, 2 spectra in the infrared region 5,011 to 925 cm⁻¹ were available and averaged before data analysis. BayesB models were used to calibrate equations for each of the traits. Prediction accuracy was estimated for each trait and model using 20 replicates of a training-testing validation procedure. The repeatability of MCP measures and their predictions were also compared. The correlations between measured and predicted traits in the external validation were always higher than 0.5 (0.88 for rennet coagulation time). We confirmed that the most important element in achieving prediction accuracy is the repeatability of the gold-standard analyses used for building calibration equations. Repeatability measures of the predicted traits were generally high (≥95%), even for those traits with moderate analytical repeatability. Our results show that Bayesian models applied to Fourier-transform infrared spectra are powerful tools for cheap and rapid prediction of important traits in ovine milk and, compared with other methods, could help in the interpretation of results. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  20. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
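    The probabilistic resolution of the under-determined inverse problem rests on Bayes' theorem, which in the notation of a candidate source configuration s and measured radiation data d reads:

    ```latex
    % Bayes' theorem for the holdup inverse problem:
    p(s \mid d) \;=\; \frac{p(d \mid s)\,p(s)}{p(d)} \;\propto\; p(d \mid s)\,p(s)
    % p(d|s): likelihood of the measured radiation fields given a candidate
    % source configuration (from the forward transport model);
    % p(s): prior from initial information; p(s|d): plausibility of each
    % candidate holdup configuration given the measurements.
    ```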

  1. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together, our current work

  2. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together

  3. Sequence-based prediction of protein-binding sites in DNA: comparative study of two SVM models.

    Science.gov (United States)

    Park, Byungkyu; Im, Jinyong; Tuvshinjargal, Narankhuu; Lee, Wook; Han, Kyungsook

    2014-11-01

    As many structures of protein-DNA complexes have become known in the past years, several computational methods have been developed to predict DNA-binding sites in proteins. However, the inverse problem (i.e., predicting protein-binding sites in DNA) has received much less attention. One of the reasons is that the differences between the interaction propensities of nucleotides are much smaller than those between amino acids. Another reason is that DNA exhibits less diverse sequence patterns than protein. Therefore, predicting protein-binding DNA nucleotides is much harder than predicting DNA-binding amino acids. We computed the interaction propensity (IP) of nucleotide triplets with amino acids using an extensive dataset of protein-DNA complexes, and developed two support vector machine (SVM) models that predict protein-binding nucleotides from sequence data alone. One SVM model predicts protein-binding nucleotides using DNA sequence data alone, and the other SVM model predicts protein-binding nucleotides using both DNA and protein sequences. In a 10-fold cross-validation with 1519 DNA sequences, the SVM model that uses DNA sequence data only predicted protein-binding nucleotides with an accuracy of 67.0%, an F-measure of 67.1%, and a Matthews correlation coefficient (MCC) of 0.340. With an independent dataset of 181 DNAs that were not used in training, it achieved an accuracy of 66.2%, an F-measure of 66.3% and an MCC of 0.324. Another SVM model that uses both DNA and protein sequences achieved an accuracy of 69.6%, an F-measure of 69.6%, and an MCC of 0.383 in a 10-fold cross-validation with 1519 DNA sequences and 859 protein sequences. With an independent dataset of 181 DNAs and 143 proteins, it showed an accuracy of 67.3%, an F-measure of 66.5% and an MCC of 0.329. Both in cross-validation and independent testing, the second SVM model that used both DNA and protein sequence data showed better performance than the first model that used DNA sequence data. To the best of
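    The evaluation protocol described (an SVM scored by accuracy, F-measure and MCC under 10-fold cross-validation) follows a standard pattern; a sketch on synthetic features, which stand in for the actual triplet-propensity encodings:

    ```python
    # Sketch: SVM classifier scored by accuracy, F-measure and MCC under
    # 10-fold cross-validation. Features are synthetic stand-ins.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_validate

    rng = np.random.default_rng(6)
    X = rng.normal(size=(1519, 20))  # e.g. encoded sequence windows
    y = (X[:, 0] + 0.5 * rng.normal(size=1519) > 0).astype(int)

    scores = cross_validate(SVC(kernel="rbf", C=1.0), X, y, cv=10,
                            scoring=["accuracy", "f1", "matthews_corrcoef"])
    print({k: v.mean() for k, v in scores.items() if k.startswith("test_")})
    ```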

  4. Muon polarization in the MEG experiment: predictions and measurements

    International Nuclear Information System (INIS)

    Baldini, A.M.; Dussoni, S.; Galli, L.; Grassi, M.; Sergiampietri, F.; Signorelli, G.; Bao, Y.; Hildebrandt, M.; Kettle, P.R.; Mtchedlishvili, A.; Papa, A.; Ritt, S.; Baracchini, E.; Bemporad, C.; Cei, F.; D'Onofrio, A.; Nicolo, D.; Tenchini, F.; Berg, F.; Hodge, Z.; Rutar, G.; Biasotti, M.; Gatti, F.; Pizzigoni, G.; Boca, G.; De Bari, A.; Cattaneo, P.W.; Rossella, M.; Cavoto, G.; Piredda, G.; Renga, F.; Voena, C.; Chiarello, G.; Panareo, M.; Pepino, A.; Chiri, C.; Grancagnolo, F.; Tassielli, G.F.; De Gerone, M.; Fujii, Y.; Iwamoto, T.; Kaneko, D.; Mori, Toshinori; Nakaura, S.; Nishimura, M.; Ogawa, S.; Ootani, W.; Sawada, R.; Uchiyama, Y.; Yoshida, K.; Graziosi, A.; Ripiccini, E.; Grigoriev, D.N.; Haruyama, T.; Mihara, S.; Nishiguchi, H.; Yamamoto, A.; Ieki, K.; Ignatov, F.; Khazin, B.I.; Popov, A.; Yudin, Yu.V.; Kang, T.I.; Lim, G.M.A.; Molzon, W.; You, Z.; Khomutov, N.; Korenchenko, A.; Kravchuk, N.; Venturini, M.

    2016-01-01

    The MEG experiment makes use of one of the world's most intense low energy muon beams, in order to search for the lepton flavour violating process μ+ → e+γ. We determined the residual beam polarization at the thin stopping target, by measuring the asymmetry of the angular distribution of Michel decay positrons as a function of energy. The initial muon beam polarization at production is predicted to be Pμ = -1 by the Standard Model (SM) with massless neutrinos. We estimated our residual muon polarization to be Pμ = -0.86 ± 0.02 (stat) +0.05/-0.06 (syst) at the stopping target, which is consistent with the SM predictions when the depolarizing effects occurring during the muon production, propagation and moderation in the target are taken into account. The knowledge of beam polarization is of fundamental importance in order to model the background of our μ+ → e+γ search induced by the muon radiative decay: μ+ → e+ ν̄μ νe γ. (orig.)

  5. Predicting carcinogenicity of diverse chemicals using probabilistic neural network modeling approaches

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Kunwar P., E-mail: kpsingh_52@yahoo.com [Academy of Scientific and Innovative Research, Council of Scientific and Industrial Research, New Delhi (India); Environmental Chemistry Division, CSIR-Indian Institute of Toxicology Research, Post Box 80, Mahatma Gandhi Marg, Lucknow 226 001 (India); Gupta, Shikha; Rai, Premanjali [Academy of Scientific and Innovative Research, Council of Scientific and Industrial Research, New Delhi (India); Environmental Chemistry Division, CSIR-Indian Institute of Toxicology Research, Post Box 80, Mahatma Gandhi Marg, Lucknow 226 001 (India)

    2013-10-15

    Robust global models capable of discriminating positive and non-positive carcinogens and predicting the carcinogenic potency of chemicals in rodents were developed. The dataset of 834 structurally diverse chemicals extracted from the Carcinogenic Potency Database (CPDB) was used, containing 466 positive and 368 non-positive carcinogens. Twelve non-quantum-mechanical molecular descriptors were derived. Structural diversity of the chemicals and nonlinearity in the data were evaluated using the Tanimoto similarity index and the Brock–Dechert–Scheinkman statistic. Probabilistic neural network (PNN) and generalized regression neural network (GRNN) models were constructed for the classification and function optimization problems using the carcinogenicity end point in rat. Validation of the models was performed using internal and external procedures employing a wide series of statistical checks. The PNN constructed using five descriptors rendered a classification accuracy of 92.09% in the complete rat data, and classification accuracies of 91.77%, 80.70% and 92.08% in the mouse, hamster and pesticide data, respectively. The GRNN constructed with nine descriptors yielded a correlation coefficient of 0.896 between the measured and predicted carcinogenic potency, with a mean squared error (MSE) of 0.44, in the complete rat data. The rat carcinogenicity model (GRNN) applied to the mouse and hamster data yielded correlation coefficients and MSEs of 0.758, 0.71 and 0.760, 0.46, respectively. The results suggest wide applicability of the inter-species models in predicting the carcinogenic potency of chemicals. Both the PNN and GRNN (inter-species) models constructed here can be useful tools in predicting the carcinogenicity of new chemicals for regulatory purposes. - Graphical abstract: Figure (a) shows classification accuracies (positive and non-positive carcinogens) in rat, mouse, hamster, and pesticide data yielded by the optimal PNN model. Figure (b) shows generalization and predictive

  6. A new risk prediction model for critical care: the Intensive Care National Audit & Research Centre (ICNARC) model.

    Science.gov (United States)

    Harrison, David A; Parry, Gareth J; Carpenter, James R; Short, Alasdair; Rowan, Kathy

    2007-04-01

    To develop a new model to improve risk prediction for admissions to adult critical care units in the UK. Prospective cohort study. The setting was 163 adult, general critical care units in England, Wales, and Northern Ireland, December 1995 to August 2003. Patients were 216,626 critical care admissions. None. The performance of different approaches to modeling physiologic measurements was evaluated, and the best methods were selected to produce a new physiology score. This physiology score was combined with other information relating to the critical care admission-age, diagnostic category, source of admission, and cardiopulmonary resuscitation before admission-to develop a risk prediction model. Modeling interactions between diagnostic category and physiology score enabled the inclusion of groups of admissions that are frequently excluded from risk prediction models. The new model showed good discrimination (mean c index 0.870) and fit (mean Shapiro's R 0.665, mean Brier's score 0.132) in 200 repeated validation samples and performed well when compared with recalibrated versions of existing published risk prediction models in the cohort of patients eligible for all models. The hypothesis of perfect fit was rejected for all models, including the Intensive Care National Audit & Research Centre (ICNARC) model, as is to be expected in such a large cohort. The ICNARC model demonstrated better discrimination and overall fit than existing risk prediction models, even following recalibration of these models. We recommend it be used to replace previously published models for risk adjustment in the UK.

  7. Predicting sugar consumption: Application of an integrated dual-process, dual-phase model.

    Science.gov (United States)

    Hagger, Martin S; Trost, Nadine; Keech, Jacob J; Chan, Derwin K C; Hamilton, Kyra

    2017-09-01

    Excess consumption of added dietary sugars is related to multiple metabolic problems and adverse health conditions. Identifying the modifiable social cognitive and motivational constructs that predict sugar consumption is important to inform behavioral interventions aimed at reducing sugar intake. We tested the efficacy of an integrated dual-process, dual-phase model derived from multiple theories to predict sugar consumption. Using a prospective design, university students (N = 90) completed initial measures of the reflective (autonomous and controlled motivation, intentions, attitudes, subjective norm, perceived behavioral control), impulsive (implicit attitudes), volitional (action and coping planning), and behavioral (past sugar consumption) components of the proposed model. Self-reported sugar consumption was measured two weeks later. A structural equation model revealed that intentions, implicit attitudes, and, indirectly, autonomous motivation to reduce sugar consumption had small, significant effects on sugar consumption. Attitudes, subjective norm, and, indirectly, autonomous motivation to reduce sugar consumption predicted intentions. There were no effects of the planning constructs. Model effects were independent of the effects of past sugar consumption. The model identified the relative contribution of reflective and impulsive components in predicting sugar consumption. Given the prominent role of the impulsive component, interventions that assist individuals in managing cues-to-action and behavioral monitoring are likely to be effective in regulating sugar consumption. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Development and application of a statistical methodology to evaluate the predictive accuracy of building energy baseline models

    Energy Technology Data Exchange (ETDEWEB)

    Granderson, Jessica [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). Energy Technologies Area Div.; Price, Phillip N. [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). Energy Technologies Area Div.

    2014-03-01

    This paper documents the development and application of a general statistical methodology to assess the accuracy of baseline energy models, focusing on its application to Measurement and Verification (M&V) of whole-building energy savings. The methodology complements the principles addressed in resources such as ASHRAE Guideline 14 and the International Performance Measurement and Verification Protocol. It requires fitting a baseline model to data from a "training period" and using the model to predict total electricity consumption during a subsequent "prediction period." We illustrate the methodology by evaluating five baseline models using data from 29 buildings. The training period and prediction period were varied, and model predictions of daily, weekly, and monthly energy consumption were compared to meter data to determine model accuracy. Several metrics were used to characterize the accuracy of the predictions, and in some cases the best-performing model as judged by one metric was not the best performer when judged by another metric.
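    The train/predict workflow described here is straightforward to express in code. The sketch below fits a hypothetical baseline model over a training period, predicts a subsequent prediction period, and scores the predictions with two metrics discussed in ASHRAE Guideline 14 (NMBE and CV(RMSE)); the cooling-degree model form and all data are illustrative assumptions, not the paper's models.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical daily data: outdoor temperature and whole-building electricity.
rng = np.random.default_rng(0)
days = pd.date_range("2013-01-01", periods=730, freq="D")
temp = 15 + 10 * np.sin(2 * np.pi * days.dayofyear / 365) + rng.normal(0, 2, 730)
kwh = 500 + 12 * np.maximum(temp - 18, 0) + rng.normal(0, 30, 730)
df = pd.DataFrame({"temp": temp, "kwh": kwh}, index=days)

# Fit a simple cooling-degree baseline to the training period, then predict
# consumption over the subsequent prediction period.
train, test = df.iloc[:365], df.iloc[365:]
X_tr = np.maximum(train[["temp"]] - 18, 0)
X_te = np.maximum(test[["temp"]] - 18, 0)
model = LinearRegression().fit(X_tr, train["kwh"])
pred = model.predict(X_te)

resid = test["kwh"] - pred
nmbe = resid.mean() / test["kwh"].mean() * 100                   # bias, %
cv_rmse = np.sqrt((resid**2).mean()) / test["kwh"].mean() * 100  # scatter, %
print(f"NMBE = {nmbe:.2f}%  CV(RMSE) = {cv_rmse:.2f}%")
```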

  9. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both the MJO amplitude and phase errors, the latter becoming more important with forecast lead time. Consistent with previous studies, MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with smaller biases in horizontal moisture gradient and longwave cloud-radiation feedbacks show higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
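    The skill threshold quoted here is the bivariate anomaly correlation between observed and forecast RMM indices falling to 0.5. A minimal sketch of that metric in its standard form, on hypothetical RMM1/RMM2 series:

```python
import numpy as np

def bivariate_correlation(a1, a2, b1, b2):
    """Bivariate correlation between observed (a1, a2) and forecast (b1, b2)
    RMM1/RMM2 index pairs, accumulated over all verification times."""
    num = np.sum(a1 * b1 + a2 * b2)
    den = np.sqrt(np.sum(a1**2 + a2**2)) * np.sqrt(np.sum(b1**2 + b2**2))
    return num / den

# Hypothetical observed and forecast RMM indices for one lead time.
rng = np.random.default_rng(1)
obs1, obs2 = rng.normal(size=90), rng.normal(size=90)
fc1 = obs1 + rng.normal(scale=0.5, size=90)
fc2 = obs2 + rng.normal(scale=0.5, size=90)

print(f"COR = {bivariate_correlation(obs1, obs2, fc1, fc2):.2f}")
# A model's MJO prediction skill is the longest lead time at which COR >= 0.5.
```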

  10. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify the data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). We then conduct Monte Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that uncertainty in HETT is relatively small for early times and increases with transit time; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model

  11. New Models for Predicting Diameter at Breast Height from Stump Dimensions

    Science.gov (United States)

    James A. Westfall

    2010-01-01

    Models to predict dbh from stump dimensions are presented for 18 species groups. Data used to fit the models were collected across thirteen states in the northeastern United States. Primarily because of the presence of multiple measurements from each tree, a mixed-effects modeling approach was used to account for the lack of independence among observations. The...

  12. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  13. Statistical Models for Predicting Threat Detection From Human Behavior

    Science.gov (United States)

    Kelley, Timothy; Amon, Mary J.; Bertenthal, Bennett I.

    2018-01-01

    Users must regularly distinguish between secure and insecure cyber platforms in order to preserve their privacy and safety. Mouse tracking is an accessible, high-resolution measure that can be leveraged to understand the dynamics of perception, categorization, and decision-making in threat detection. Researchers have begun to utilize measures like mouse tracking in cyber security research, including in the study of risky online behavior. However, it remains an empirical question to what extent real-time information about user behavior is predictive of user outcomes and demonstrates added value compared to traditional self-report questionnaires. Participants navigated through six simulated websites, which resembled either secure “non-spoof” or insecure “spoof” versions of popular websites. Websites also varied in terms of authentication level (i.e., extended validation, standard validation, or partial encryption). Spoof websites had modified Uniform Resource Locator (URL) and authentication level. Participants chose to “login” to or “back” out of each website based on perceived website security. Mouse tracking information was recorded throughout the task, along with task performance. After completing the website identification task, participants completed a questionnaire assessing their security knowledge and degree of familiarity with the websites simulated during the experiment. Despite being primed to the possibility of website phishing attacks, participants generally showed a bias for logging in to websites versus backing out of potentially dangerous sites. Along these lines, participant ability to identify spoof websites was around the level of chance. Hierarchical Bayesian logistic models were used to compare the accuracy of two-factor (i.e., website security and encryption level), survey-based (i.e., security knowledge and website familiarity), and real-time measures (i.e., mouse tracking) in predicting risky online behavior during phishing

  14. Statistical Models for Predicting Threat Detection From Human Behavior

    Directory of Open Access Journals (Sweden)

    Timothy Kelley

    2018-04-01

    Users must regularly distinguish between secure and insecure cyber platforms in order to preserve their privacy and safety. Mouse tracking is an accessible, high-resolution measure that can be leveraged to understand the dynamics of perception, categorization, and decision-making in threat detection. Researchers have begun to utilize measures like mouse tracking in cyber security research, including in the study of risky online behavior. However, it remains an empirical question to what extent real-time information about user behavior is predictive of user outcomes and demonstrates added value compared to traditional self-report questionnaires. Participants navigated through six simulated websites, which resembled either secure “non-spoof” or insecure “spoof” versions of popular websites. Websites also varied in terms of authentication level (i.e., extended validation, standard validation, or partial encryption). Spoof websites had modified Uniform Resource Locator (URL) and authentication level. Participants chose to “login” to or “back” out of each website based on perceived website security. Mouse tracking information was recorded throughout the task, along with task performance. After completing the website identification task, participants completed a questionnaire assessing their security knowledge and degree of familiarity with the websites simulated during the experiment. Despite being primed to the possibility of website phishing attacks, participants generally showed a bias for logging in to websites versus backing out of potentially dangerous sites. Along these lines, participant ability to identify spoof websites was around the level of chance. Hierarchical Bayesian logistic models were used to compare the accuracy of two-factor (i.e., website security and encryption level), survey-based (i.e., security knowledge and website familiarity), and real-time measures (i.e., mouse tracking) in predicting risky online behavior

  15. Statistical Models for Predicting Threat Detection From Human Behavior.

    Science.gov (United States)

    Kelley, Timothy; Amon, Mary J; Bertenthal, Bennett I

    2018-01-01

    Users must regularly distinguish between secure and insecure cyber platforms in order to preserve their privacy and safety. Mouse tracking is an accessible, high-resolution measure that can be leveraged to understand the dynamics of perception, categorization, and decision-making in threat detection. Researchers have begun to utilize measures like mouse tracking in cyber security research, including in the study of risky online behavior. However, it remains an empirical question to what extent real-time information about user behavior is predictive of user outcomes and demonstrates added value compared to traditional self-report questionnaires. Participants navigated through six simulated websites, which resembled either secure "non-spoof" or insecure "spoof" versions of popular websites. Websites also varied in terms of authentication level (i.e., extended validation, standard validation, or partial encryption). Spoof websites had modified Uniform Resource Locator (URL) and authentication level. Participants chose to "login" to or "back" out of each website based on perceived website security. Mouse tracking information was recorded throughout the task, along with task performance. After completing the website identification task, participants completed a questionnaire assessing their security knowledge and degree of familiarity with the websites simulated during the experiment. Despite being primed to the possibility of website phishing attacks, participants generally showed a bias for logging in to websites versus backing out of potentially dangerous sites. Along these lines, participant ability to identify spoof websites was around the level of chance. Hierarchical Bayesian logistic models were used to compare the accuracy of two-factor (i.e., website security and encryption level), survey-based (i.e., security knowledge and website familiarity), and real-time measures (i.e., mouse tracking) in predicting risky online behavior during phishing attacks
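    The three records above describe hierarchical Bayesian logistic models comparing two-factor, survey-based, and real-time (mouse-tracking) predictors. As a simplified, non-Bayesian sketch of that comparison, the snippet below fits an ordinary logistic regression to each hypothetical feature set and compares cross-validated accuracy; all variable names and data are illustrative assumptions, not the study's measures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 600  # hypothetical website-visit trials

# Hypothetical feature sets: survey measures vs real-time mouse measures.
survey = rng.normal(size=(n, 2))   # security knowledge, website familiarity
mouse = rng.normal(size=(n, 3))    # e.g. curvature, dwell time, peak velocity
logit = 0.2 * survey[:, 0] + 1.0 * mouse[:, 0] - 0.8 * mouse[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = risky login on a spoof site

for name, X in [("survey-based", survey), ("real-time", mouse),
                ("combined", np.hstack([survey, mouse]))]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name:12s} mean CV accuracy = {acc:.3f}")
```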

  16. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  17. Modelling hydrodynamic parameters to predict flow assisted corrosion

    International Nuclear Information System (INIS)

    Poulson, B.; Greenwell, B.; Chexal, B.; Horowitz, J.

    1992-01-01

    During the past 15 years, flow assisted corrosion has been a worldwide problem in the power generating industry. The phenomenon is complex and depends on environment, material composition, and hydrodynamic factors. Recently, modeling of flow assisted corrosion has become a subject of great importance. A key part of this effort is modeling the hydrodynamic aspects of this issue. This paper examines which hydrodynamic parameter should be used to correlate the occurrence and rate of flow assisted corrosion with physically meaningful parameters, discusses ways of measuring the relevant hydrodynamic parameter, and describes how the hydrodynamic data are incorporated into the predictive model

  18. Modeling a Predictive Energy Equation Specific for Maintenance Hemodialysis.

    Science.gov (United States)

    Byham-Gray, Laura D; Parrott, J Scott; Peters, Emily N; Fogerite, Susan Gould; Hand, Rosa K; Ahrens, Sean; Marcus, Andrea Fleisch; Fiutem, Justin J

    2017-03-01

    Hypermetabolism is theorized in patients diagnosed with chronic kidney disease who are receiving maintenance hemodialysis (MHD). We aimed to distinguish key disease-specific determinants of resting energy expenditure to create a predictive energy equation that more precisely establishes energy needs with the intent of preventing protein-energy wasting. For this 3-year multisite cross-sectional study (N = 116), eligible participants were diagnosed with chronic kidney disease and were receiving MHD for at least 3 months. Predictors for the model included weight, sex, age, C-reactive protein (CRP), glycosylated hemoglobin, and serum creatinine. The outcome variable was measured resting energy expenditure (mREE). Regression modeling was used to generate predictive formulas and Bland-Altman analyses to evaluate accuracy. The majority were male (60.3%), black (81.0%), and non-Hispanic (76.7%), and 23% were ≥65 years old. After screening for multicollinearity, the best predictive model of mREE (R² = 0.67) included weight, age, sex, and CRP. Two alternative models with acceptable predictability (R² = 0.66) were derived with glycosylated hemoglobin or serum creatinine. Based on Bland-Altman analyses, the maintenance hemodialysis equation that included CRP had the best precision, with the highest proportion of participants' predicted energy expenditure classified as accurate (61.2%) and with the lowest number of individuals with underestimation or overestimation. This study confirms disease-specific factors as key determinants of mREE in patients on MHD and provides a preliminary predictive energy equation. Further prospective research is necessary to test the reliability and validity of this equation across diverse populations of patients who are receiving MHD.
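    Bland-Altman analysis, used above to judge precision, reduces to a few lines of arithmetic: the bias is the mean of the prediction-measurement differences and the 95% limits of agreement are bias ± 1.96 standard deviations. A numeric sketch on hypothetical REE values follows; the ±10% accuracy criterion is a common convention in the energy-equation literature, assumed here rather than taken from the paper.

```python
import numpy as np

# Hypothetical measured and equation-predicted REE (kcal/day).
rng = np.random.default_rng(3)
mree = rng.normal(1600, 250, size=116)
pree = mree + rng.normal(40, 120, size=116)   # prediction with bias and scatter

diff = pree - mree
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                 # half-width of limits of agreement

# A prediction is often called "accurate" if within +/-10% of measured REE.
accurate = np.mean(np.abs(diff) / mree <= 0.10) * 100
print(f"bias = {bias:.0f} kcal/d, LoA = [{bias - loa:.0f}, {bias + loa:.0f}], "
      f"accurate predictions = {accurate:.0f}%")
```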

  19. Measurement and prediction of thermochemical history effects on sensitization development in austenitic stainless steels

    International Nuclear Information System (INIS)

    Bruemmer, S.M.; Charlot, L.A.

    1985-11-01

    The effects of thermal and thermomechanical treatments on sensitization development in Type 304 and 316 stainless steels have been measured and compared to model predictions. Sensitization development resulting from isothermal, continuous cooling and pipe welding treatments has been evaluated. An empirically modified, theoretically based model is shown to accurately predict material degree of sensitization (DOS) as expressed by the electrochemical potentiokinetic reactivation (EPR) test after both simple and complex treatments. Material DOS is also examined using analytical electron microscopy to document grain boundary chromium depletion and is compared to EPR test results

  20. Predictive modeling of terrestrial radiation exposure from geologic materials

    Science.gov (United States)

    Haber, Daniel A.

    Aerial gamma ray surveys are an important tool for national security, scientific, and industrial interests in determining locations of both anthropogenic and natural sources of radioactivity. There is a relationship between radioactivity and geology and in the past this relationship has been used to predict geology from an aerial survey. The purpose of this project is to develop a method to predict the radiologic exposure rate of the geologic materials in an area by creating a model using geologic data, images from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), geochemical data, and pre-existing low spatial resolution aerial surveys from the National Uranium Resource Evaluation (NURE) Survey. Using these data, geospatial areas, referred to as background radiation units, homogenous in terms of K, U, and Th are defined and the gamma ray exposure rate is predicted. The prediction is compared to data collected via detailed aerial survey by our partner National Security Technologies, LLC (NSTec), allowing for the refinement of the technique. High resolution radiation exposure rate models have been developed for two study areas in Southern Nevada that include the alluvium on the western shore of Lake Mohave, and Government Wash north of Lake Mead; both of these areas are arid with little soil moisture and vegetation. We determined that by using geologic units to define radiation background units of exposed bedrock and ASTER visualizations to subdivide radiation background units of alluvium, regions of homogeneous geochemistry can be defined allowing for the exposure rate to be predicted. Soil and rock samples have been collected at Government Wash and Lake Mohave as well as a third site near Cameron, Arizona. K, U, and Th concentrations of these samples have been determined using inductively coupled plasma mass spectrometry (ICP-MS) and laboratory counting using radiation detection equipment. In addition, many sample locations also have

  1. Accuracy of some simple models for predicting particulate interception and retention in agricultural systems

    International Nuclear Information System (INIS)

    Pinder, J.E. III; McLeod, K.W.; Adriano, D.C.

    1989-01-01

    The accuracy of three radionuclide transfer models for predicting the interception and retention of airborne particles by agricultural crops was tested using Pu-bearing aerosols released to the atmosphere from nuclear fuel facilities on the U.S. Department of Energy's Savannah River Plant, near Aiken, SC. The models evaluated were: (1) NRC, the model defined in U.S. Nuclear Regulatory Guide 1.109; (2) FOOD, a model similar to the NRC model that also predicts concentrations in grains; and (3) AGNS, a model developed from the NRC model for the southeastern United States. Plutonium concentrations in vegetation and grain were predicted from measured deposition rates and compared to concentrations observed in the field. Crops included wheat, soybeans, corn and cabbage. Although predictions of the three models differed by less than a factor of 4, they showed different abilities to predict concentrations observed in the field. The NRC and FOOD models consistently underpredicted the observed Pu concentrations for vegetation. The AGNS model was a more accurate predictor of Pu concentrations for vegetation. Both the FOOD and AGNS models accurately predicted the Pu concentrations for grains

  2. Real-time prediction models for output power and efficiency of grid-connected solar photovoltaic systems

    International Nuclear Information System (INIS)

    Su, Yan; Chan, Lai-Cheong; Shu, Lianjie; Tsui, Kwok-Leung

    2012-01-01

    Highlights: ► We develop online prediction models for solar photovoltaic system performance. ► The proposed prediction models are simple but with reasonable accuracy. ► The maximum monthly average minutely efficiency varies between 10.81% and 12.63%. ► The average efficiency tends to be slightly higher in winter months. - Abstract: This paper develops new real-time prediction models for output power and energy efficiency of solar photovoltaic (PV) systems. These models were validated using measured data of a grid-connected solar PV system in Macau. Both time frames based on yearly average and monthly average are considered. It is shown that the prediction model for the yearly/monthly average of the minutely output power fits the measured data very well with a high value of R². The online prediction model for system efficiency is based on the ratio of the predicted output power to the predicted solar irradiance. This ratio model is shown to be able to fit the intermediate phase (9 am to 4 pm) very well but is not accurate for the growth and decay phases, where the system efficiency is near zero. However, it can still serve a useful purpose for practitioners as most PV systems work in the most efficient manner over this period. It is shown that the maximum monthly average minutely efficiency varies over a small range of 10.81% to 12.63% in different months with slightly higher efficiency in winter months.

  3. Validations and improvements of airfoil trailing-edge noise prediction models using detailed experimental data

    DEFF Research Database (Denmark)

    Kamruzzaman, M.; Lutz, Th.; Würz, W.

    2012-01-01

    This paper describes an extensive assessment and a step-by-step validation of different turbulent boundary-layer trailing-edge noise prediction schemes developed within the European Union funded wind energy project UpWind. To validate the prediction models, measurements of turbulent boundary-layer properties such as two-point turbulent velocity correlations, the spectra of the associated wall pressure fluctuations and the emitted trailing-edge far-field noise were performed in the laminar wind tunnel of the Institute of Aerodynamics and Gas Dynamics, University of Stuttgart. The measurements were carried out for a NACA 643-418 airfoil at Re = 2.5 × 10⁶ and angles of attack of −6° to 6°. Numerical results of the different prediction schemes are extensively validated and discussed. The investigations on the TNO-Blake noise prediction model show that the numerical wall pressure fluctuation spectra agree with measurements in the frequency region higher than 1 kHz, whereas they over-predict the sound pressure level in the low-frequency region. Copyright © 2011 John Wiley & Sons, Ltd.

  4. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
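    The finding that simple averaging models perform well at 15-min granularity can be illustrated directly. The sketch below builds a day-of-week/time-of-day average profile from hypothetical consumption data and scores it with MAPE; the three-week training window is an arbitrary assumption.

```python
import numpy as np
import pandas as pd

# Hypothetical 15-min electricity consumption for four weeks.
idx = pd.date_range("2014-06-02", periods=4 * 7 * 96, freq="15min")
rng = np.random.default_rng(5)
kwh = 10 + 5 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 1, len(idx))
load = pd.Series(kwh, index=idx)

# Averaging model: predict each 15-min slot as the mean of the same slot on
# the same weekday over the preceding three weeks.
train, test = load[idx < "2014-06-23"], load[idx >= "2014-06-23"]
profile = train.groupby([train.index.dayofweek, train.index.hour,
                         train.index.minute]).mean()
keys = list(zip(test.index.dayofweek, test.index.hour, test.index.minute))
pred = profile.loc[keys].to_numpy()

mape = np.mean(np.abs(test.to_numpy() - pred) / test.to_numpy()) * 100
print(f"MAPE = {mape:.1f}%")
```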

  5. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
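    The model family compared above, and the proposed ensemble, can be sketched with scikit-learn. Below, Ridge, LASSO and Elastic Net whole-genome predictors are combined with a stand-in risk score through a linear meta-model; the simulated genotypes, effect sizes and regularization strengths are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet, LinearRegression
from sklearn.model_selection import cross_val_predict, train_test_split

rng = np.random.default_rng(11)
n, p = 500, 2000                           # hypothetical cohort: n people, p SNPs
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)
beta = np.zeros(p)
beta[:50] = rng.normal(0, 0.05, 50)        # a polygenic trait with small effects
y = X @ beta + rng.normal(0, 1, n)
prs = X @ (beta + rng.normal(0, 0.02, p))  # noisy stand-in for a GWAMA risk score

# Base whole-genome predictors (dense vs sparse), combined out-of-fold.
models = [Ridge(alpha=1.0), Lasso(alpha=0.01, max_iter=5000),
          ElasticNet(alpha=0.01, max_iter=5000)]
Z = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in models] + [prs])

# Meta-model: a linear combination of base predictions and the risk score.
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.3, random_state=0)
meta = LinearRegression().fit(Z_tr, y_tr)
print("meta-model R^2 on held-out individuals:", round(meta.score(Z_te, y_te), 3))
```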

  6. The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing.

    Directory of Open Access Journals (Sweden)

    Jaclyn K Mann

    2014-08-01

    Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model), which is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple, and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis and replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10⁻⁶) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10⁻¹²). Performance of the Potts model (r = -0.73, p = 9.7×10⁻⁹) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion

  7. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

    Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20 years of measured global solar radiation data. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models, owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with three models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (Lat. 30°51′N and long. 29°34′E), and then the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. Commonly used statistical error measures are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulas of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimations of the global solar radiation using this approach can be employed in the design and evaluation of performance for
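    The abstract does not give the functional forms, but temperature-based models of this family are typically built on the Hargreaves-Samani relation, with the Annandale model adding an altitude correction. A hedged sketch with illustrative coefficient values, not the paper's fitted formulae:

```python
import numpy as np

def hargreaves_samani(tmax, tmin, ra, krs=0.16):
    """Estimate global solar radiation H (MJ m-2 day-1) from the daily
    temperature range and extraterrestrial radiation Ra.
    Typical krs: ~0.16 inland, ~0.19 coastal (illustrative values)."""
    return krs * np.sqrt(tmax - tmin) * ra

def annandale(tmax, tmin, ra, z, krs=0.16):
    """Annandale et al. altitude-adjusted variant; z is elevation in metres."""
    return krs * (1 + 2.7e-5 * z) * np.sqrt(tmax - tmin) * ra

# Hypothetical month: Ra = 35 MJ m-2 day-1, Tmax = 30 C, Tmin = 18 C, z = 20 m.
print(f"H (Hargreaves-Samani) = {hargreaves_samani(30, 18, 35):.1f} MJ m-2 day-1")
print(f"H (Annandale)         = {annandale(30, 18, 35, z=20):.1f} MJ m-2 day-1")
```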

  8. Presurgery resting-state local graph-theory measures predict neurocognitive outcomes after brain surgery in temporal lobe epilepsy.

    Science.gov (United States)

    Doucet, Gaelle E; Rider, Robert; Taylor, Nathan; Skidmore, Christopher; Sharan, Ashwini; Sperling, Michael; Tracy, Joseph I

    2015-04-01

    This study determined the ability of resting-state functional connectivity (rsFC) graph-theory measures to predict neurocognitive status postsurgery in patients with temporal lobe epilepsy (TLE) who underwent anterior temporal lobectomy (ATL). A presurgical resting-state functional magnetic resonance imaging (fMRI) condition was collected in 16 left and 16 right TLE patients who underwent ATL. In addition, patients received neuropsychological testing pre- and postsurgery in verbal and nonverbal episodic memory, language, working memory, and attention domains. Regarding the functional data, we investigated three graph-theory properties (local efficiency, distance, and participation), measuring segregation, integration and centrality, respectively. These measures were only computed in regions of functional relevance to the ictal pathology, or the cognitive domain. Linear regression analyses were computed to predict the change in each neurocognitive domain. Our analyses revealed that cognitive outcome was successfully predicted with at least 68% of the variance explained in each model, for both TLE groups. The only model not significantly predictive involved nonverbal episodic memory outcome in right TLE. Measures involving the healthy hippocampus were the most common among the predictors, suggesting that enhanced integration of this structure with the rest of the brain may improve cognitive outcomes. Regardless of TLE group, left inferior frontal regions were the best predictors of language outcome. Working memory outcome was predicted mostly by right-sided regions, in both groups. Overall, the results indicated our integration measure was the most predictive of neurocognitive outcome. In contrast, our segregation measure was the least predictive. This study provides evidence that presurgery rsFC measures may help determine neurocognitive outcomes following ATL. The results have implications for refining our understanding of compensatory reorganization and predicting

  9. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which could accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models which consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones.
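    Of the weighting techniques named above, AHP is the most involved: weights are taken from the principal eigenvector of a pairwise-comparison matrix and checked with a consistency ratio. A minimal sketch on a hypothetical three-factor matrix:

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three landslide factors
# (slope, lithology, land use) on Saaty's 1-9 scale; A[i, j] ~ w_i / w_j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalised factor weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
ri = 0.58                                # Saaty's random index for n = 3
print("weights:", np.round(weights, 3), " CR =", round(ci / ri, 3))
# A consistency ratio CR < 0.1 is conventionally considered acceptable.
```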

  10. Prediction of fermentation index of cocoa beans (Theobroma cacao L.) based on color measurement and artificial neural networks.

    Science.gov (United States)

    León-Roque, Noemí; Abderrahim, Mohamed; Nuñez-Alejos, Luis; Arribas, Silvia M; Condezo-Hoyos, Luis

    2016-12-01

    Several procedures are currently used to assess the fermentation index (FI) of cocoa beans (Theobroma cacao L.) for quality control. However, all of them present several drawbacks. The aim of the present work was to develop and validate a simple image-based quantitative procedure, using color measurement and artificial neural networks (ANNs). ANN models based on color measurements were tested to predict the fermentation index (FI) of fermented cocoa beans. The RGB values were measured from the surface and center regions of fermented beans in images obtained by camera and desktop scanner. The FI was defined as the ratio of total free amino acids in fermented versus non-fermented samples. The ANN model that included RGB color measurements of the fermented cocoa surface and the R/G ratio in cocoa bean alkaline extracts was able to predict FI with no statistical difference compared with the experimental values. Performance of the ANN model was evaluated by the coefficient of determination, Bland-Altman plot and Passing-Bablok regression analyses. Moreover, in fermented beans, total sugar content and titratable acidity showed a similar pattern to the total free amino acids predicted through the color-based ANN model. The results of the present work demonstrate that the proposed ANN model can be adopted as a low-cost and in situ procedure to predict FI in fermented cocoa beans through apps developed for mobile devices. Copyright © 2016 Elsevier B.V. All rights reserved.
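    As a rough stand-in for the paper's network, the sketch below trains a small multilayer perceptron on hypothetical surface RGB values plus an R/G ratio feature to predict a fermentation index; the architecture and simulated data are assumptions, not the published model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(9)
n = 200
rgb = rng.uniform(0, 255, size=(n, 3))       # surface R, G, B of each bean
rg_ratio = rgb[:, 0] / (rgb[:, 1] + 1e-9)    # R/G ratio feature
X = np.column_stack([rgb, rg_ratio])
# Hypothetical FI that depends on surface color, plus measurement noise.
fi = 0.5 + 0.004 * rgb[:, 0] - 0.002 * rgb[:, 1] + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, fi, test_size=0.25, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                 random_state=0))
ann.fit(X_tr, y_tr)
print("R^2 on held-out beans:", round(ann.score(X_te, y_te), 3))
```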

  11. Regional differences in prediction models of lung function in Germany

    Directory of Open Access Journals (Sweden)

    Schäper Christoph

    2010-04-01

    Background: Little is known about the influencing potential of specific characteristics on lung function in different populations. The aim of this analysis was to determine whether lung function determinants differ between subpopulations within Germany and whether prediction equations developed for one subpopulation are also adequate for another subpopulation. Methods: Within three studies (KORA C, SHIP-I, ECRHS-I) in different areas of Germany, 4059 adults performed lung function tests. The available data consisted of forced expiratory volume in one second, forced vital capacity and peak expiratory flow rate. For each study multivariate regression models were developed to predict lung function and Bland-Altman plots were established to evaluate the agreement between predicted and measured values. Results: The final regression equations for FEV1 and FVC showed adjusted r-square values between 0.65 and 0.75, and for PEF they were between 0.46 and 0.61. In all studies gender, age, height and pack-years were significant determinants, each with a similar effect size. Regarding other predictors there were some, although not statistically significant, differences between the studies. Bland-Altman plots indicated that the regression models for each individual study adequately predict medium (i.e. normal but not extremely high or low) lung function values in the whole study population. Conclusions: Simple models with gender, age and height explain a substantial part of lung function variance whereas further determinants add less than 5% to the total explained r-squared, at least for FEV1 and FVC. Thus, for different adult subpopulations of Germany one simple model for each lung function measure is still sufficient.

  12. A stochastic model for quantum measurement

    International Nuclear Information System (INIS)

    Budiyono, Agung

    2013-01-01

    We develop a statistical model of microscopic stochastic deviation from classical mechanics based on a stochastic process with a transition probability that is assumed to be given by an exponential distribution of infinitesimal stationary action. We apply the statistical model to stochastically modify a classical mechanical model for the measurement of physical quantities reproducing the prediction of quantum mechanics. The system+apparatus always has a definite configuration at all times, as in classical mechanics, fluctuating randomly following a continuous trajectory. On the other hand, the wavefunction and quantum mechanical Hermitian operator corresponding to the physical quantity arise formally as artificial mathematical constructs. During a single measurement, the wavefunction of the whole system+apparatus evolves according to a Schrödinger equation and the configuration of the apparatus acts as the pointer of the measurement so that there is no wavefunction collapse. We will also show that while the outcome of each single measurement event does not reveal the actual value of the physical quantity prior to measurement, its average in an ensemble of identical measurements is equal to the average of the actual value of the physical quantity prior to measurement over the distribution of the configuration of the system. (paper)

  13. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
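    The transition rates at the heart of this work can be estimated directly from experience-sampling sequences. A minimal sketch of the empirical transition-probability matrix, on hypothetical emotion reports, against which participants' rated likelihoods could be correlated:

```python
import numpy as np

emotions = ["calm", "happy", "anxious", "sad"]
idx = {e: i for i, e in enumerate(emotions)}

# Hypothetical experience-sampling sequence of reported emotions.
rng = np.random.default_rng(13)
seq = rng.choice(emotions, size=500, p=[0.4, 0.3, 0.2, 0.1])

# Count transitions between consecutive reports, then row-normalise.
counts = np.zeros((4, 4))
for a, b in zip(seq[:-1], seq[1:]):
    counts[idx[a], idx[b]] += 1
transitions = counts / counts.sum(axis=1, keepdims=True)

# transitions[i, j] estimates P(next emotion = j | current emotion = i).
print(np.round(transitions, 2))
```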

  14. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  15. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression models.
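    A two-component Poisson mixture regression of the kind compared above can be fitted with a simple EM loop, using weighted Poisson GLMs for the M-step and computing BIC for model comparison. The sketch below is a bare-bones version on simulated data; it omits the concomitant-variable and zero-inflated extensions the paper evaluates, and all data are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import poisson

rng = np.random.default_rng(2)
n = 400
X = sm.add_constant(rng.normal(size=(n, 2)))   # hypothetical risk factors
z = rng.binomial(1, 0.4, n)                    # latent low/high-risk class
beta_lo = np.array([0.3, 0.6, -0.2])
beta_hi = np.array([1.5, 0.1, 0.5])
B = np.where(z[:, None] == 1, beta_hi, beta_lo)
y = rng.poisson(np.exp(np.sum(X * B, axis=1)))

# EM: alternate weighted Poisson GLM fits (M-step) with responsibility
# updates (E-step) for the two-component mixture.
w = rng.uniform(0.3, 0.7, n)                   # P(component 2 | observation)
pi = 0.5
for _ in range(100):
    m1 = sm.GLM(y, X, family=sm.families.Poisson(), var_weights=1 - w).fit()
    m2 = sm.GLM(y, X, family=sm.families.Poisson(), var_weights=w).fit()
    p1 = (1 - pi) * poisson.pmf(y, m1.predict(X))
    p2 = pi * poisson.pmf(y, m2.predict(X))
    w = p2 / (p1 + p2)
    pi = w.mean()

loglik = np.log(p1 + p2).sum()
k = 2 * X.shape[1] + 1                         # coefficients + mixing weight
bic = -2 * loglik + k * np.log(n)
print(f"mixing weight = {pi:.2f}, BIC = {bic:.1f}")
```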

  16. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression models. PMID:27999611

  17. Transport assessment - arid: measurement and prediction of water movement below the root zone

    International Nuclear Information System (INIS)

    Gee, G.W.; Kirkham, R.R.

    1984-01-01

    The amount of water transported below the root-zone and available for drainage (recharge) must be known in order to quantify the potential for leaching at low-level waste sites. Under arid site conditions, we quantified drainage by using weighing lysimeters containing sandy soil and measured 6 and 11 cm of drainage for a 1-yr period (June 1983-May 1984) from grass-covered and bare-soil surfaces, respectively. Precipitation during this period at our test site near Richland, Washington, was 25 cm. Similar drainage values were estimated from neutron probe measurements of water content profile changes in an adjacent grass-covered site. These data suggest that significant amounts of drainage can occur at arid sites when soils are coarse textured and precipitation occurs during fall and winter months. Model simulations predicted drainage values comparable to those measured with our weighing lysimeters. Long-term, 500- to 1000-yr predictions of leaching are possible with our model simulations. However, additional studies are needed to evaluate the effect of soil variability and stochastic rainfall inputs on drainage estimates, particularly for arid sites

  18. Predictive multiscale computational model of shoe-floor coefficient of friction.

    Science.gov (United States)

    Moghaddam, Seyed Reza M; Acharya, Arjun; Redfern, Mark S; Beschorner, Kurt E

    2018-01-03

    Understanding the frictional interactions between the shoe and floor during walking is critical to prevention of slips and falls, particularly when contaminants are present. A multiscale finite element model of shoe-floor-contaminant friction was developed that takes into account the surface and material characteristics of the shoe and flooring in microscopic and macroscopic scales. The model calculates shoe-floor coefficient of friction (COF) in the boundary lubrication regime, where effects of adhesion friction and hydrodynamic pressures are negligible. The validity of model outputs was assessed by comparing model predictions to the experimental results from mechanical COF testing. The multiscale model estimates were linearly related to the experimental results (p < 0.0001). The model predicted 73% of variability in experimentally-measured shoe-floor-contaminant COF. The results demonstrate the potential of multiscale finite element modeling in aiding slip-resistant shoe and flooring design and reducing slip and fall injuries. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  19. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov Chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to incorporate the advantages and disadvantages of different models to obtain better accuracy.
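    The Markov Chain model mentioned above treats pavement condition as discrete states with fixed annual transition probabilities, so prediction is just repeated matrix multiplication. A minimal sketch with a hypothetical transition matrix, not values from the paper:

```python
import numpy as np

# Hypothetical faulting condition states: good, fair, poor.
# P[i, j] = one-year probability of moving from state i to state j
# (no maintenance, so a section can only stay put or deteriorate).
P = np.array([[0.85, 0.13, 0.02],
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0])   # a new section starts in "good"
for year in (5, 10, 15):
    dist = state @ np.linalg.matrix_power(P, year)
    print(f"year {year:2d}: good={dist[0]:.2f} fair={dist[1]:.2f} "
          f"poor={dist[2]:.2f}")
```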

  20. Information as a Measure of Model Skill

    Science.gov (United States)

    Roulston, M. S.; Smith, L. A.

    2002-12-01

    Physicist Paul Davies has suggested that rather than the quest for laws that approximate ever more closely to "truth", science should be regarded as the quest for compressibility. The goodness of a model can be judged by the degree to which it allows us to compress data describing the real world. The "logarithmic scoring rule" is a method for evaluating probabilistic predictions of reality that turns this philosophical position into a practical means of model evaluation. This scoring rule measures the information deficit or "ignorance" of someone in possession of the prediction. A more applied viewpoint is that the goodness of a model is determined by its value to a user who must make decisions based upon its predictions. Any form of decision making under uncertainty can be reduced to a gambling scenario. Kelly showed that the value of a probabilistic prediction to a gambler pursuing the maximum return on their bets depends on their "ignorance", as determined from the logarithmic scoring rule, thus demonstrating a one-to-one correspondence between data compression and gambling returns. Thus information theory provides a way to think about model evaluation, that is both philosophically satisfying and practically oriented. P.C.W. Davies, in "Complexity, Entropy and the Physics of Information", Proceedings of the Santa Fe Institute, Addison-Wesley 1990 J. Kelly, Bell Sys. Tech. Journal, 35, 916-926, 1956.
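    The logarithmic scoring rule described here is simple to state in code: the "ignorance" of a probabilistic forecast is the negative log of the probability it assigned to the verifying outcome. A minimal sketch, with hypothetical forecasts:

```python
import numpy as np

def ignorance(p_forecast, outcome_index):
    """Logarithmic score: bits of information deficit given the forecast
    probability assigned to the outcome that actually occurred."""
    return -np.log2(p_forecast[outcome_index])

# Hypothetical three-category forecast (e.g. below/near/above normal).
forecast = np.array([0.2, 0.5, 0.3])
print(ignorance(forecast, 1))   # 1.0 bit if the 0.5 category verifies

# Averaged over many forecasts, lower ignorance means better compression of
# the observations and, by Kelly's result, higher expected log-returns for a
# gambler betting proportionally on the forecast probabilities.
scores = [ignorance(forecast, o) for o in (1, 1, 0, 2, 1)]
print(round(float(np.mean(scores)), 2))
```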

  1. Levels of naturally occurring gamma radiation measured in British homes and their prediction in particular residences

    Energy Technology Data Exchange (ETDEWEB)

    Kendall, G.M. [University of Oxford, Cancer Epidemiology Unit, Oxford (United Kingdom); Wakeford, R. [University of Manchester, Centre for Occupational and Environmental Health, Institute of Population Health, Manchester (United Kingdom); Athanson, M. [University of Oxford, Bodleian Library, Oxford (United Kingdom); Vincent, T.J. [University of Oxford, Childhood Cancer Research Group, Oxford (United Kingdom); Carter, E.J. [University of Worcester, Earth Heritage Trust, Geological Records Centre, Henwick Grove, Worcester (United Kingdom); McColl, N.P. [Public Health England, Centre for Radiation, Chemical and Environmental Hazards, Chilton, Didcot, Oxon (United Kingdom); Little, M.P. [National Cancer Institute, DHHS, NIH, Radiation Epidemiology Branch, Division of Cancer Epidemiology and Genetics, Bethesda, MD (United States)

    2016-03-15

    Gamma radiation from natural sources (including directly ionising cosmic rays) is an important component of background radiation. In the present paper, indoor measurements of naturally occurring gamma rays that were undertaken as part of the UK Childhood Cancer Study are summarised, and it is shown that these are broadly compatible with an earlier UK National Survey. The distribution of indoor gamma-ray dose rates in Great Britain is approximately normal with mean 96 nGy/h and standard deviation 23 nGy/h. Directly ionising cosmic rays contribute about one-third of the total. The expanded dataset allows a more detailed description than previously of indoor gamma-ray exposures and in particular their geographical variation. Various strategies for predicting indoor natural background gamma-ray dose rates were explored. In the first of these, a geostatistical model was fitted, which assumes an underlying geologically determined spatial variation, superimposed on which is a Gaussian stochastic process with Matern correlation structure that models the observed tendency of dose rates in neighbouring houses to correlate. In the second approach, a number of dose-rate interpolation measures were first derived, based on averages over geologically or administratively defined areas or using distance-weighted averages of measurements at nearest-neighbour points. Linear regression was then used to derive an optimal linear combination of these interpolation measures. The predictive performances of the two models were compared via cross-validation, using a randomly selected 70 % of the data to fit the models and the remaining 30 % to test them. The mean square error (MSE) of the linear-regression model was lower than that of the Gaussian-Matern model (MSE 378 and 411, respectively). The predictive performance of the two candidate models was also evaluated via simulation; the OLS model performs significantly better than the Gaussian-Matern model. (orig.)
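    The cross-validation used to compare the candidate models reduces to a few lines: fit on a random 70% of houses, score MSE on the remaining 30%. In the sketch below, the features are hypothetical stand-ins for the paper's interpolation measures, and only the simple OLS model is shown, without the competing Gaussian-Matern model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(17)
n = 2000
# Hypothetical interpolation measures for each house: geological-unit mean,
# administrative-area mean, and a distance-weighted neighbour mean (nGy/h).
X = rng.normal(96, 15, size=(n, 3))
dose = 0.5 * X[:, 0] + 0.2 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 18, n)

# A randomly selected 70% fits the model; the remaining 30% tests it.
X_tr, X_te, y_tr, y_te = train_test_split(X, dose, train_size=0.7,
                                          random_state=0)
ols = LinearRegression().fit(X_tr, y_tr)
mse = mean_squared_error(y_te, ols.predict(X_te))
print(f"hold-out MSE = {mse:.0f} (nGy/h)^2")
```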

  2. Levels of naturally occurring gamma radiation measured in British homes and their prediction in particular residences

    International Nuclear Information System (INIS)

    Kendall, G.M.; Wakeford, R.; Athanson, M.; Vincent, T.J.; Carter, E.J.; McColl, N.P.; Little, M.P.

    2016-01-01

    Gamma radiation from natural sources (including directly ionising cosmic rays) is an important component of background radiation. In the present paper, indoor measurements of naturally occurring gamma rays that were undertaken as part of the UK Childhood Cancer Study are summarised, and it is shown that these are broadly compatible with an earlier UK National Survey. The distribution of indoor gamma-ray dose rates in Great Britain is approximately normal with mean 96 nGy/h and standard deviation 23 nGy/h. Directly ionising cosmic rays contribute about one-third of the total. The expanded dataset allows a more detailed description than previously of indoor gamma-ray exposures and in particular their geographical variation. Various strategies for predicting indoor natural background gamma-ray dose rates were explored. In the first of these, a geostatistical model was fitted, which assumes an underlying geologically determined spatial variation, superimposed on which is a Gaussian stochastic process with Matérn correlation structure that models the observed tendency of dose rates in neighbouring houses to correlate. In the second approach, a number of dose-rate interpolation measures were first derived, based on averages over geologically or administratively defined areas or using distance-weighted averages of measurements at nearest-neighbour points. Ordinary least squares (OLS) linear regression was then used to derive an optimal linear combination of these interpolation measures. The predictive performances of the two models were compared via cross-validation, using a randomly selected 70 % of the data to fit the models and the remaining 30 % to test them. The mean square error (MSE) of the linear-regression model was lower than that of the Gaussian-Matérn model (MSE 378 and 411, respectively). The predictive performance of the two candidate models was also evaluated via simulation; the OLS model performs significantly better than the Gaussian-Matérn model. (orig.)

  3. Comparison of Finite Element Predictions to Measurements from the Sandia Microslip Experiment

    Energy Technology Data Exchange (ETDEWEB)

    LOBITZ,DONALD W.; GREGORY,DANNY LYNN; SMALLWOOD,DAVID O.

    2000-11-09

    When embarking on an experimental program for purposes of discovery and understanding, it is only prudent to use appropriate analysis tools to aid in the discovery process. Due to the limited scope of experimental measurement, analytical results can significantly complement the data after a reasonable validation process has occurred. In this manner the analytical results can help to explain certain measurements, suggest other measurements to take, and point to possible modifications to the experimental apparatus. For these reasons it was decided to create a detailed nonlinear finite element model of the Sandia Microslip Experiment. This experiment was designed to investigate energy dissipation due to microslip in bolted joints and to identify the critical parameters involved. In an attempt to limit the microslip to a single interface, a complicated system of rollers and cables was devised to clamp the two slipping members together with a prescribed normal load without using a bolt. An oscillatory tangential load is supplied via a shaker. The finite element model includes the clamping device in addition to the sequence of steps taken in setting up the experiment. The interface is modeled using Coulomb friction, requiring a modest validation procedure for estimating the coefficient of friction. Analysis results have indicated misalignment problems in the experimental procedure, identified transducer locations for more accurate measurements, predicted complex interface motions including the potential for galling, and identified regions where microslip occurs and during which parts of the loading cycle it occurs, all this in addition to the energy dissipated per cycle. A number of these predictions have been experimentally corroborated in varying degrees and are presented in the paper along with the details of the finite element model.

  4. Developing and implementing the use of predictive models for estimating water quality at Great Lakes beaches

    Science.gov (United States)

    Francy, Donna S.; Brady, Amie M.G.; Carvin, Rebecca B.; Corsi, Steven R.; Fuller, Lori M.; Harrison, John H.; Hayhurst, Brett A.; Lant, Jeremiah; Nevers, Meredith B.; Terrio, Paul J.; Zimmerman, Tammy M.

    2013-01-01

    Predictive models have been used at beaches to improve the timeliness and accuracy of recreational water-quality assessments over the most common current approach to water-quality monitoring, which relies on culturing fecal-indicator bacteria such as Escherichia coli (E. coli). Beach-specific predictive models use environmental and water-quality variables that are easily and quickly measured as surrogates to estimate concentrations of fecal-indicator bacteria or to provide the probability that a State recreational water-quality standard will be exceeded. When predictive models are used for beach closure or advisory decisions, they are referred to as “nowcasts.” During the recreational seasons of 2010-12, the U.S. Geological Survey (USGS), in cooperation with 23 local and State agencies, worked to improve existing nowcasts at 4 beaches, validate predictive models at another 38 beaches, and collect data for predictive-model development at 7 beaches throughout the Great Lakes. This report summarizes efforts to collect data and develop predictive models by multiple agencies and to compile existing information on the beaches and beach-monitoring programs into one comprehensive report. Local agencies measured E. coli concentrations and variables expected to affect E. coli concentrations such as wave height, turbidity, water temperature, and numbers of birds at the time of sampling. In addition to these field measurements, equipment was installed by the USGS or local agencies at or near several beaches to collect water-quality and meteorological measurements in near real time, including nearshore buoys, weather stations, and tributary staff gages and monitors. The USGS worked with local agencies to retrieve data from existing sources either manually or by use of tools designed specifically to compile and process data for predictive-model development. Predictive models were developed by use of linear regression and (or) partial least squares techniques for 42 beaches
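
    A minimal sketch of the kind of beach-specific regression nowcast described above (Python, synthetic data; the predictors, coefficients, tomorrow's conditions, and the 235 CFU/100 mL threshold are illustrative assumptions, not values from the report):

        import numpy as np
        from scipy.stats import norm
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 300
        turbidity = rng.lognormal(2.0, 0.5, n)          # NTU
        wave_height = rng.uniform(0.0, 1.5, n)          # m
        log_ec = 1.0 + 0.4 * np.log10(turbidity) + 0.6 * wave_height \
                 + rng.normal(0.0, 0.4, n)              # synthetic log10 E. coli

        X = np.column_stack([np.log10(turbidity), wave_height])
        model = LinearRegression().fit(X, log_ec)
        resid_sd = np.std(log_ec - model.predict(X), ddof=3)

        # Probability of exceeding an assumed 235 CFU/100 mL standard for
        # hypothetical conditions (15 NTU, 0.8 m waves), via the residual spread.
        pred = model.predict(np.array([[np.log10(15.0), 0.8]]))[0]
        print("P(exceedance):", 1.0 - norm.cdf(np.log10(235.0), pred, resid_sd))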

  5. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal...... steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  6. Settlement Prediction of Road Soft Foundation Using a Support Vector Machine (SVM) Based on Measured Data

    Directory of Open Access Journals (Sweden)

    Yu Huiling

    2016-01-01

    Full Text Available The support vector machine (SVM) is a relatively new artificial intelligence technique, based on statistical learning theory, which is increasingly being applied to geotechnical problems and is yielding encouraging results. A case study based on a road foundation engineering project shows that the forecast results are in good agreement with the measured data. The SVM model is also compared with a BP artificial neural network model and the traditional hyperbola method. The prediction results indicate that the SVM model has a better prediction ability than the BP neural network model and the hyperbola method. Therefore, settlement prediction based on the SVM model can reflect the actual settlement process more correctly. The results indicate that the method is effective and feasible, and that the nonlinear mapping relation between foundation settlement and its influencing factors can be expressed well. It will provide a new method for predicting foundation settlement.
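
    A minimal sketch of SVM-based settlement prediction of the kind described (Python with scikit-learn, synthetic monitoring data; the hyperbolic curve used to generate the data mirrors the traditional hyperbola method mentioned above, and all numbers are invented):

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        t = np.arange(10, 410, 10.0).reshape(-1, 1)     # days since loading
        s = 120 * t.ravel() / (80 + t.ravel()) + rng.normal(0, 1.5, t.size)  # mm

        # Train on the first 300 days, then check predictions against the rest.
        svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
        svr.fit(t[:30], s[:30])
        rmse = np.sqrt(np.mean((svr.predict(t[30:]) - s[30:]) ** 2))
        print(f"hold-out RMSE: {rmse:.2f} mm")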

  7. Measurement and modeling of advanced coal conversion processes

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, P.R.; Serio, M.A.; Hamblen, D.G.; Smoot, L.D.; Brewster, B.S. (Advanced Fuel Research, Inc., East Hartford, CT (United States) Brigham Young Univ., Provo, UT (United States))

    1991-01-01

    The overall objective of this program is the development of predictive capability for the design, scale-up, simulation, control and feedstock evaluation in advanced coal conversion devices. This program will merge significant advances made in measuring and quantitatively describing the mechanisms of coal conversion behavior with comprehensive computer codes for mechanistic modeling of entrained-bed gasification. Additional capabilities in predicting pollutant formation will be implemented and the technology will be expanded to fixed-bed reactors.

  8. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
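
    In the abstract's notation, the second criterion splits into the two estimable terms it mentions; a sketch of that standard decomposition (with the expectation and variance taken over the distributions of model structure, inputs and parameters, ŷ the model prediction, and y the true value):

        \[
        \mathrm{MSEP}_{\mathrm{uncertain}}(X) \;=\; \mathrm{E}\bigl[(\hat{y} - y)^{2}\bigr]
          \;=\; \underbrace{\bigl(\mathrm{E}[\hat{y}] - y\bigr)^{2}}_{\text{squared bias (from hindcasts)}}
          \;+\; \underbrace{\mathrm{Var}[\hat{y}]}_{\text{model variance (from a simulation experiment)}}
        \]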

  9. Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds

    Science.gov (United States)

    Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea

    2013-04-01

    Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in delivering flood warnings. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, in the Singapore Marina Reservoir catchment, runoff processes have a very short time of concentration (roughly one hour), so observational data alone are of little use for runoff prediction and weather predictions are required. Unfortunately, radar nowcasting methods cannot provide long-term weather predictions, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are often unreliable because of the fast motion and limited spatial extent of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network supplying the input variables (the states of the WRF model) to a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all candidate weather input variables provided by the WRF model. We explore different lead times to evaluate model reliability for longer-term predictions, as well as different time lags to see how past information could improve results. Results show that the proposed approach allows a significant improvement in prediction accuracy over the WRF model for the Singapore urban area.
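
    A minimal sketch of the virtual-sensor idea (Python with scikit-learn, synthetic data; the WRF state variables, the informative indices and the 3-step lead time are illustrative assumptions, and tree-ensemble feature importances stand in for the paper's dedicated input variable selection method):

        import numpy as np
        from sklearn.ensemble import ExtraTreesRegressor

        rng = np.random.default_rng(0)
        T, n_vars = 1000, 20
        X = rng.normal(size=(T, n_vars))         # WRF model states = "virtual sensors"
        rain = 2.0 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(0, 0.5, T)

        lead = 3                                  # predict rainfall `lead` steps ahead
        X_in, y = X[:-lead], rain[lead:]

        model = ExtraTreesRegressor(n_estimators=300, random_state=0)
        model.fit(X_in[:800], y[:800])
        top = np.argsort(model.feature_importances_)[::-1][:5]
        print("most informative virtual sensors:", top)
        print("test R^2:", model.score(X_in[800:], y[800:]))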

  10. Interpretable Predictive Models for Knowledge Discovery from Home-Care Electronic Health Records

    Directory of Open Access Journals (Sweden)

    Bonnie L. Westra

    2011-01-01

    Full Text Available The purpose of this methodological study was to compare methods of developing predictive rules that are parsimonious and clinically interpretable from electronic health record (EHR) home visit data, contrasting logistic regression with three data mining classification models. We address three problems commonly encountered in EHRs: the value of including clinically important variables with little variance, handling imbalanced datasets, and ease of interpretation of the resulting predictive models. Logistic regression and three classification models using Ripper, decision trees, and Support Vector Machines were applied to a case study for one outcome of improvement in oral medication management. Predictive rules for logistic regression, Ripper, and decision trees are reported and results compared using F-measures for data mining models and area under the receiver-operating characteristic curve for all models. The rules generated by the three classification models provide potentially novel insights into mining EHRs beyond those provided by standard logistic regression, and suggest steps for further study.

  11. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
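
    The ensemble-plus-emulator workflow can be sketched in a few lines (Python; a cubic polynomial fitted to 15 "model runs" of a toy η(θ) stands in for the statistical response surface, and a simple Metropolis sampler stands in for the full MCMC machinery; all numbers are hypothetical):

        import numpy as np

        rng = np.random.default_rng(0)
        eta = lambda th: np.sin(th) + 0.5 * th   # stand-in for an expensive physics model

        # (1) Ensemble of model runs at design points; (2) cheap emulator fit.
        design = np.linspace(-2.0, 2.0, 15)
        emulator = np.poly1d(np.polyfit(design, eta(design), deg=3))

        # (3) Metropolis sampling of p(theta | y), with y = eta(theta) + eps,
        # using only the emulator (no further runs of the expensive model).
        y_obs, sigma = eta(0.8) + rng.normal(0.0, 0.1), 0.1
        def log_post(th):
            if abs(th) > 2.0:                    # flat prior on [-2, 2]
                return -np.inf
            return -0.5 * ((y_obs - emulator(th)) / sigma) ** 2

        chain, th, lp = [], 0.0, log_post(0.0)
        for _ in range(20000):
            prop = th + rng.normal(0.0, 0.3)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                th, lp = prop, lp_prop
            chain.append(th)
        print("posterior mean/sd:", np.mean(chain[2000:]), np.std(chain[2000:]))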

  12. Simple Predictive Models for Saturated Hydraulic Conductivity of Technosands

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Razzaghi, Fatemeh; Møldrup, Per

    2012-01-01

    Accurate estimation of saturated hydraulic conductivity (Ks) of technosands (gravel-free, coarse sands with negligible organic matter content) is important for irrigation and drainage management of athletic fields and golf courses. In this study, we developed two simple models for predicting Ks......-Rammler particle size distribution (PSD) function. The Ks and PSD data of 14 golf course sands from literature as well as newly measured data for a size fraction of Lunar Regolith Simulant, packed at three different dry bulk densities, were used for model evaluation. The pore network tortuosity......-connectivity parameter (m) obtained for pure coarse sand after fitting to measured Ks data was 1.68 for both models and in good agreement with m values obtained from recent solute and gas diffusion studies. Both the modified K-C and R-C models are easy to use and require limited parameter input, and both models gave...

  13. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  14. Muon polarization in the MEG experiment: predictions and measurements

    Energy Technology Data Exchange (ETDEWEB)

    Baldini, A.M.; Dussoni, S.; Galli, L.; Grassi, M.; Sergiampietri, F.; Signorelli, G. [Pisa Univ. (Italy); INFN Sezione di Pisa, Pisa (Italy); Bao, Y.; Hildebrandt, M.; Kettle, P.R.; Mtchedlishvili, A.; Papa, A.; Ritt, S. [Paul Scherrer Institut PSI, Villigen (Switzerland); Baracchini, E. [University of Tokyo, ICEPP, Tokyo (Japan); INFN, Laboratori Nazionali di Frascati, Rome (Italy); Bemporad, C.; Cei, F.; D'Onofrio, A.; Nicolo, D.; Tenchini, F. [INFN Sezione di Pisa, Pisa (Italy); Pisa Univ., Dipartimento di Fisica, Pisa (Italy); Berg, F.; Hodge, Z.; Rutar, G. [Paul Scherrer Institut PSI, Villigen (Switzerland); Swiss Federal Institute of Technology ETH, Zurich (Switzerland); Biasotti, M.; Gatti, F.; Pizzigoni, G. [INFN Sezione di Genova, Genova (Italy); Genova Univ., Dipartimento di Fisica, Genova (Italy); Boca, G.; De Bari, A. [INFN Sezione di Pavia, Pavia (Italy); Pavia Univ., Dipartimento di Fisica, Pavia (Italy); Cattaneo, P.W.; Rossella, M. [Pavia Univ. (Italy); INFN Sezione di Pavia, Pavia (Italy); Cavoto, G.; Piredda, G.; Renga, F.; Voena, C. [Univ. 'Sapienza', Rome (Italy); INFN Sezione di Roma, Rome (Italy); Chiarello, G.; Panareo, M.; Pepino, A. [INFN Sezione di Lecce, Lecce (Italy); Univ. del Salento, Dipartimento di Matematica e Fisica, Lecce (Italy); Chiri, C.; Grancagnolo, F.; Tassielli, G.F. [Univ. del Salento (Italy); INFN Sezione di Lecce, Lecce (Italy); De Gerone, M. [Genova Univ. (Italy); INFN Sezione di Genova, Genova (Italy); Fujii, Y.; Iwamoto, T.; Kaneko, D.; Mori, Toshinori; Nakaura, S.; Nishimura, M.; Ogawa, S.; Ootani, W.; Sawada, R.; Uchiyama, Y.; Yoshida, K. [University of Tokyo, ICEPP, Tokyo (Japan); Graziosi, A.; Ripiccini, E. [INFN Sezione di Roma, Rome (Italy); Univ. 'Sapienza', Dipartimento di Fisica, Rome (Italy); Grigoriev, D.N. [Budker Institute of Nuclear Physics of Siberian Branch of Russian Academy of Sciences, Novosibirsk (Russian Federation); Novosibirsk State Technical University, Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Haruyama, T.; Mihara, S.; Nishiguchi, H.; Yamamoto, A. [KEK, High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan); Ieki, K. [Paul Scherrer Institut PSI, Villigen (Switzerland); University of Tokyo, ICEPP, Tokyo (Japan); Ignatov, F.; Khazin, B.I.; Popov, A.; Yudin, Yu.V. [Budker Institute of Nuclear Physics of Siberian Branch of Russian Academy of Sciences, Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Kang, T.I.; Lim, G.M.A.; Molzon, W.; You, Z. [University of California, Irvine, CA (United States); Khomutov, N.; Korenchenko, A.; Kravchuk, N. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Venturini, M. [Pisa Univ. (Italy); INFN Sezione di Pisa, Pisa (Italy); Scuola Normale Superiore, Pisa (Italy); Collaboration: The MEG Collaboration

    2016-04-15

    The MEG experiment makes use of one of the world's most intense low energy muon beams, in order to search for the lepton flavour violating process μ{sup +} → e{sup +}γ. We determined the residual beam polarization at the thin stopping target, by measuring the asymmetry of the angular distribution of Michel decay positrons as a function of energy. The initial muon beam polarization at the production is predicted to be P{sub μ} = -1 by the Standard Model (SM) with massless neutrinos. We estimated our residual muon polarization to be P{sub μ} = -0.86 ± 0.02 (stat){sub -0.06}{sup +0.05} (syst) at the stopping target, which is consistent with the SM predictions when the depolarizing effects occurring during the muon production, propagation and moderation in the target are taken into account. The knowledge of beam polarization is of fundamental importance in order to model the background of our μ{sup +} → e{sup +}γ search induced by the muon radiative decay: μ{sup +} → e{sup +} anti ν{sub μ}ν{sub e}γ. (orig.)

  15. Prediction of microsegregation and pitting corrosion resistance of austenitic stainless steel welds by modelling

    Energy Technology Data Exchange (ETDEWEB)

    Vilpas, M. [VTT Manufacturing Technology, Espoo (Finland). Materials and Structural Integrity

    1999-07-01

    The present study focuses on the ability of several computer models to accurately predict the solidification, microsegregation and pitting corrosion resistance of austenitic stainless steel weld metals. Emphasis was given to modelling the effect of welding speed on solute redistribution and ultimately to the prediction of weld pitting corrosion resistance. Calculations were experimentally verified by applying autogenous GTA- and laser processes over the welding speed range of 0.1 to 5 m/min for several austenitic stainless steel grades. Analytical and computer aided models were applied and linked together for modelling the solidification behaviour of welds. The combined use of macroscopic and microscopic modelling is a unique feature of this work. This procedure made it possible to demonstrate the effect of weld pool shape and the resulting solidification parameters on microsegregation and pitting corrosion resistance. Microscopic models were also used separately to study the role of welding speed and solidification mode in the development of microsegregation and pitting corrosion resistance. These investigations demonstrate that the macroscopic model can be implemented to predict solidification parameters that agree well with experimentally measured values. The linked macro-micro modelling was also able to accurately predict segregation profiles and CPT-temperatures obtained from experiments. The macro-micro simulations clearly showed the major roles of weld composition and welding speed in determining segregation and pitting corrosion resistance while the effect of weld shape variations remained negligible. The microscopic dendrite tip and interdendritic models were applied to welds with good agreement with measured segregation profiles. Simulations predicted that weld inhomogeneity can be substantially decreased with increasing welding speed resulting in a corresponding improvement in the weld pitting corrosion resistance. In the case of primary austenitic

  16. Do implicit measures of attitudes incrementally predict snacking behaviour over explicit affect-related measures?

    Science.gov (United States)

    Ayres, Karen; Conner, Mark T; Prestwich, Andrew; Smith, Paul

    2012-06-01

    Various studies have demonstrated an association between implicit measures of attitudes and dietary-related behaviours. However, no study has tested whether implicit measures of attitudes predict dietary behaviour after controlling for explicit measures of palatability. In a prospective design, two studies assessed the validity of measures of implicit attitude (Implicit Association Test, IAT) and explicit measures of palatability and health-related attitudes on self-reported (Studies 1 and 2) and objective food (fruit vs. chocolate) choice (Study 2). Following regression analyses, in both studies, implicit measures of attitudes were correlated with food choice but failed to significantly predict food choice when controlling specifically for explicit measures of palatability. These consistent relationships emerged despite using different category labels within the IAT in the two studies. The current research suggests implicit measures of attitudes may not predict dietary behaviours after taking into account the palatability of food. This is important in order to establish determinants that explain unique variance in dietary behaviours and to inform dietary change interventions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. From Process Modeling to Elastic Property Prediction for Long-Fiber Injection-Molded Thermoplastics

    International Nuclear Information System (INIS)

    Nguyen, Ba Nghiep; Kunc, Vlastimil; Frame, Barbara J.; Phelps, Jay; Tucker III, Charles L.; Bapanapalli, Satish K.; Holbery, James D.; Smith, Mark T.

    2007-01-01

    This paper presents an experimental-modeling approach to predict the elastic properties of long-fiber injection-molded thermoplastics (LFTs). The approach accounts for fiber length and orientation distributions in LFTs. LFT samples were injection-molded for the study, and fiber length and orientation distributions were measured at different locations for use in the computation of the composite properties. The current fiber orientation model was assessed to determine its capability to predict fiber orientation in LFTs. Predicted fiber orientations for the studied LFT samples were also used in the calculation of the elastic properties of these samples, and the predicted overall moduli were then compared with the experimental results. The elastic property prediction was based on the Eshelby-Mori-Tanaka method combined with the orientation averaging technique. The predictions reasonably agree with the experimental LFT data

  18. Advancing viral RNA structure prediction: measuring the thermodynamics of pyrimidine-rich internal loops.

    Science.gov (United States)

    Phan, Andy; Mailey, Katherine; Saeki, Jessica; Gu, Xiaobo; Schroeder, Susan J

    2017-05-01

    Accurate thermodynamic parameters improve RNA structure predictions and thus accelerate understanding of RNA function and the identification of RNA drug binding sites. Many viral RNA structures, such as internal ribosome entry sites, have internal loops and bulges that are potential drug target sites. Current models used to predict internal loops are biased toward small, symmetric purine loops, and thus poorly predict asymmetric, pyrimidine-rich loops with >6 nucleotides (nt) that occur frequently in viral RNA. This article presents new thermodynamic data for 40 pyrimidine loops, many of which can form UU or protonated CC base pairs. Uracil and protonated cytosine base pairs stabilize asymmetric internal loops. Accurate prediction rules are presented that account for all thermodynamic measurements of RNA asymmetric internal loops. New loop initiation terms for loops with >6 nt are presented that do not follow previous assumptions that increasing asymmetry destabilizes loops. Since the last 2004 update, 126 new loops with asymmetry or sizes greater than 2 × 2 have been measured. These new measurements significantly deepen and diversify the thermodynamic database for RNA. These results will help better predict internal loops that are larger, pyrimidine-rich, and occur within viral structures such as internal ribosome entry sites. © 2017 Phan et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  19. Comprehensive and critical review of the predictive properties of the various mass models

    International Nuclear Information System (INIS)

    Haustein, P.E.

    1984-01-01

    Since the publication of the 1975 Mass Predictions approximately 300 new atomic masses have been reported. These data come from a variety of experimental studies using diverse techniques and they span a mass range from the lightest isotopes to the very heaviest. It is instructive to compare these data with the 1975 predictions and several others (Moeller and Nix, Monahan, Serduke, Uno and Yamada) which appeared later. Extensive numerical and graphical analyses have been performed to examine the quality of the mass predictions from the various models and to identify features in these models that require correction. In general, there is only a rough correlation between the ability of a particular model to reproduce the measured mass surface which had been used to refine its adjustable parameters and that model's ability to predict correctly the new masses. For some models, distinct systematic features appear when the new mass data are plotted as functions of relevant physical variables. Global intercomparisons of all the models are made first, followed by several examples of the types of analysis performed with individual mass models

  20. Evaluation of the DayCent model to predict carbon fluxes in French crop sites

    Science.gov (United States)

    Fujisaki, Kenji; Martin, Manuel P.; Zhang, Yao; Bernoux, Martial; Chapuis-Lardy, Lydie

    2017-04-01

    Croplands in temperate regions are an important component of the carbon balance and can act as a sink or a source of carbon, depending on pedoclimatic conditions and management practices. Therefore the evaluation of carbon fluxes in croplands by modelling approach is relevant in the context of global change. This study was part of the Comete-Global project funded by the multi-Partner call FACCE JPI. Carbon fluxes, net ecosystem exchange (NEE), leaf area index (LAI), biomass, and grain production were simulated at the site level in three French crop experiments from the CarboEurope project. Several crops were studied, like winter wheat, rapeseed, barley, maize, and sunflower. Daily NEE was measured with eddy covariance and could be partitioned between gross primary production (GPP) and total ecosystem respiration (TER). Measurements were compared to DayCent simulations, a process-based model predicting plant production and soil organic matter turnover at daily time step. We compared two versions of the model: the original one with a simplified plant module and a newer version that simulates LAI. Input data for modelling were soil properties, climate, and management practices. Simulations of grain yields and biomass production were acceptable when using optimized crop parameters. Simulation of NEE was also acceptable. GPP predictions were improved with the newer version of the model, eliminating temporal shifts that could be observed with the original model. TER was underestimated by the model. Predicted NEE was more sensitive to soil tillage and nitrogen applications than measured NEE. DayCent was therefore a relevant tool to predict carbon fluxes in French crops at the site level. The introduction of LAI in the model improved its performance.

  1. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.

  2. A predictive model for diagnosing stroke-related apraxia of speech.

    Science.gov (United States)

    Ballard, Kirrie J; Azizi, Lamiae; Duffy, Joseph R; McNeil, Malcolm R; Halaki, Mark; O'Dwyer, Nicholas; Layfield, Claire; Scholl, Dominique I; Vogel, Adam P; Robin, Donald A

    2016-01-29

    Diagnosis of the speech motor planning/programming disorder, apraxia of speech (AOS), has proven challenging, largely due to its common co-occurrence with the language-based impairment of aphasia. Currently, diagnosis is based on perceptually identifying and rating the severity of several speech features. It is not known whether all, or a subset of the features, are required for a positive diagnosis. The purpose of this study was to assess predictor variables for the presence of AOS after left-hemisphere stroke, with the goal of increasing diagnostic objectivity and efficiency. This population-based case-control study involved a sample of 72 cases, using the outcome measure of expert judgment on presence of AOS and including a large number of independently collected candidate predictors representing behavioral measures of linguistic, cognitive, nonspeech oral motor, and speech motor ability. We constructed a predictive model using multiple imputation to deal with missing data; the Least Absolute Shrinkage and Selection Operator (Lasso) technique for variable selection to define the most relevant predictors, and bootstrapping to check the model stability and quantify the optimism of the developed model. Two measures were sufficient to distinguish between participants with AOS plus aphasia and those with aphasia alone, (1) a measure of speech errors with words of increasing length and (2) a measure of relative vowel duration in three-syllable words with weak-strong stress pattern (e.g., banana, potato). The model has high discriminative ability to distinguish between cases with and without AOS (c-index=0.93) and good agreement between observed and predicted probabilities (calibration slope=0.94). Some caution is warranted, given the relatively small sample specific to left-hemisphere stroke, and the limitations of imputing missing data. These two speech measures are straightforward to collect and analyse, facilitating use in research and clinical settings. Copyright

  3. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    Directory of Open Access Journals (Sweden)

    Lei Jia

    Full Text Available The thermostability of protein point mutations is a common issue in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that consists of thousands of protein mutants' experimentally measured thermostability. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy change calculations from Rosetta, structural information of the point mutations, as well as amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, and K nearest neighbor) as well as partial least squares regression were used for building the prediction models. Binary and ternary classifications as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models.

  4. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (k(H):k(D)=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R(2)=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.

  5. Design of stabilizing output feedback nonlinear model predictive controllers with an application to DC-DC converters

    NARCIS (Netherlands)

    Roset, B.J.P.; Lazar, M.; Heemels, W.P.M.H.; Nijmeijer, H.

    2007-01-01

    This paper focuses on the synthesis of nonlinear Model Predictive Controllers that can guarantee robustness with respect to measurement noise. The input-to-state stability framework is employed to analyze the robustness of the resulting Model Predictive Control (MPC) closed-loop system. It

  6. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    pumps, heat tanks, electric vehicle battery charging/discharging, wind farms, power plants). 2. Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3. Develop optimization algorithms....... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices...... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...

  7. Petrophysical properties of greensand as predicted from NMR measurements

    DEFF Research Database (Denmark)

    Hossain, Zakir; Grattoni, Carlos A.; Solymar, Mikael

    2011-01-01

    ABSTRACT: Nuclear magnetic resonance (NMR) is a useful tool in reservoir evaluation. The objective of this study is to predict petrophysical properties from NMR T2 distributions. A series of laboratory experiments including core analysis, capillary pressure measurements, NMR T2 measurements...... with macro-pores. Permeability may be predicted from NMR by using Kozeny's equation when surface relaxivity is known. Capillary pressure drainage curves may be predicted from NMR T2 distribution when pore size distribution within a sample is homogeneous....

  8. Predicting in-patient falls in a geriatric clinic: a clinical study combining assessment data and simple sensory gait measurements.

    Science.gov (United States)

    Marschollek, M; Nemitz, G; Gietzelt, M; Wolf, K H; Meyer Zu Schwabedissen, H; Haux, R

    2009-08-01

    Falls are among the predominant causes for morbidity and mortality in elderly persons and occur most often in geriatric clinics. Despite several studies that have identified parameters associated with elderly patients' fall risk, prediction models -- e.g., based on geriatric assessment data -- are currently not used on a regular basis. Furthermore, technical aids to objectively assess mobility-associated parameters are currently not used. To assess group differences in clinical as well as common geriatric assessment data and sensory gait measurements between fallers and non-fallers in a geriatric sample, and to derive and compare two prediction models based on assessment data alone (model #1) and added sensory measurement data (model #2). For a sample of n=110 geriatric in-patients (81 women, 29 men) the following fall risk-associated assessments were performed: Timed 'Up & Go' (TUG) test, STRATIFY score and Barthel index. During the TUG test the subjects wore a triaxial accelerometer, and sensory gait parameters were extracted from the data recorded. Group differences between fallers (n=26) and non-fallers (n=84) were compared using Student's t-test. Two classification tree prediction models were computed and compared. Significant differences between the two groups were found for the following parameters: time to complete the TUG test, transfer item (Barthel), recent falls (STRATIFY), pelvic sway while walking and step length. Prediction model #1 (using common assessment data only) showed a sensitivity of 38.5% and a specificity of 97.6%, prediction model #2 (assessment data plus sensory gait parameters) performed with 57.7% and 100%, respectively. Significant differences between fallers and non-fallers among geriatric in-patients can be detected for several assessment subscores as well as parameters recorded by simple accelerometric measurements during a common mobility test. Existing geriatric assessment data may be used for falls prediction on a regular basis

  9. Measurement and ANN prediction of pH-dependent solubility of nitrogen-heterocyclic compounds.

    Science.gov (United States)

    Sun, Feifei; Yu, Qingni; Zhu, Jingke; Lei, Lecheng; Li, Zhongjian; Zhang, Xingwang

    2015-09-01

    Based on the solubility of 25 nitrogen-heterocyclic compounds (NHCs) measured by saturation shake-flask method, artificial neural network (ANN) was employed to the study of the quantitative relationship between the structure and pH-dependent solubility of NHCs. With genetic algorithm-multivariate linear regression (GA-MLR) approach, five out of the 1497 molecular descriptors computed by Dragon software were selected to describe the molecular structures of NHCs. Using the five selected molecular descriptors as well as pH and the partial charge on the nitrogen atom of NHCs (QN) as inputs of ANN, a quantitative structure-property relationship (QSPR) model without using Henderson-Hasselbalch (HH) equation was successfully developed to predict the aqueous solubility of NHCs in different pH water solutions. The prediction model performed well on the 25 model NHCs with an absolute average relative deviation (AARD) of 5.9%, while HH approach gave an AARD of 36.9% for the same model NHCs. It was found that QN played a very important role in the description of NHCs and, with QN, ANN became a potential tool for the prediction of pH-dependent solubility of NHCs. Copyright © 2015 Elsevier Ltd. All rights reserved.
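
    A minimal QSPR sketch in the same spirit (Python with scikit-learn, synthetic data; the seven inputs mimic the paper's five selected Dragon descriptors plus pH and QN, but the descriptor values and weights are invented, and the small MLP merely stands in for the paper's ANN):

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        n = 500
        X = rng.normal(size=(n, 7))    # 5 descriptors + pH + QN (all synthetic)
        w = np.array([0.5, -0.3, 0.2, 0.1, -0.4, 0.6, -0.8])
        log_S = X @ w + rng.normal(0.0, 0.2, n)   # synthetic log-solubility

        qspr = make_pipeline(StandardScaler(),
                             MLPRegressor(hidden_layer_sizes=(10,),
                                          max_iter=5000, random_state=0))
        qspr.fit(X[:400], log_S[:400])
        print("test R^2:", qspr.score(X[400:], log_S[400:]))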

  10. Prediction of multi-wake problems using an improved Jensen wake model

    DEFF Research Database (Denmark)

    Tian, Linlin; Zhu, Wei Jun; Shen, Wen Zhong

    2017-01-01

    The improved analytical wake model known as the 2D_k Jensen model (proposed to overcome some shortcomings of the classical Jensen wake model) is applied and validated in this work for wind turbine multi-wake predictions. Unlike the original Jensen model, the newly developed 2D_k Jensen model uses a cosine shape instead of a top-hat shape for the velocity deficit in the wake, and treats the wake decay rate as a variable related to both the ambient turbulence and the rotor-generated turbulence. Coupled with four different multi-wake combination models, the 2D_k Jensen model is assessed by (1) simulating the interaction of two wakes under full-wake and partial-wake conditions and (2) predicting the power production of the Horns Rev wind farm for different wake sectors around two different wind directions. Through comparisons with field measurements, results from Large Eddy
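
    To make the shape difference concrete, the sketch below (Python) evaluates the classical Jensen axial deficit and applies an illustrative cosine radial weighting of the kind the 2D_k model introduces; the actual 2D_k coefficients and its turbulence-dependent wake decay rate are defined in the paper, and the sum-of-squares combination shown is just one standard choice among the four combination models mentioned:

        import numpy as np

        def jensen_axial_deficit(x, D=80.0, Ct=0.8, k=0.075):
            # Classical Jensen (top-hat) fractional velocity deficit at distance x.
            return (1.0 - np.sqrt(1.0 - Ct)) * (D / (D + 2.0 * k * x)) ** 2

        def cosine_weight(r, x, D=80.0, k=0.075):
            # Illustrative cosine radial profile over the expanded wake radius.
            rw = D / 2.0 + k * x
            return np.where(np.abs(r) <= rw, 0.5 * (1.0 + np.cos(np.pi * r / rw)), 0.0)

        x, r = 400.0, np.linspace(-150.0, 150.0, 7)
        d1 = jensen_axial_deficit(x) * cosine_weight(r, x)                   # wake 1
        d2 = jensen_axial_deficit(x + 300.0) * cosine_weight(r, x + 300.0)   # wake 2
        print(np.sqrt(d1**2 + d2**2))   # sum-of-squares multi-wake combination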

  11. Modelling the behaviour of long-lived radionuclides in the Irish Sea - comparison of model predictions with field observations

    International Nuclear Information System (INIS)

    Kershaw, P.J.; Pentreath, R.J.; Gurbutt, P.A.; Woodhead, D.S.; Durance, J.A.; Camplin, W.C.

    1988-01-01

    A multi-compartmental box model of the Irish Sea has been developed to predict the distribution and radiological consequences of radionuclides discharged from the Sellafield reprocessing plant. The box structure was based on observations of radionuclide distributions in the sea bed and the water circulation was generated from extensive time-series data on 137 Cs concentrations in seawater. Measurements of naturally-occurring nuclides provided both data on the extent and rate of these processes and a means to validate the model assumptions. The model structure is briefly outlined, comparisons are made between model predictions and field observation, and some of the difficulties in making such comparisons are discussed. (author)

  12. SU-E-T-479: Development and Validation of Analytical Models Predicting Secondary Neutron Radiation in Proton Therapy Applications

    International Nuclear Information System (INIS)

    Farah, J; Bonfrate, A; Donadille, L; Martinetti, F; Trompier, F; Clairand, I; De Olivera, A; Delacroix, S; Herault, J; Piau, S; Vabre, I

    2014-01-01

    Purpose: Test and validation of analytical models predicting leakage neutron exposure in passively scattered proton therapy. Methods: Taking inspiration from the literature, this work attempts to build an analytical model predicting neutron ambient dose equivalents, H*(10), within the local 75 MeV ocular proton therapy facility. MC simulations were first used to model H*(10) in the beam axis plane while considering a closed final collimator and pristine Bragg peak delivery. Next, the MC-based analytical model was tested against simulation results and experimental measurements. The model was also extended in the vertical direction to enable a full 3D mapping of H*(10) inside the treatment room. Finally, the work focused on upgrading the literature model to clinically relevant configurations considering modulated beams, open collimators, patient-induced neutron fluctuations, etc. Results: The MC-based analytical model efficiently reproduced simulated H*(10) values with a maximum difference below 10%. In addition, it succeeded in predicting measured H*(10) values with differences <40%. The highest differences were registered at the closest and farthest positions from isocenter, where the analytical model failed to faithfully reproduce the high neutron fluence and energy variations. The differences remain, however, acceptable taking into account the high measurement/simulation uncertainties and the end use of this model, i.e. radiation protection. Moreover, the model was successfully (differences < 20% on simulations and < 45% on measurements) extended to predict neutrons in the vertical direction with respect to the beam line, as patients are in the upright seated position during ocular treatments. Accounting for the impact of beam modulation, collimation and the presence of a patient in the beam path is far more challenging, and conversion coefficients are currently being defined to predict stray neutrons in clinically representative treatment configurations. Conclusion

  13. Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data

    International Nuclear Information System (INIS)

    Somerville, R.C.J.; Iacobellis, S.F.

    2005-01-01

    Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional

  14. Prediction of Cognitive Performance and Subjective Sleepiness Using a Model of Arousal Dynamics.

    Science.gov (United States)

    Postnova, Svetlana; Lockley, Steven W; Robinson, Peter A

    2018-04-01

    A model of arousal dynamics is applied to predict objective performance and subjective sleepiness measures, including lapses and reaction time on a visual Performance Vigilance Test (vPVT), performance on a mathematical addition task (ADD), and the Karolinska Sleepiness Scale (KSS). The arousal dynamics model comprises a physiologically based flip-flop switch between the wake- and sleep-active neuronal populations and a dynamic circadian oscillator, thus allowing prediction of sleep propensity. Published group-level experimental constant routine (CR) and forced desynchrony (FD) data are used to calibrate the model to predict performance and sleepiness. Only studies conducted in dim light were used; these cover performance measures during CR and FD protocols, with sleep-wake cycles ranging from 20 to 42.85 h and a 2:1 wake-to-sleep ratio. New metrics relating model outputs to performance and sleepiness data are developed and tested against group average outcomes from 7 (vPVT lapses), 5 (ADD), and 8 (KSS) experimental protocols, showing good quantitative and qualitative agreement with the data (root mean squared error of 0.38, 0.19, and 0.35, respectively). The weights of the homeostatic and circadian effects are found to be different between the measures, with KSS having stronger homeostatic influence compared with the objective measures of performance.

  15. Measurement and Model Validation of Nanofluid Specific Heat Capacity with Differential Scanning Calorimetry

    Directory of Open Access Journals (Sweden)

    Harry O'Hanley

    2012-01-01

    Full Text Available Nanofluids are being considered for heat transfer applications; therefore it is important to know their thermophysical properties accurately. In this paper we focused on nanofluid specific heat capacity. Currently, there exist two models to predict a nanofluid specific heat capacity as a function of nanoparticle concentration and material. Model I is a straight volume-weighted average; Model II is based on the assumption of thermal equilibrium between the particles and the surrounding fluid. These two models give significantly different predictions for a given system. Using differential scanning calorimetry (DSC), a robust experimental methodology for measuring the heat capacity of fluids, the specific heat capacities of water-based silica, alumina, and copper oxide nanofluids were measured. Nanoparticle concentrations were varied between 5 wt% and 50 wt%. Test results were found to be in excellent agreement with Model II, while the predictions of Model I deviated very significantly from the data. Therefore, Model II is recommended for nanofluids.
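
    The two competing models are simple enough to state directly (Python; φ is the particle volume fraction, and the alumina/water property values in the example are round illustrative numbers, not measurements from the paper):

        def cp_model_I(phi, cp_p, cp_f):
            # Model I: straight volume-weighted average of the specific heats.
            return phi * cp_p + (1.0 - phi) * cp_f

        def cp_model_II(phi, cp_p, cp_f, rho_p, rho_f):
            # Model II: particle/fluid thermal equilibrium, i.e. a volume-weighted
            # average of the volumetric heat capacities rho*cp.
            num = phi * rho_p * cp_p + (1.0 - phi) * rho_f * cp_f
            den = phi * rho_p + (1.0 - phi) * rho_f
            return num / den

        # Alumina in water at 20 vol% (illustrative properties, J/(kg K) and kg/m^3).
        # The large gap between the two predictions illustrates why the models
        # must be discriminated experimentally.
        print(cp_model_I(0.2, 880.0, 4186.0))                    # ~3525 J/(kg K)
        print(cp_model_II(0.2, 880.0, 4186.0, 3970.0, 998.0))    # ~2538 J/(kg K)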

  16. Robust human body model injury prediction in simulated side impact crashes.

    Science.gov (United States)

    Golman, Adam J; Danelson, Kerry A; Stitzel, Joel D

    2016-01-01

    This study developed a parametric methodology to robustly predict occupant injuries sustained in real-world crashes using a finite element (FE) human body model (HBM). One hundred and twenty near-side impact motor vehicle crashes were simulated over a range of parameters using Toyota RAV4 (bullet vehicle) and Ford Taurus (struck vehicle) FE models and a validated HBM, the Total HUman Model for Safety (THUMS). Three bullet vehicle crash parameters (speed, location and angle) and two occupant parameters (seat position and age) were varied using a Latin hypercube design of experiments. Four injury metrics (head injury criterion, half deflection, thoracic trauma index and pelvic force) were used to calculate injury risk. Rib fracture prediction and lung strain metrics were also analysed. As hypothesized, bullet speed had the greatest effect on each injury measure. Injury risk was reduced when the bullet location was further from the B-pillar or when the bullet angle was more oblique. Age had a strong correlation to rib fracture frequency and lung strain severity. The injuries from a real-world crash were predicted using two different methods: (1) subsampling the injury predictors from the 12 simulations best matching the crush profile and (2) using regression models. Both injury prediction methods successfully predicted the case occupant's low risk for pelvic injury, high risk for thoracic injury, rib fractures and high lung strains with tight confidence intervals. This parametric methodology was successfully used to explore crash parameter interactions and to robustly predict real-world injuries.
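
    A Latin hypercube design over the abstract's five parameters can be generated in a few lines (Python with SciPy; the parameter ranges below are hypothetical placeholders, not the study's actual bounds):

        import numpy as np
        from scipy.stats import qmc

        # speed (km/h), impact location (m), impact angle (deg), seat position, age (yr)
        lower = [30.0, -0.3, 45.0, 0.0, 20.0]
        upper = [80.0, 0.3, 135.0, 1.0, 80.0]

        sampler = qmc.LatinHypercube(d=5, seed=0)
        design = qmc.scale(sampler.random(n=120), lower, upper)  # 120 simulations
        print(design[:3])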

  17. Straw combustion on slow-moving grates - a comparison of model predictions with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Kaer, S.K. [Aalborg Univ. (Denmark). Inst. of Energy Technology

    2005-03-01

    Combustion of straw in grate-based boilers is often associated with high emission levels and relatively poor fuel burnout. A numerical grate combustion model was developed to assist in improving the combustion performance of these boilers. The model is based on a one-dimensional "walking-column" approach and includes the energy equations for both the fuel and the gas, accounting for heat transfer between the two phases. The model gives important insight into the combustion process and provides inlet conditions for a computational fluid dynamics analysis of the freeboard. The model predictions indicate the existence of two distinct combustion modes. Combustion air temperature and mass flow-rate are the two parameters determining the mode. There is a significant difference in reaction rates (ignition velocity) and temperature levels between the two modes. Model predictions were compared to measurements in terms of ignition velocity and temperatures for five different combinations of air mass flow and temperature. In general, the degree of correspondence with the experimental data is favorable. The largest difference between measurements and predictions occurs when the combustion mode changes. The applicability to full-scale is demonstrated by predictions made for an existing straw-fired boiler located in Denmark. (author)

  18. Micromechanics-based damage model for failure prediction in cold forming

    Energy Technology Data Exchange (ETDEWEB)

    Lu, X.Z.; Chan, L.C., E-mail: lc.chan@polyu.edu.hk

    2017-04-06

    The purpose of this study was to develop a micromechanics-based damage (micro-damage) model that was concerned with the evolution of micro-voids for failure prediction in cold forming. Typical stainless steel SS316L was selected as the specimen material, and the nonlinear isotropic hardening rule was extended to describe the large deformation of the specimen undergoing cold forming. A micro-focus high-resolution X-ray computed tomography (CT) system was employed to trace and measure the micro-voids inside the specimen directly. Three-dimensional (3D) representative volume element (RVE) models with different sizes and spatial locations were reconstructed from the processed CT images of the specimen, and the average size and volume fraction of micro-voids (VFMV) for the specimen were determined via statistical analysis. Subsequently, the micro-damage model was compiled as a user-defined material subroutine into the finite element (FE) package ABAQUS. The stress-strain responses and damage evolutions of SS316L specimens under tensile and compressive deformations at different strain rates were predicted and further verified experimentally. It was concluded that the proposed micro-damage model is convincing for failure prediction in cold forming of the SS316L material.

  19. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    Science.gov (United States)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat of leading to severe floods in many countries worldwide. Therefore, advance prediction of their occurrence and spatial distribution is essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models. The models considered are the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centre for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, however with a bias in spatial distribution and intensity. The statistical parameters, namely mean error (ME) or bias, root mean square error (RMSE) and correlation coefficient (CC), have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts are under-predicted. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution from the displacement and pattern errors to the total RMSE is found to be larger in magnitude. The volume error increases from the 24 hr forecast to the 48 hr forecast in all three models.
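
    The verification statistics named above (ME/bias, RMSE, CC) have simple definitions; a minimal sketch follows, assuming `forecast` and `observed` are NumPy arrays of rainfall over the rainstorm region:

        import numpy as np

        def mean_error(forecast, observed):
            # Mean error, i.e. the bias of the forecast
            return np.mean(forecast - observed)

        def rmse(forecast, observed):
            # Root mean square error
            return np.sqrt(np.mean((forecast - observed) ** 2))

        def corr_coeff(forecast, observed):
            # Pearson correlation coefficient over all grid points
            return np.corrcoef(forecast.ravel(), observed.ravel())[0, 1]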

  20. CADASTER QSPR Models for Predictions of Melting and Boiling Points of Perfluorinated Chemicals.

    Science.gov (United States)

    Bhhatarai, Barun; Teetz, Wolfram; Liu, Tao; Öberg, Tomas; Jeliazkova, Nina; Kochev, Nikolay; Pukalov, Ognyan; Tetko, Igor V; Kovarich, Simona; Papa, Ester; Gramatica, Paola

    2011-03-14

    Quantitative structure property relationship (QSPR) studies of per- and polyfluorinated chemicals (PFCs) on melting point (MP) and boiling point (BP) are presented. The training and prediction chemicals used for developing and validating the models were selected from the Syracuse PhysProp database and the literature. The available experimental data sets were split in two different ways: (a) random selection on response value, and (b) structural similarity verified by self-organizing map (SOM), in order to propose reliable predictive models, developed only on the training sets and externally verified on the prediction sets. Individual models based on linear and non-linear approaches, developed by different CADASTER partners on 0D-2D Dragon descriptors, E-state descriptors and fragment-based descriptors, as well as a consensus model and their predictions, are presented. In addition, the predictive performance of the developed models was verified on a blind external validation set (EV-set) prepared using the PERFORCE database, with 15 MP and 25 BP data respectively. This database contains only long-chain perfluoro-alkylated chemicals, particularly monitored by regulatory agencies like US-EPA and EU-REACH. QSPR models with internal and external validation on two different external prediction/validation sets, together with a study of the applicability domain highlighting the robustness and high accuracy of the models, are discussed. Finally, MPs for an additional 303 PFCs and BPs for 271 PFCs, for which experimental measurements are unknown, were predicted. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Underwater Sound Propagation Modeling Methods for Predicting Marine Animal Exposure.

    Science.gov (United States)

    Hamm, Craig A; McCammon, Diana F; Taillefer, Martin L

    2016-01-01

    The offshore exploration and production (E&P) industry requires comprehensive and accurate ocean acoustic models for determining the exposure of marine life to the high levels of sound used in seismic surveys and other E&P activities. This paper reviews the types of acoustic models most useful for predicting the propagation of undersea noise sources and describes current exposure models. The severe problems caused by model sensitivity to the uncertainty in the environment are highlighted to support the conclusion that it is vital that risk assessments include transmission loss estimates with statistical measures of confidence.

  2. Predicting climate-induced range shifts: model differences and model reliability.

    Science.gov (United States)

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  3. A multiple model approach to respiratory motion prediction for real-time IGRT

    International Nuclear Information System (INIS)

    Putra, Devi; Haas, Olivier C L; Burnham, Keith J; Mills, John A

    2008-01-01

    Respiration induces significant movement of tumours in the vicinity of thoracic and abdominal structures. Real-time image-guided radiotherapy (IGRT) aims to adapt radiation delivery to tumour motion during irradiation. One of the main problems in achieving this objective is the time lag between the acquisition of the tumour position and the radiation delivery. Such time lag causes significant beam positioning errors and affects the dose coverage. A method to solve this problem is to employ an algorithm that is able to predict future tumour positions from available tumour position measurements. This paper presents a multiple model approach to respiratory-induced tumour motion prediction using the interacting multiple model (IMM) filter. A combination of two models, constant velocity (CV) and constant acceleration (CA), is used to capture respiratory-induced tumour motion. A Kalman filter is designed for each of the local models, and the IMM filter is applied to combine the predictions of these Kalman filters to obtain the predicted tumour position. The IMM filter, like the Kalman filter, is a recursive algorithm that is suitable for real-time applications. In addition, this paper proposes a confidence interval (CI) criterion to evaluate the performance of tumour motion prediction algorithms for IGRT. The proposed CI criterion provides a relevant measure of prediction performance in terms of clinical applications and can be used to specify the margin to accommodate prediction errors. The prediction performance of the IMM filter has been evaluated using 110 traces of 4-minute free-breathing motion collected from 24 lung-cancer patients. The simulation study was carried out for prediction times of 0.1-0.6 s with sampling rates of 3, 5 and 10 Hz. It was found that the prediction of the IMM filter was consistently better than the prediction of the Kalman filter with the CV or CA model. There was no significant difference of prediction errors for the
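
    A simplified sketch of the multiple-model idea follows: constant-velocity (CV) and constant-acceleration (CA) Kalman predictions of tumour position are blended with model probabilities. The full IMM mixing and likelihood update are omitted, and the matrices, states, and blending weights are illustrative assumptions:

        import numpy as np

        dt = 0.2  # 5 Hz sampling, one-step (0.2 s) prediction

        # CV model: state [position, velocity]
        F_cv = np.array([[1.0, dt],
                         [0.0, 1.0]])
        # CA model: state [position, velocity, acceleration]
        F_ca = np.array([[1.0, dt, 0.5 * dt**2],
                         [0.0, 1.0, dt],
                         [0.0, 0.0, 1.0]])

        x_cv = np.array([10.0, 2.0])        # current CV estimate (mm, mm/s)
        x_ca = np.array([10.0, 2.0, -0.5])  # current CA estimate

        pred_cv = (F_cv @ x_cv)[0]          # predicted position, CV model
        pred_ca = (F_ca @ x_ca)[0]          # predicted position, CA model

        mu = np.array([0.6, 0.4])           # assumed current model probabilities
        pred_imm = mu[0] * pred_cv + mu[1] * pred_ca
        print(pred_imm)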

  4. Assessment of Aircrew Radiation Exposure by further measurements and model development

    International Nuclear Information System (INIS)

    Lewis, B. J.; Desormeaux, M.; Green, A. R.; Bennett, L. G. I.; Butler, A.; McCall, M.; Saez Vergara, J. C.

    2004-01-01

    A methodology is presented for collecting and analysing exposure measurements from galactic cosmic radiation using a portable equipment suite and encapsulating these data into a semi-empirical model/Predictive Code for Aircrew Radiation Exposure (PCAIRE) for the assessment of aircrew radiation exposure on any flight over the solar cycle. The PCAIRE code has been validated against integral route dose measurements at commercial aircraft altitudes during experimental flights made by various research groups over the past 5 y with code predictions typically within ±20% of the measured data. An empirical correlation, based on ground-level neutron monitoring data, is detailed further for estimation of aircrew exposure from solar particle events. The semi-empirical models have been applied to predict the annual and career exposure of a flight crew member using actual flight roster data, accounting for contributions from galactic radiation and several solar energetic-particle events over the period 1973-2002. (authors)

  5. Transport assessment - arid: measurement and prediction of water movement below the root zone

    International Nuclear Information System (INIS)

    Gee, G.W.; Kirkham, R.R.

    1984-09-01

    The amount of water transported below the root-zone and available for drainage (recharge) must be known in order to quantify the potential for leaching at low-level waste sites. Under arid site conditions, we quantified drainage by using weighing lysimeters containing sandy soil and measured 6 and 11 cm of drainage for a 1-yr period (June 1983-May 1984) from grass-covered and bare-soil surfaces, respectively. Precipitation during this period at our test site near Richland, Washington, was 25 cm. Similar drainage values were estimated from neutron probe measurements of water content profile changes in an adjacent grass-covered site. These data suggest that significant amounts of drainage can occur at arid sites when soils are coarse textured and precipitation occurs during fall and winter months. Model simulations predicted drainage values comparable to those measured with our weighing lysimeters. Long-term, 500- to 1000-yr predictions of leaching are possible with our model simulations. However, additional studies are needed to evaluate the effect of soil variability and stochastic rainfall inputs on drainage estimates, particularly for arid sites. 15 references, 9 figures, 1 table

  6. Recent development of risk-prediction models for incident hypertension: An updated systematic review.

    Directory of Open Access Journals (Sweden)

    Dongdong Sun

    Full Text Available Hypertension is a leading global health threat and a major cardiovascular disease. Since clinical interventions are effective in delaying the disease progression from prehypertension to hypertension, diagnostic prediction models to identify patient populations at high risk for hypertension are imperative. Both PubMed and Embase databases were searched for eligible reports of either prediction models or risk scores of hypertension. The study data were collected, including risk factors, statistical methods, characteristics of study design and participants, performance measurement, etc. From the searched literature, 26 studies reporting 48 prediction models were selected. Among them, 20 reports studied models established using traditional risk factors, such as body mass index (BMI), age, smoking, blood pressure (BP) level, parental history of hypertension, and biochemical factors, whereas 6 reports used a genetic risk score (GRS) as the prediction factor. AUC ranged from 0.64 to 0.97, and the C-statistic ranged from 60% to 90%. The traditional models are still the predominant risk prediction models for hypertension, but recently, more models have begun to incorporate genetic factors as part of their predictors. However, these genetic predictors need to be well selected. The currently reported models have acceptable to good discrimination and calibration ability, but whether the models can be applied in clinical practice still needs more validation and adjustment.
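
    The discrimination measure quoted above (AUC, equivalently the C-statistic) can be computed for any fitted risk model; a hedged sketch with synthetic data follows, in which the predictors and outcome are placeholders rather than data from the reviewed studies:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))    # e.g. BMI, age, baseline BP
        y = (X @ np.array([0.8, 0.5, 0.9]) + rng.normal(size=500)) > 0

        model = LogisticRegression().fit(X, y)   # incident-hypertension model
        risk = model.predict_proba(X)[:, 1]      # predicted risk per subject
        print(roc_auc_score(y, risk))            # C-statistic of the model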

  7. Modeling and Prediction of Soil Water Vapor Sorption Isotherms

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per

    2015-01-01

    Soil water vapor sorption isotherms describe the relationship between water activity (aw) and moisture content along adsorption and desorption paths. The isotherms are important for modeling numerous soil processes and are also used to estimate several soil properties (specific surface area, clay content, ...). The objectives of this study were to (i) evaluate the fits of available empirical and theoretical isotherm models to measured data for a wide range of soils; and (ii) develop and test regression models for estimating the isotherms from clay content. Preliminary results show reasonable fits of the majority of the investigated empirical and theoretical models to the measured data, although some models were not capable of fitting both sorption directions accurately. Evaluation of the developed prediction equations showed good estimation of the sorption/desorption isotherms for the tested soils.

  8. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  9. Modeling and measurement of the motion of the DIII-D vacuum vessel during vertical instabilities

    International Nuclear Information System (INIS)

    Reis, E.; Blevins, R.D.; Jensen, T.H.; Luxon, J.L.; Petersen, P.I.; Strait, E.J.

    1991-11-01

    The motions of the DIII-D vacuum vessel during vertical instabilities of elongated plasmas have been measured and studied over the past five years. The currents flowing in the vessel wall and the plasma scrape-off layer were also measured and correlated to a physics model. These results provide a time-history load distribution on the vessel, which was input to a dynamic analysis for correlation with the measured motions. The structural model of the vessel, using the loads developed from the measured vessel currents, showed that the calculated displacement history correlated well with the measured values. The dynamic analysis provides a good estimate of the stresses and the maximum allowable deflection of the vessel. In addition, the vessel motions produce acoustic emissions at 21 Hz that are sufficiently loud to be felt as well as heard by the DIII-D operators. Time-history measurements of the sounds were correlated to the vessel displacements. An analytical model of an oscillating sphere provided a reasonable correlation to the amplitude of the measured sounds. The correlation of the theoretical and measured vessel currents, the dynamic measurements and analysis, and the acoustic measurements and analysis show that: (1) the physics model can predict vessel forces for selected values of plasma resistivity, and also predicts poloidal and toroidal wall currents which agree with measured values; (2) the force-time history from the above model, used in conjunction with an axisymmetric structural model of the vessel, predicts vessel motions which agree well with measured values; (3) the above results, input to a simple acoustic model, predict the magnitude of sounds emitted from the vessel during disruptions, in agreement with acoustic measurements; (4) correlation of measured vessel motions with structural analysis shows that a maximum vertical motion of the vessel up to 0.24 in. will not overstress the vessel or its supports. 11 refs., 10 figs., 1 tab

  10. Recurrent and Dynamic Models for Predicting Streaming Video Quality of Experience.

    Science.gov (United States)

    Bampis, Christos G; Li, Zhi; Katsavounidis, Ioannis; Bovik, Alan C

    2018-07-01

    Streaming video services represent a very large fraction of global bandwidth consumption. Due to the exploding demands of mobile video streaming services, coupled with limited bandwidth availability, video streams are often transmitted through unreliable, low-bandwidth networks. This unavoidably leads to two types of major streaming-related impairments: compression artifacts and/or rebuffering events. In streaming video applications, the end-user is a human observer; hence being able to predict the subjective Quality of Experience (QoE) associated with streamed videos could lead to the creation of perceptually optimized resource allocation strategies driving higher quality video streaming services. We propose a variety of recurrent dynamic neural networks that conduct continuous-time subjective QoE prediction. By formulating the problem as one of time-series forecasting, we train a variety of recurrent neural networks and non-linear autoregressive models to predict QoE using several recently developed subjective QoE databases. These models combine multiple, diverse neural network inputs, such as predicted video quality scores, rebuffering measurements, and data related to memory and its effects on human behavioral responses, using them to predict QoE on video streams impaired by both compression artifacts and rebuffering events. Instead of finding a single time-series prediction model, we propose and evaluate ways of aggregating different models into a forecasting ensemble that delivers improved results with reduced forecasting variance. We also deploy appropriate new evaluation metrics for comparing time-series predictions in streaming applications. Our experimental results demonstrate improved prediction performance that approaches human performance. An implementation of this work can be found at https://github.com/christosbampis/NARX_QoE_release.
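
    The forecasting-ensemble idea above can be illustrated with a simple average of per-model continuous QoE predictions; the model outputs below are synthetic placeholders, not NARX/RNN predictions from the paper:

        import numpy as np

        preds = np.array([
            [72.0, 70.5, 68.0, 66.2],   # model A: continuous QoE trajectory
            [70.0, 69.0, 67.5, 65.0],   # model B
            [74.0, 71.0, 69.0, 67.0],   # model C
        ])
        ensemble = preds.mean(axis=0)   # aggregated forecast
        spread = preds.std(axis=0)      # disagreement between models
        print(ensemble, spread)         # averaging reduces forecasting variance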

  11. Model predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

    This paper proposes a Model Predictive Controller (MPC) for control of a P2AT mobile robot. MPC refers to a group of controllers that employ a distinctly identifiable model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then treated as a linear quadratic regulator (LQR) problem and solved by making use of the Riccati equation. To show the effectiveness of the proposed method this controller is...
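
    A hedged sketch of the LQR step described above: a discrete-time algebraic Riccati equation yields a state-feedback gain that is then applied in a receding-horizon fashion. The double-integrator dynamics and weights are illustrative assumptions, not the P2AT model from the paper:

        import numpy as np
        from scipy.linalg import solve_discrete_are

        dt = 0.1
        A = np.array([[1.0, dt],
                      [0.0, 1.0]])       # simple kinematic model per axis
        B = np.array([[0.5 * dt**2],
                      [dt]])
        Q = np.diag([10.0, 1.0])         # state weighting
        R = np.array([[0.1]])            # input weighting

        P = solve_discrete_are(A, B, Q, R)
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal gain

        x = np.array([1.0, 0.0])         # initial tracking error
        for _ in range(3):               # receding-horizon style rollout
            u = -K @ x
            x = A @ x + B @ u
            print(x)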

  12. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  13. Comparison of Physician-Predicted to Measured Low Vision Outcomes

    Science.gov (United States)

    Chan, Tiffany L.; Goldstein, Judith E.; Massof, Robert W.

    2013-01-01

    Purpose To compare low vision rehabilitation (LVR) physicians’ predictions of the probability of success of LVR to patients’ self-reported outcomes after provision of usual outpatient LVR services; and to determine if patients’ traits influence physician ratings. Methods The Activity Inventory (AI), a self-report visual function questionnaire, was administered pre and post-LVR to 316 low vision patients served by 28 LVR centers that participated in a collaborative observational study. The physical component of the Short Form-36, Geriatric Depression Scale, and Telephone Interview for Cognitive Status were also administered pre-LVR to measure physical capability, depression and cognitive status. Following patient evaluation, 38 LVR physicians estimated the probability of outcome success (POS), using their own criteria. The POS ratings and change in functional ability were used to assess the effects of patients’ baseline traits on predicted outcomes. Results A regression analysis with a hierarchical random effects model showed no relationship between LVR physician POS estimates and AI-based outcomes. In another analysis, Kappa statistics were calculated to determine the probability of agreement between POS and AI-based outcomes for different outcome criteria. Across all comparisons, none of the kappa values were significantly different from 0, which indicates the rate of agreement is equivalent to chance. In an exploratory analysis, hierarchical mixed effects regression models show that POS ratings are associated with information about the patient’s cognitive functioning and the combination of visual acuity and functional ability, as opposed to visual acuity or functional ability alone. Conclusions Physicians’ predictions of LVR outcomes appear to be influenced by knowledge of patients’ cognitive functioning and the combination of visual acuity and functional ability - information physicians acquire from the patient’s history and examination. However

  14. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behaviour, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approaches and the development and validation process of such models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  15. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
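
    The degree-day mechanism described above is easy to state in code. A minimal sketch follows; the base temperature and accumulation target are placeholders, not calibrated cranberry fruitworm parameters:

        def degree_days(daily_min, daily_max, base_temp):
            # Simple averaging method for one day's degree-day contribution
            return max(0.0, (daily_min + daily_max) / 2.0 - base_temp)

        def predict_event_day(temps, base_temp=10.0, target_dd=150.0):
            # Return the index of the day accumulated degree-days reach target
            total = 0.0
            for day, (tmin, tmax) in enumerate(temps):
                total += degree_days(tmin, tmax, base_temp)
                if total >= target_dd:
                    return day
            return None

        # Synthetic warming season of (min, max) daily temperatures
        temps = [(8.0 + 0.1 * d, 18.0 + 0.1 * d) for d in range(120)]
        print(predict_event_day(temps))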

  16. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  17. Predicting Student Grade Point Average at a Community College from Scholastic Aptitude Tests and from Measures Representing Three Constructs in Vroom's Expectancy Theory Model of Motivation.

    Science.gov (United States)

    Malloch, Douglas C.; Michael, William B.

    1981-01-01

    This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectance theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…

  18. Robust Model Predictive Control of a Nonlinear System with Known Scheduling Variable and Uncertain Gain

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    Robust model predictive control (RMPC) of a class of nonlinear systems is considered in this paper. We use a Linear Parameter Varying (LPV) model of the nonlinear system. By taking advantage of having future values of the scheduling variable, we simplify the state prediction. Because of the special structure of the problem, uncertainty is present only in the B matrix (gain) of the state-space model. Taking advantage of this structure, we formulate a tractable minimax optimization problem to solve the robust model predictive control problem. A wind turbine is chosen as the case study, and we choose wind speed as the scheduling variable. Wind speed is measurable ahead of the turbine; therefore the scheduling variable is known for the entire prediction horizon.

  19. Interpreting expression data with metabolic flux models: predicting Mycobacterium tuberculosis mycolic acid production.

    Directory of Open Access Journals (Sweden)

    Caroline Colijn

    2009-08-01

    Full Text Available Metabolism is central to cell physiology, and metabolic disturbances play a role in numerous disease states. Despite its importance, the ability to study metabolism at a global scale using genomic technologies is limited. In principle, complete genome sequences describe the range of metabolic reactions that are possible for an organism, but cannot quantitatively describe the behaviour of these reactions. We present a novel method for modeling metabolic states using whole cell measurements of gene expression. Our method, which we call E-Flux (a combination of flux and expression), extends the technique of Flux Balance Analysis by modeling maximum flux constraints as a function of measured gene expression. In contrast to previous methods for metabolically interpreting gene expression data, E-Flux utilizes a model of the underlying metabolic network to directly predict changes in metabolic flux capacity. We applied E-Flux to Mycobacterium tuberculosis, the bacterium that causes tuberculosis (TB). Key components of mycobacterial cell walls are mycolic acids, which are targets for several first-line TB drugs. We used E-Flux to predict the impact of 75 different drugs, drug combinations, and nutrient conditions on mycolic acid biosynthesis capacity in M. tuberculosis, using a public compendium of over 400 expression arrays. We tested our method using a model of mycolic acid biosynthesis as well as on a genome-scale model of M. tuberculosis metabolism. Our method correctly predicts seven of the eight known fatty acid inhibitors in this compendium and makes accurate predictions regarding the specificity of these compounds for fatty acid biosynthesis. Our method also predicts a number of additional potential modulators of TB mycolic acid biosynthesis. E-Flux thus provides a promising new approach for algorithmically predicting metabolic state from gene expression data.
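
    A minimal sketch of the E-Flux idea on a toy network: a flux balance problem is solved by linear programming, with the upper bound of each reaction scaled by a normalized expression level of its enzyme. The 3-reaction network and all numbers are invented for illustration:

        import numpy as np
        from scipy.optimize import linprog

        # Reactions: R1 uptake -> A, R2 A -> B, R3 B -> biomass (objective)
        S = np.array([[1, -1, 0],     # metabolite A balance
                      [0, 1, -1]])    # metabolite B balance
        expression = np.array([1.0, 0.4, 1.0])   # relative enzyme expression
        v_max = 10.0 * expression                # E-Flux: expression caps flux

        res = linprog(c=[0, 0, -1],              # maximize v3 (minimize -v3)
                      A_eq=S, b_eq=[0, 0],
                      bounds=[(0, ub) for ub in v_max])
        print(res.x)   # optimal fluxes; v3 is limited to 4.0 by low R2 expression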

  20. Developing and validating a model to predict the success of an IHCS implementation: the Readiness for Implementation Model

    Science.gov (United States)

    Gustafson, David H; Hawkins, Robert P; Brennan, Patricia F; Dinauer, Susan; Johnson, Pauley R; Siegler, Tracy

    2010-01-01

    Objective To develop and validate the Readiness for Implementation Model (RIM). This model predicts a healthcare organization's potential for success in implementing an interactive health communication system (IHCS). The model consists of seven weighted factors, with each factor containing five to seven elements. Design Two decision-analytic approaches, self-explicated and conjoint analysis, were used to measure the weights of the RIM with a sample of 410 experts. The RIM model with weights was then validated in a prospective study of 25 IHCS implementation cases. Measurements Orthogonal main effects design was used to develop 700 conjoint-analysis profiles, which varied on seven factors. Each of the 410 experts rated the importance and desirability of the factors and their levels, as well as a set of 10 different profiles. For the prospective 25-case validation, three time-repeated measures of the RIM scores were collected for comparison with the implementation outcomes. Results Two of the seven factors, ‘organizational motivation’ and ‘meeting user needs,’ were found to be most important in predicting implementation readiness. No statistically significant difference was found in the predictive validity of the two approaches (self-explicated and conjoint analysis). The RIM was a better predictor for the 1-year implementation outcome than the half-year outcome. Limitations The expert sample, the order of the survey tasks, the additive model, and basing the RIM cut-off score on experience are possible limitations of the study. Conclusion The RIM needs to be empirically evaluated in institutions adopting IHCS and sustaining the system in the long term. PMID:20962135

  1. A Trap Motion in Validating Muscle Activity Prediction from Musculoskeletal Model using EMG

    NARCIS (Netherlands)

    Wibawa, A. D.; Verdonschot, N.; Halbertsma, J.P.K.; Burgerhof, J.G.M.; Diercks, R.L.; Verkerke, G. J.

    2016-01-01

    Musculoskeletal modeling nowadays is becoming the most common tool for studying and analyzing human motion. Besides its potential in predicting muscle activity and muscle force during active motion, musculoskeletal modeling can also calculate many important kinetic data that are difficult to measure

  2. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas a Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To validate the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the performance of the proposed BWPHMs is better compared with the Cox Proportional Hazard Model (Cox-PHM), owing to the use of a Weibull distribution for the baseline hazard function and the consideration of model uncertainties. - Highlights: • Prioritize rehabilitation and replacement (R/R) strategies for water mains. • Consider the uncertainties for the failure prediction. • Improve the prediction capability of the water main failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure
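
    For reference, the Weibull proportional hazard form used above is h(t|x) = h0(t) * exp(beta . x) with a Weibull baseline h0. A sketch follows; the shape, scale, covariates, and coefficients are illustrative assumptions, not values fitted to the Calgary network:

        import math

        def weibull_hazard(t, shape, scale):
            # Weibull baseline hazard h0(t)
            return (shape / scale) * (t / scale) ** (shape - 1.0)

        def ph_hazard(t, x, beta, shape=1.5, scale=40.0):
            # Proportional hazard: baseline scaled by exp(beta . x)
            lin = sum(b * xi for b, xi in zip(beta, x))
            return weibull_hazard(t, shape, scale) * math.exp(lin)

        # Pipe age 30 yr; hypothetical covariates: normalized diameter,
        # soil corrosivity index
        print(ph_hazard(30.0, x=[0.2, 0.8], beta=[-0.3, 0.9]))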

  3. Modelling the cutting edge radius size effect for force prediction in micro milling

    DEFF Research Database (Denmark)

    Bissacco, Giuliano; Hansen, Hans Nørgaard; Jan, Slunsky

    2008-01-01

    This paper presents a theoretical model for cutting force prediction in micro milling, taking into account the cutting edge radius size effect, the tool run-out and the deviation of the chip flow angle from the inclination angle. A parameterization according to the ratio of uncut chip thickness to cutting edge radius is used for the parameters involved in the force calculation. The model was verified by means of cutting force measurements in micro milling. The results show good agreement between predicted and measured forces. It is also demonstrated that the use of Stabler's rule is a reasonable approximation and that micro end mill run-out is effectively compensated by the deflections induced by the cutting forces.

  4. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

    Directory of Open Access Journals (Sweden)

    Minh Vu Trieu

    2017-03-01

    Full Text Available This paper presents statistical analyses of rock engineering properties and the measured penetration rate of a tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of TBM. Finally a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict the TBM performance. The R-squared value (R2) of the fuzzy logic model scores the highest value of 0.714 over the second runner-up of 0.667 from the multiple variables nonlinear regression model.

  5. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

    Science.gov (United States)

    Minh, Vu Trieu; Katushin, Dmitri; Antonov, Maksim; Veinthal, Renno

    2017-03-01

    This paper presents statistical analyses of rock engineering properties and the measured penetration rate of tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of TBM. Finally a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict the TBM performance. The R-squared value (R2) of the fuzzy logic model scores the highest value of 0.714 over the second runner-up of 0.667 from the multiple variables nonlinear regression model.
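
    The multiple-variable regression step in the two records above can be sketched as an ordinary least-squares fit with an R-squared check; the data below are synthetic placeholders, and no coefficients from the paper are reproduced:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 80
        X = np.column_stack([
            rng.uniform(40, 200, n),    # UCS (MPa)
            rng.uniform(5, 20, n),      # BTS (MPa)
            rng.uniform(10, 40, n),     # BI
            rng.uniform(0.05, 2.0, n),  # DPW (m)
            rng.uniform(0, 90, n),      # Alpha (deg)
        ])
        # Synthetic ROP with noise (illustrative relationship only)
        rop = 4.0 - 0.01 * X[:, 0] + 0.02 * X[:, 4] + rng.normal(0, 0.2, n)

        A = np.column_stack([np.ones(n), X])            # add intercept
        coef, *_ = np.linalg.lstsq(A, rop, rcond=None)  # least-squares fit

        pred = A @ coef
        ss_res = np.sum((rop - pred) ** 2)
        ss_tot = np.sum((rop - rop.mean()) ** 2)
        print("R2 =", 1.0 - ss_res / ss_tot)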

  6. Glycated Hemoglobin Measurement and Prediction of Cardiovascular Disease

    DEFF Research Database (Denmark)

    Di Angelantonio, Emanuele; Gao, Pei; Khan, Hassan

    2014-01-01

    IMPORTANCE: The value of measuring levels of glycated hemoglobin (HbA1c) for the prediction of first cardiovascular events is uncertain. OBJECTIVE: To determine whether adding information on HbA1c values to conventional cardiovascular risk factors is associated with improvement in prediction of cardiovascular disease risk...

  7. new model for solar radiation estimation from measured air

    African Journals Online (AJOL)


    ... root mean square error (RMSE) and correlation ... countries due to the unavailability of measured data [3-5] ... models were used to predict solar radiation in Nigeria [12-15] ...

  8. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool
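
    The liquid drop model named above has a standard closed form (the semi-empirical mass formula); a sketch with one common coefficient set follows. Coefficients (in MeV) vary between fits and are given here only for illustration:

        def binding_energy(Z, A):
            # Semi-empirical (liquid drop) binding energy in MeV
            a_v, a_s, a_c, a_a, a_p = 15.75, 17.8, 0.711, 23.7, 11.18
            N = A - Z
            pairing = 0.0
            if Z % 2 == 0 and N % 2 == 0:
                pairing = +a_p / A**0.5      # even-even nuclei
            elif Z % 2 == 1 and N % 2 == 1:
                pairing = -a_p / A**0.5      # odd-odd nuclei
            return (a_v * A - a_s * A**(2 / 3)
                    - a_c * Z * (Z - 1) / A**(1 / 3)
                    - a_a * (A - 2 * Z)**2 / A + pairing)

        print(binding_energy(82, 208) / 208)   # ~7.9 MeV/nucleon for 208Pb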

  9. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  10. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. According to the fact that there are lots of settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called the NGM (1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM (1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM (1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
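
    The NGM (1,1,k,c) model extends the classic grey GM(1,1) forecaster to nonhomogeneous index sequences. As background, a minimal GM(1,1) sketch on a synthetic settlement-like series is given below; the NGM variant itself is not reproduced here:

        import numpy as np

        def gm11_fit_predict(x0, horizon):
            x1 = np.cumsum(x0)                              # accumulated series
            z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
            B = np.column_stack([-z1, np.ones(len(z1))])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey parameters
            k = np.arange(len(x0) + horizon)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response
            return np.diff(x1_hat, prepend=0.0)             # restore increments

        x0 = np.array([2.0, 2.6, 3.1, 3.7, 4.2])   # synthetic settlements (mm)
        print(gm11_fit_predict(x0, horizon=3))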

  11. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    This paper presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time). Five technical and economic aspects are taken into account to schedule tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality...
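
    A toy MIP in the spirit of the tamping-scheduling model above: binary variables decide whether to tamp in each period, machine cost is minimized, and a simple rule ("tamp at least once in any two consecutive periods") stands in for the paper's five technical and economic aspects. All numbers are invented:

        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        cost = np.array([3.0, 2.0, 4.0, 2.5])   # tamping cost per period
        # Require x[t] + x[t+1] >= 1 for each pair of consecutive periods
        A = np.array([[1, 1, 0, 0],
                      [0, 1, 1, 0],
                      [0, 0, 1, 1]])
        cons = LinearConstraint(A, lb=1, ub=np.inf)

        res = milp(c=cost, constraints=cons,
                   integrality=np.ones(4),      # all variables integer (binary)
                   bounds=Bounds(0, 1))
        print(res.x, res.fun)                   # optimal schedule and cost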

  12. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  13. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, and this depended on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  14. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting-static or dynamic. Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  15. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision... the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually... and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.

  16. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  17. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Full Text Available Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  18. Predicting Bacteria Removal by Enhanced Stormwater Control Measures (SCMs) at the Watershed Scale

    Science.gov (United States)

    Wolfand, J.; Bell, C. D.; Boehm, A. B.; Hogue, T. S.; Luthy, R. G.

    2017-12-01

    Urban stormwater is a major cause of water quality impairment, resulting in surface waters that fail to meet water quality standards and support their designated uses. Fecal indicator bacteria are present in high concentrations in stormwater and are strictly regulated in receiving waters; yet, their fate and transport in urban stormwater is poorly understood. Stormwater control measures (SCMs) are often used to treat, infiltrate, and release urban runoff, but field measurements show that the removal of bacteria by these structural solutions is limited (median log removal = 0.24, n = 370). Researchers have therefore looked to improve bacterial removal by enhancing SCMs through alterations in flow regimes or adding geomedia such as biochar. The present research seeks to develop a model to predict removal of fecal indicator bacteria by enhanced SCMs at the watershed scale in a semi-arid climate. Using the highly developed Ballona Creek watershed (290 km2) located in Los Angeles County as a case study, a hydrologic model is coupled with a stochastic water quality model to predict E. coli concentration near the outfall of the Ballona Creek, Santa Monica Bay. A hydrologic model was developed using EPA SWMM, calibrated for flow from water year 1998-2006 (NSE = 0.94; R2 = 0.94), and validated from water year 2007-2015 (NSE = 0.90; R2 = 0.93). This bacterial loading model was then linked to EPA SUSTAIN and a SCM bacterial removal script to simulate log removal of bacteria by various SCMs and predict bacterial concentrations in Ballona Creek. Preliminary results suggest small enhancements to SCMs that improve bacterial removal (<0.5 log removal) may offer large benefits to surface water quality and enable communities such as Los Angeles to meet their regulatory requirements.
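
    The log removal figure cited above is simply the base-10 logarithm of the influent-to-effluent concentration ratio; a one-line helper (with assumed names) suffices:

        import math

        def log_removal(c_in, c_out):
            # Log10 reduction of bacteria concentration across an SCM
            return math.log10(c_in / c_out)

        # Illustrative concentrations giving roughly the 0.24 median cited above
        print(log_removal(10_000, 5_750))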

  19. Bayesian uncertainty assessment of flood predictions in ungauged urban basins for conceptual rainfall-runoff models

    Directory of Open Access Journals (Sweden)

    A. E. Sikorska

    2012-04-01

    Full Text Available Urbanization and the resulting land-use change strongly affect the water cycle and runoff-processes in watersheds. Unfortunately, small urban watersheds, which are most affected by urban sprawl, are mostly ungauged. This makes it intrinsically difficult to assess the consequences of urbanization. Most of all, it is unclear how to reliably assess the predictive uncertainty given the structural deficits of the applied models. In this study, we therefore investigate the uncertainty of flood predictions in ungauged urban basins from structurally uncertain rainfall-runoff models. To this end, we suggest a procedure to explicitly account for input uncertainty and model structure deficits using Bayesian statistics with a continuous-time autoregressive error model. In addition, we propose a concise procedure to derive prior parameter distributions from base data and successfully apply the methodology to an urban catchment in Warsaw, Poland. Based on our results, we are able to demonstrate that the autoregressive error model greatly helps to meet the statistical assumptions and to compute reliable prediction intervals. In our study, we found that predicted peak flows were up to 7 times higher than observations. This was reduced to 5 times with Bayesian updating, using only few discharge measurements. In addition, our analysis suggests that imprecise rainfall information and model structure deficits contribute mostly to the total prediction uncertainty. In the future, flood predictions in ungauged basins will become more important due to ongoing urbanization as well as anthropogenic and climatic changes. Thus, providing reliable measures of uncertainty is crucial to support decision making.
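
    A sketch of the autoregressive error idea mentioned above: a continuous-time AR(1) error observed at discrete times decays as exp(-dt/tau), so residuals can be "whitened" before treating them as independent. The correlation time tau and the residual series below are illustrative assumptions:

        import numpy as np

        def whiten_residuals(residuals, dt, tau):
            # Remove the lag-one autocorrelation implied by a continuous-time
            # AR(1) error model with correlation time tau
            phi = np.exp(-dt / tau)
            return residuals[1:] - phi * residuals[:-1]

        res = np.array([0.5, 0.42, 0.30, 0.31, 0.18])   # model-minus-observed flows
        print(whiten_residuals(res, dt=1.0, tau=3.0))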

  20. Global Atmosphere Watch Workshop on Measurement-Model ...

    Science.gov (United States)

    The World Meteorological Organization’s (WMO) Global Atmosphere Watch (GAW) Programme coordinates high-quality observations of atmospheric composition from global to local scales with the aim to drive high-quality and high-impact science while co-producing a new generation of products and services. In line with this vision, GAW’s Scientific Advisory Group for Total Atmospheric Deposition (SAG-TAD) has a mandate to produce global maps of wet, dry and total atmospheric deposition for important atmospheric chemicals to enable research into biogeochemical cycles and assessments of ecosystem and human health effects. The most suitable scientific approach for this activity is the emerging technique of measurement-model fusion for total atmospheric deposition. This technique requires global-scale measurements of atmospheric trace gases, particles, precipitation composition and precipitation depth, as well as predictions of the same from global/regional chemical transport models. The fusion of measurement and model results requires data assimilation and mapping techniques. The objective of the GAW Workshop on Measurement-Model Fusion for Global Total Atmospheric Deposition (MMF-GTAD), an initiative of the SAG-TAD, was to review the state-of-the-science and explore the feasibility and methodology of producing, on a routine retrospective basis, global maps of atmospheric gas and aerosol concentrations as well as wet, dry and total deposition via measurement-model

  1. Arid site water balance: evapotranspiration modeling and measurements

    International Nuclear Information System (INIS)

    Gee, G.W.; Kirkham, R.R.

    1984-09-01

    In order to evaluate the magnitude of radionuclide transport at an arid site, a field and modeling study was conducted to measure and predict water movement under vegetated and bare soil conditions. Significant quantities of water were found to move below the root zone of a shallow-rooted grass-covered area during wet years at the Hanford site. The unsaturated water flow model, UNSAT-1D, was reasonably successful in simulating the transient behavior of the water balance at this site. The effects of layered soils on water balance were demonstrated using the model. Models used to evaluate water balance in arid regions should not rely on annual averages and assume that all precipitation is removed by evapotranspiration. The potential for drainage at arid sites exists under conditions where shallow-rooted plants grow on coarse-textured soils. This condition was observed at our study site at Hanford. Neutron probe data collected on a cheatgrass community at the Hanford site during a wet year indicated that over 5 cm of water drained below the 3.5-m depth. The unsaturated water flow model, UNSAT-1D, predicted water drainage of about 5 cm (single layer, 10 months) and 3.5 cm (two layers, 12 months) for the same time period. Additional field measurements of hydraulic conductivity will likely improve the drainage estimate made by UNSAT-1D. Additional information describing cheatgrass growth and water use at the grass site could improve model predictions of sink terms and subsequent calculations of water storage within the rooting zone. In arid areas where the major part of the annual precipitation occurs during months with low average potential evapotranspiration and where soils are vegetated but are coarse textured and well drained, significant drainage can occur. 31 references, 18 figures, 1 table

  2. Habitat features and predictive habitat modeling for the Colorado chipmunk in southern New Mexico

    Science.gov (United States)

    Rivieccio, M.; Thompson, B.C.; Gould, W.R.; Boykin, K.G.

    2003-01-01

    Two subspecies of Colorado chipmunk (state threatened and federal species of concern) occur in southern New Mexico: Tamias quadrivittatus australis in the Organ Mountains and T. q. oscuraensis in the Oscura Mountains. We developed a GIS model of potentially suitable habitat based on vegetation and elevation features, evaluated site classifications of the GIS model, and determined vegetation and terrain features associated with chipmunk occurrence. We compared GIS model classifications with actual vegetation and elevation features measured at 37 sites. At 60 sites we measured 18 habitat variables regarding slope, aspect, tree species, shrub species, and ground cover. We used logistic regression to analyze habitat variables associated with chipmunk presence/absence. All (100%) 37 sample sites (28 predicted suitable, 9 predicted unsuitable) were classified correctly by the GIS model regarding elevation and vegetation. For 28 sites predicted suitable by the GIS model, 18 sites (64%) appeared visually suitable based on habitat variables selected from logistic regression analyses, of which 10 sites (36%) were specifically predicted as suitable habitat via logistic regression. We detected chipmunks at 70% of sites deemed suitable via the logistic regression models. Shrub cover, tree density, plant proximity, presence of logs, and presence of rock outcrop were retained in the logistic model for the Oscura Mountains; litter, shrub cover, and grass cover were retained in the logistic model for the Organ Mountains. Evaluation of predictive models illustrates the need for multi-stage analyses to best judge performance. Microhabitat analyses indicate prospective needs for different management strategies between the subspecies. Sensitivities of each population of the Colorado chipmunk to natural and prescribed fire suggest that partial burnings of areas inhabited by Colorado chipmunks in southern New Mexico may be beneficial. These partial burnings may later help avoid a fire
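
    A presence/absence analysis of this kind reduces to fitting a logistic regression on the measured habitat variables. A minimal sketch with hypothetical predictor values, loosely following the five variables retained for the Oscura Mountains model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical microhabitat predictors: shrub cover (%), tree density (per ha),
# plant proximity (m), logs present (0/1), rock outcrop present (0/1).
X = np.array([[30, 120, 1.5, 1, 1],
              [ 5,  20, 4.0, 0, 0],
              [25,  90, 2.0, 1, 1],
              [10,  35, 3.5, 0, 1],
              [40, 150, 1.0, 1, 0],
              [ 8,  25, 3.8, 0, 0]])
y = np.array([1, 0, 1, 0, 1, 0])   # chipmunk detected / not detected

model = LogisticRegression().fit(X, y)
p = model.predict_proba([[28, 100, 1.8, 1, 1]])[0, 1]
print(f"Predicted probability of occurrence: {p:.2f}")
```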

  3. Ages and transit times as important diagnostics of model performance for predicting carbon dynamics in terrestrial vegetation models

    Science.gov (United States)

    Ceballos-Núñez, Verónika; Richardson, Andrew D.; Sierra, Carlos A.

    2018-03-01

    The global carbon cycle is strongly controlled by the source/sink strength of vegetation as well as the capacity of terrestrial ecosystems to retain this carbon. These dynamics, as well as processes such as the mixing of old and newly fixed carbon, have been studied using ecosystem models, but different assumptions regarding the carbon allocation strategies and other model structures may result in highly divergent model predictions. We assessed the influence of three different carbon allocation schemes on the C cycling in vegetation. First, we described each model with a set of ordinary differential equations. Second, we used published measurements of ecosystem C compartments from the Harvard Forest Environmental Measurement Site to find suitable parameters for the different model structures. And third, we calculated C stocks, release fluxes, radiocarbon values (based on the bomb spike), ages, and transit times. We obtained model simulations in accordance with the available data, but the time series of C in foliage and wood need to be complemented with other ecosystem compartments in order to reduce the high parameter collinearity that we observed, and reduce model equifinality. Although the simulated C stocks in ecosystem compartments were similar, the different model structures resulted in very different predictions of age and transit time distributions. In particular, the inclusion of two storage compartments resulted in the prediction of a system mean age that was 12-20 years older than in the models with one or no storage compartments. The age of carbon in the wood compartment of this model was also distributed towards older ages, whereas fast cycling compartments had an age distribution that did not exceed 5 years. As expected, models with C distributed towards older ages also had longer transit times. These results suggest that ages and transit times, which can be indirectly measured using isotope tracers, serve as important diagnostics of model structure
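
    For a linear compartmental model dx/dt = u + Bx, ages and transit times follow directly from the compartmental matrix: the steady-state stock is x* = -B^(-1)u, the mean transit time is total stock over total input, and the mean system age is 1'(-B)^(-1)x* / 1'x*. A numerical sketch with illustrative pools and rates, not the fitted Harvard Forest parameters:

```python
import numpy as np

# Linear compartmental model dx/dt = u + B x (pools: foliage, wood, storage).
u = np.array([1.0, 0.0, 0.0])                 # external C input (kgC m-2 yr-1)
B = np.array([[-1.0,  0.0,  0.0],             # compartmental matrix (yr-1)
              [ 0.5, -0.1,  0.0],
              [ 0.2,  0.0, -0.05]])

x_ss = np.linalg.solve(-B, u)                 # steady-state stocks
mean_transit = x_ss.sum() / u.sum()           # mean transit time (yr)
mean_age = np.linalg.solve(-B, x_ss).sum() / x_ss.sum()  # mean system age (yr)
print(f"stocks={x_ss.round(2)}, transit={mean_transit:.1f} yr, age={mean_age:.1f} yr")
```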

  4. Development of Predictive Models of Injury for the Lower Extremity, Lumbar, and Thoracic Spine after discharge from Physical Rehabilitation

    Science.gov (United States)

    2017-10-01

    prediction models will vary by age and sex . Hypothesis 3: A multi-factorial prediction model that accurately predicts risk of new and recurring injuries, as...cleared to return to duty from an injury is of great importance. The purpose of this project is to determine if performance on a battery of...balance screens, measures of power, demographic data and biopsychosocial measures. • Injury data will be collected through self -report, profile data, and

  5. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model combining a grey model with a Markov model, prediction of the corrosion rate of nuclear power plant pipelines was studied. Work was done to improve the grey model, yielding an optimization unbiased grey model. This new model was used to predict the trend of the corrosion rate, and the Markov model was used to predict the residual errors. In order to improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimization unbiased grey model and the Markov model is better, and that the use of the rolling operation method may improve the prediction precision further. (authors)
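
    The grey part of such a combined model is typically a GM(1,1): the series is accumulated, a first-order grey differential equation is fitted by least squares, and forecasts are differenced back. A minimal sketch with an illustrative corrosion-rate series; the Markov residual correction and the paper's unbiased optimization are omitted:

```python
import numpy as np

def gm11_predict(x0, steps=1):
    """GM(1,1) grey model: fit on series x0, forecast `steps` ahead."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                           # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    A = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(A, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])  # back to the original series
    x0_hat[0] = x0[0]
    return x0_hat[-steps:]

# Illustrative corrosion-rate series (mm/yr); a "rolling" version would refit
# on a moving window as each new measurement arrives.
rates = [0.120, 0.135, 0.150, 0.168, 0.185]
print(gm11_predict(rates, steps=1))
```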

  6. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
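
    The multi-model approach itself is simply the arithmetic mean of the two component model outputs, compared against observations by RMSD. A toy sketch with hypothetical sweat-loss values:

```python
import numpy as np

def rmsd(pred, obs):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

# Hypothetical sweat-loss predictions (g) from the two component models.
scenario = np.array([620, 540, 710, 480])   # rational model output
hsda     = np.array([580, 600, 650, 520])   # empirical model output
observed = np.array([600, 575, 690, 505])

mma = (scenario + hsda) / 2.0               # multi-model average
for name, pred in [("SCENARIO", scenario), ("HSDA", hsda), ("MMA", mma)]:
    print(f"{name}: RMSD = {rmsd(pred, observed):.1f} g")
```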

  7. Prediction Model of Collapse Risk Based on Information Entropy and Distance Discriminant Analysis Method

    Directory of Open Access Journals (Sweden)

    Hujun He

    2017-01-01

    Full Text Available The prediction and risk classification of collapse is an important issue in the process of highway construction in mountainous regions. Based on the principles of information entropy and Mahalanobis distance discriminant analysis, we have produced a collapse hazard prediction model. We used the entropy measure method to reduce the influence indexes of the collapse activity and extracted the nine main indexes affecting collapse activity as the discriminant factors of the distance discriminant analysis model (i.e., slope shape, aspect, gradient, and height, along with exposure of the structural face, stratum lithology, relationship between weakness face and free face, vegetation cover rate, and degree of rock weathering). We employ postearthquake collapse data in relation to construction of the Yingxiu-Wolong highway, Hanchuan County, China, as training samples for analysis. The results were analyzed using the back substitution estimation method, showing high accuracy and no errors, and were the same as the prediction result of uncertainty measure. Results show that the classification model based on information entropy and distance discriminant analysis achieves the purpose of index optimization and has excellent performance, high prediction accuracy, and a zero false-positive rate. The model can be used as a tool for future evaluation of collapse risk.
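
    The two ingredients of the model can be sketched briefly: entropy weighting scores each index by its dispersion, and the Mahalanobis discriminant assigns a new slope to the class with the smallest weighted distance. The data below are synthetic and the weighting scheme is a simplified reading of the method:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: indexes with more dispersion get larger weights."""
    P = X / X.sum(axis=0)
    e = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))
    return (1 - e) / (1 - e).sum()

def mahalanobis_classify(x, means, S_inv):
    """Assign a sample to the class with the smallest Mahalanobis distance."""
    return int(np.argmin([(x - m) @ S_inv @ (x - m) for m in means]))

# Synthetic training samples: rows are slope sites, columns are (positive)
# collapse-activity indexes for a low-risk and a high-risk class.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.3, 0.05, (20, 4)),     # low-risk sites
               rng.normal(0.7, 0.05, (20, 4))])    # high-risk sites
w = entropy_weights(np.abs(X))
Xw = X * w                                          # entropy-weighted indexes
S_inv = np.linalg.inv(np.cov(Xw, rowvar=False))
means = [Xw[:20].mean(axis=0), Xw[20:].mean(axis=0)]
print(mahalanobis_classify(np.array([0.65, 0.70, 0.72, 0.68]) * w, means, S_inv))
```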

  8. Development and Validation of a Predictive Model for Functional Outcome After Stroke Rehabilitation: The Maugeri Model.

    Science.gov (United States)

    Scrutinio, Domenico; Lanzillo, Bernardo; Guida, Pietro; Mastropasqua, Filippo; Monitillo, Vincenzo; Pusineri, Monica; Formica, Roberto; Russo, Giovanna; Guarnaschelli, Caterina; Ferretti, Chiara; Calabrese, Gianluigi

    2017-12-01

    Prediction of outcome after stroke rehabilitation may help clinicians in decision-making and planning rehabilitation care. We developed and validated a predictive tool to estimate the probability of achieving improvement in physical functioning (model 1) and a level of independence requiring no more than supervision (model 2) after stroke rehabilitation. The models were derived from 717 patients admitted for stroke rehabilitation. We used multivariable logistic regression analysis to build each model. Then, each model was prospectively validated in 875 patients. Model 1 included age, time from stroke occurrence to rehabilitation admission, admission motor and cognitive Functional Independence Measure scores, and neglect. Model 2 included age, male gender, time since stroke onset, and admission motor and cognitive Functional Independence Measure score. Both models demonstrated excellent discrimination. In the derivation cohort, the area under the curve was 0.883 (95% confidence intervals, 0.858-0.910) for model 1 and 0.913 (95% confidence intervals, 0.884-0.942) for model 2. The Hosmer-Lemeshow χ2 was 4.12 (P=0.249) and 1.20 (P=0.754), respectively. In the validation cohort, the area under the curve was 0.866 (95% confidence intervals, 0.840-0.892) for model 1 and 0.850 (95% confidence intervals, 0.815-0.885) for model 2. The Hosmer-Lemeshow χ2 was 8.86 (P=0.115) and 34.50 (P=0.001), respectively. Both improvement in physical functioning (hazard ratio, 0.43; 0.25-0.71; P=0.001) and a level of independence requiring no more than supervision (hazard ratio, 0.32; 0.14-0.68; P=0.004) were independently associated with improved 4-year survival. A calculator is freely available for download at https://goo.gl/fEAp81. This study provides researchers and clinicians with an easy-to-use, accurate, and validated predictive tool for potential application in rehabilitation research and stroke management. © 2017 American Heart Association, Inc.
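
    Both reported diagnostics, discrimination (area under the curve) and calibration (Hosmer-Lemeshow χ2 over deciles of predicted risk), can be computed as follows; the outcome and probability arrays are simulated stand-ins:

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow chi-square over groups of increasing predicted risk."""
    order = np.argsort(p)
    chi2 = 0.0
    for idx in np.array_split(order, groups):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    pval = 1 - stats.chi2.cdf(chi2, groups - 2)
    return chi2, pval

# Illustrative outcomes and model-predicted probabilities.
rng = np.random.default_rng(2)
p = rng.uniform(0.05, 0.95, 500)
y = (rng.uniform(size=500) < p).astype(int)   # well-calibrated by construction
print("AUC =", round(roc_auc_score(y, p), 3))
print("HL chi2, p =", [round(v, 3) for v in hosmer_lemeshow(y, p)])
```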

  9. Receiver Operating Characteristic Curve-Based Prediction Model for Periodontal Disease Updated With the Calibrated Community Periodontal Index.

    Science.gov (United States)

    Su, Chiu-Wen; Yen, Amy Ming-Fang; Lai, Hongmin; Chen, Hsiu-Hsi; Chen, Sam Li-Sheng

    2017-12-01

    The accuracy of a prediction model for periodontal disease using the community periodontal index (CPI) has previously been assessed using the area under the receiver operating characteristic (AUROC) curve. How the uncalibrated CPI, as measured by general dentists trained by periodontists in a large epidemiologic study, affects the performance of such a prediction model has not yet been researched. A two-stage design was conducted, first proposing a validation study to calibrate the CPI between a senior periodontal specialist and the trained general dentists who measured CPIs in the main study of a nationwide survey. A Bayesian hierarchical logistic regression model was applied to estimate the non-updated and updated clinical weights used for building up risk scores. How the calibrated CPI affected performance of the updated prediction model was quantified by comparing AUROC curves between the original and updated models. Estimates regarding calibration of CPI obtained from the validation study were 66% and 85% for sensitivity and specificity, respectively. After updating, the clinical weights of each predictor were inflated, and the risk score for the highest risk category was elevated from 434 to 630. This update improved the AUROC performance of the two corresponding prediction models from 62.6% (95% confidence interval [CI]: 61.7% to 63.6%) for the non-updated model to 68.9% (95% CI: 68.0% to 69.6%) for the updated one, a statistically significant difference. The improved performance of the updated prediction model was thus demonstrated for periodontal disease as measured by the calibrated CPI derived from a large epidemiologic survey.

  10. An Investigation of Multi-Satellite Stratospheric Measurements on Tropospheric Weather Predictions over Continental United States

    Science.gov (United States)

    Shao, Min

    The troposphere and stratosphere are the two atmospheric layers closest to the Earth's surface, separated by the so-called tropopause. Although the two layers are largely distinct, ample evidence shows that they are also connected through various dynamical and chemical feedbacks. Both tropospheric and stratospheric waves can propagate through the tropopause and affect conditions in the other layer, although this propagation is relatively weak compared with the internal interactions within each layer. Major improvements have been made in numerical weather prediction (NWP) via data assimilation (DA) over the past 30 years, with DA technology progressing from optimal interpolation to variational methods and the Kalman filter. The ability to assimilate satellite radiance observations and the increasing volume of satellite measurements have enabled better atmospheric initial conditions for both global and regional NWP systems. The selection of the DA scheme is critical for regional NWP systems. The performance of three major data assimilation schemes (3D-Var, Hybrid, and EnKF) on regional weather forecasts over the continental United States during winter and summer is investigated. The convergence rate of the variational methods can be slightly accelerated, especially in summer, by the inclusion of ensembles. When the regional model lid is set at 50 mb, larger improvements (10-20%) in the initial conditions are obtained over the tropopause and lower troposphere. Better forecast skill (~10%) is obtained with all three DA schemes in summer. Among the three schemes, slightly better (~1%) forecast skill is obtained with the Hybrid configuration than with 3D-Var, while overall the best summer forecast skill is obtained with the EnKF scheme. Compared to 3D-Var, EnKF yields an extra 22% skill in predicting summer surface pressure but 10% less skill in winter.

  11. Addressing issues associated with evaluating prediction models for survival endpoints based on the concordance statistic.

    Science.gov (United States)

    Wang, Ming; Long, Qi

    2016-09-01

    Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.

  12. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.

    Science.gov (United States)

    Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu

    2018-01-19

    Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data using sensing
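
    The ARIMA-plus-GARCH stage of the pipeline can be sketched with standard packages, assuming statsmodels and arch are available; the Kalman filtering step and real GNSS series are replaced here by a synthetic random walk:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

# Illustrative deformation series (mm); real input would be Kalman-filtered
# GNSS monitoring data, with the filter step omitted here for brevity.
rng = np.random.default_rng(3)
deform = np.cumsum(rng.normal(0.05, 0.5, 300))

# Linear part: ARIMA captures the mean structure of the deformation.
arima = ARIMA(deform, order=(1, 1, 1)).fit()
mean_fcst = arima.forecast(steps=5)

# Nonlinear part: GARCH(1,1) on the ARIMA residuals models heteroscedasticity
# (drop the first residual, which is distorted by differencing).
garch = arch_model(arima.resid[1:], vol="Garch", p=1, q=1).fit(disp="off")
var_fcst = garch.forecast(horizon=5).variance.values[-1]
print(mean_fcst.round(2), np.sqrt(var_fcst).round(2))
```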

  13. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model

    Directory of Open Access Journals (Sweden)

    Jingzhou Xin

    2018-01-01

    Full Text Available Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data

  14. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  15. Alcator C-Mod predictive modeling

    International Nuclear Information System (INIS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-01-01

    Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles

  16. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
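
    A scoring service of this general shape can be sketched in a few lines; this is a plain-JSON stand-in for the paper's FHIR resource exchange, assuming Flask and a pre-trained scikit-learn classifier serialized to a hypothetical model.pkl:

```python
# Minimal model-scoring web service sketch: a POST endpoint accepts patient
# features and returns a risk score, mirroring the deploy-and-score pattern.
import pickle
from flask import Flask, jsonify, request

app = Flask(__name__)
with open("model.pkl", "rb") as f:      # assumed pre-trained sklearn model
    model = pickle.load(f)

@app.route("/score", methods=["POST"])
def score():
    payload = request.get_json()        # e.g. {"features": [0.1, 2.3, ...]}
    prob = model.predict_proba([payload["features"]])[0, 1]
    return jsonify({"risk_score": float(prob)})

if __name__ == "__main__":
    app.run(port=8080)
```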

  17. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    Science.gov (United States)

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.; Tao, Yujie; Egolfopoulos, Fokion N.; Wang, Hai

    2016-01-01

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-C4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with C1-C2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-C4 hydrocarbons remains significant without considering fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate-controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel. PMID:27890938

  18. Performance of STICS model to predict rainfed corn evapotranspiration and biomass evaluated for 6 years between 1995 and 2006 using daily aggregated eddy covariance fluxes and ancillary measurements.

    Science.gov (United States)

    Pattey, Elizabeth; Jégo, Guillaume; Bourgeois, Gaétan

    2010-05-01

    Verifying the performance of process-based crop growth models in predicting evapotranspiration and crop biomass is a key component of adapting agricultural crop production to climate variations. STICS, developed by INRA, was among the models selected by Agriculture and Agri-Food Canada for environmental assessment studies on climate variations, because of its built-in ability to assimilate biophysical descriptors such as LAI derived from satellite imagery and its open architecture. The model prediction of shoot biomass was calibrated using destructive biomass measurements over one season, by adjusting six cultivar parameters and three generic plant parameters to define two grain corn cultivars adapted to the 1000-km long Mixedwood Plains ecozone. Its performance was then evaluated using a database of 40 site-years of destructive corn biomass and yield measurements. In this study we evaluate the temporal response of STICS evapotranspiration and biomass accumulation predictions against estimates derived from daily aggregated eddy covariance fluxes. The flux tower was located at an experimental farm south of Ottawa, and measurements were carried out over corn fields in 1995, 1996, 1998, 2000, 2002 and 2006. Daytime and nighttime fluxes were quality-controlled (QC/QA) and gap-filled separately. Soil respiration was partitioned to calculate the net daily CO2 uptake of the corn, which was converted into dry biomass. Of the six growing seasons, three (1995, 1998, 2002) had water stress periods during corn grain filling. Year 2000 was cool and wet, while 1996 had heat and rainfall distributed evenly over the season and 2006 had a wet spring. STICS can predict evapotranspiration using either crop coefficients, when wind speed and air moisture are not available, or a resistance approach. The crop coefficient approach yielded higher predictions in all years than both the resistance approach and the flux measurements. The dynamics of the STICS evapotranspiration predictions were very good for the growing seasons without

  19. A Bayesian approach to modeling and predicting pitting flaws in steam generator tubes

    International Nuclear Information System (INIS)

    Yuan, X.-X.; Mao, D.; Pandey, M.D.

    2009-01-01

    Steam generators in nuclear power plants have experienced varying degrees of under-deposit pitting corrosion. A probabilistic model to accurately predict pitting damage is necessary for effective life-cycle management of steam generators. This paper presents an advanced probabilistic model of pitting corrosion characterizing the inherent randomness of the pitting process and measurement uncertainties of the in-service inspection (ISI) data obtained from eddy current (EC) inspections. A Markov chain Monte Carlo simulation-based Bayesian method, enhanced by a data augmentation technique, is developed for estimating the model parameters. The proposed model is able to predict the actual pit number and the actual pit depth as well as the maximum pit depth, which is the main interest of the pitting corrosion model. The study also reveals the significance of inspection uncertainties in the modeling of pitting flaws using the ISI data: without considering the probability-of-detection issues and measurement errors, the leakage risk resulting from the pitting corrosion would be underestimated, even though the actual pit depth would usually be overestimated.

  20. Development of a clinical prediction model to calculate patient life expectancy: the measure of actuarial life expectancy (MALE).

    Science.gov (United States)

    Clarke, M G; Kennedy, K P; MacDonagh, R P

    2009-01-01

    To develop a clinical prediction model enabling the calculation of an individual patient's life expectancy (LE) and survival probability based on age, sex, and comorbidity for use in the joint decision-making process regarding medical treatment. A computer software program was developed with a team of 3 clinicians, 2 professional actuaries, and 2 professional computer programmers. This incorporated statistical spreadsheet and database access design methods. Data sources included life insurance industry actuarial rating factor tables (public and private domain), Government Actuary Department UK life tables, professional actuarial sources, and evidence-based medical literature. The main outcome measures were numerical and graphical display of comorbidity-adjusted LE; 5-, 10-, and 15-year survival probability; in addition to generic UK population LE. Nineteen medical conditions, which impacted significantly on LE in actuarial terms and were commonly encountered in clinical practice, were incorporated in the final model. Numerical and graphical representations of statistical predictions of LE and survival probability were successfully generated for patients with either no comorbidity or a combination of the 19 medical conditions included. Validation and testing, including actuarial peer review, confirmed consistency with the data sources utilized. The evidence-based actuarial data utilized in this computer program design represent a valuable resource for use in the clinical decision-making process, where an accurate objective assessment of patient LE can so often make the difference between patients being offered or denied medical and surgical treatment. Ongoing development to incorporate additional comorbidities and enable Web-based access will enhance its use further.
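
    The actuarial core of such a tool is a survival curve built from one-year death probabilities q_x, optionally inflated by a comorbidity rating factor. A sketch with hypothetical rates, not the Government Actuary Department tables:

```python
import numpy as np

def survival_curve(qx):
    """Cumulative survival S(t) from one-year death probabilities q_x."""
    return np.cumprod(1.0 - np.asarray(qx))

def life_expectancy(qx):
    """Curtate life expectancy: expected whole years of life remaining."""
    return float(survival_curve(qx).sum())

# Hypothetical one-year death probabilities for ages 70-79, inflated by an
# assumed comorbidity rating factor of 1.5, as an actuarial table might apply.
qx_base = np.array([0.020, 0.022, 0.025, 0.028, 0.031,
                    0.035, 0.039, 0.044, 0.049, 0.055])
qx_rated = np.clip(qx_base * 1.5, 0, 1)
print("5-year survival:", round(survival_curve(qx_rated)[4], 3))
print("LE over this window:", round(life_expectancy(qx_rated), 2), "years")
```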

  1. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

    Full Text Available We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) We find that model color–magnitude diagrams cannot be reliably used to infer masses as they do not accurately reproduce the colors of ultracool dwarfs of known mass. (2) Effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra by 100–300 K. (3) For the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from Lbol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology) for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy are needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.

  2. Prediction of work metabolism from heart rate measurements in forest work: some practical methodological issues.

    Science.gov (United States)

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Auger, Isabelle; Leone, Mario

    2015-01-01

    Individual heart rate (HR) to workload relationships were determined using 93 submaximal step-tests administered to 26 healthy participants attending physical activities in a university training centre (laboratory study) and 41 experienced forest workers (field study). Predicted maximum aerobic capacity (MAC) was compared to measured MAC from a maximal treadmill test (laboratory study) to test the effect of two age-predicted maximum HR equations (220-age and 207-0.7 × age) and two clothing insulation levels (0.4 and 0.91 clo) during the step-test. Work metabolism (WM) estimated from forest work HR was compared against concurrent work V̇O2 measurements while taking into account the HR thermal component. Results show that MAC and WM can be accurately predicted from work HR measurements and the simple regression models developed in this study (1% group mean prediction bias and up to 25% expected prediction bias for a single individual). Neither clothing insulation nor the choice of age-predicted maximum HR equation had an impact on predicted MAC. Practitioner summary: This study sheds light on four practical methodological issues faced by practitioners regarding the use of HR methodology to assess WM in actual work environments. More specifically, the effects of wearing work clothes and of using two different maximum HR prediction equations on the ability of a submaximal step-test to assess MAC are examined, as well as the accuracy of using an individual's step-test HR-to-workload relationship to predict WM from HR data collected during actual work in the presence of thermal stress.
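
    The individual HR-to-workload relationship is a simple regression fitted to step-test calibration points and then applied to field HR records. A sketch with hypothetical calibration data; the subtraction of the HR thermal component is noted but omitted:

```python
import numpy as np

# Hypothetical step-test calibration points: HR (bpm) vs. measured VO2 (L/min).
hr_cal  = np.array([ 95, 110, 125, 140, 155])
vo2_cal = np.array([1.0, 1.4, 1.8, 2.2, 2.6])

# Individual HR-to-workload relationship by simple linear regression.
slope, intercept = np.polyfit(hr_cal, vo2_cal, 1)

# Predict work metabolism from HR logged during forest work; in practice the
# thermal component of HR would be subtracted first (omitted here).
hr_work = np.array([118, 132, 147])
vo2_pred = slope * hr_work + intercept
wm_watts = vo2_pred * 20.9 / 60 * 1000   # ~20.9 kJ per litre O2 (RQ-dependent)
print(wm_watts.round(0))
```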

  3. Predicting People's Environmental Behaviour: Theory of Planned Behaviour and Model of Responsible Environmental Behaviour

    Science.gov (United States)

    Chao, Yu-Long

    2012-01-01

    Using different measures of self-reported and other-reported environmental behaviour (EB), two important theoretical models explaining EB--Hines, Hungerford and Tomera's model of responsible environmental behaviour (REB) and Ajzen's theory of planned behaviour (TPB)--were compared regarding the fit between model and data, predictive ability,…

  4. Predictive Modelling of Heavy Metals in Urban Lakes

    OpenAIRE

    Lindström, Martin

    2000-01-01

    Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and processes controlling the major metal fluxes. Sediment and water data for this study were collected from ten small lakes in the ...

  5. Modelling the electrical properties of concrete for shielding effectiveness prediction

    International Nuclear Information System (INIS)

    Sandrolini, L; Reggiani, U; Ogunsola, A

    2007-01-01

    Concrete is a porous, heterogeneous material whose abundant use in numerous applications demands a detailed understanding of its electrical properties. Besides experimental measurements, theoretical material models can be useful to investigate its behaviour with respect to frequency, moisture content or other factors. These models can be used in electromagnetic compatibility (EMC) to predict the shielding effectiveness of a concrete structure against external electromagnetic waves. This paper presents the development of a dispersive material model for concrete from experimental measurement data, to take account of the frequency dependence of concrete's electrical properties. The model is implemented in a numerical simulator and compared with the classical transmission-line approach in shielding effectiveness calculations for simple concrete walls of different moisture content. The comparative results show good agreement in all cases; a possible relation between shielding effectiveness and the electrical properties of concrete is discussed, along with the limits of the proposed model

  6. Predicting the cosmological constant with the scale-factor cutoff measure

    International Nuclear Information System (INIS)

    De Simone, Andrea; Guth, Alan H.; Salem, Michael P.; Vilenkin, Alexander

    2008-01-01

    It is well known that anthropic selection from a landscape with a flat prior distribution of cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.

  7. Over Time, Do Anthropometric Measures Still Predict Diabetes Incidence in Chinese Han Nationality Population from Chengdu Community?

    Directory of Open Access Journals (Sweden)

    Kai Liu

    2013-01-01

    Full Text Available Objective. To examine whether anthropometric measures could predict diabetes incidence in a Chinese population during a 15-year follow-up. Design and Methods. The data were collected in 1992 and then again in 2007 from the same group of 687 individuals. Waist circumference, body mass index, waist to hip ratio, and waist to height ratio were collected based on a standard protocol. To assess the effects of baseline anthropometric measures on the new onset of diabetes, Cox's proportional hazards regression models were used to estimate their hazard ratios, and the discriminatory power of the anthropometric measures for diabetes was assessed by the area under the receiver operating curve (AROC). Results. Seventy-four individuals were diagnosed with diabetes during the 15-year follow-up period (incidence: 10.8%). These anthropometric measures predicted future diabetes even over this long follow-up. At 7-8 years, the AROC of the central obesity measures (WC, WHpR, WHtR) were higher than that of the general obesity measure (BMI). However, there were no significant differences among the four anthropometric measurements at 15 years. Conclusions. These anthropometric measures could still predict diabetes over a long follow-up. However, the validity of anthropometric measures to predict incident diabetes may change with time.
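
    A Cox analysis of this kind is straightforward with the lifelines package; the cohort below is simulated to mimic the study design (15-year follow-up with censoring), not the Chengdu data:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated cohort: baseline anthropometrics, follow-up time, diabetes event.
rng = np.random.default_rng(7)
n = 687
df = pd.DataFrame({
    "wc":  rng.normal(85, 10, n),    # waist circumference (cm)
    "bmi": rng.normal(24, 3, n),     # body mass index (kg/m2)
})
risk = 0.03 * (df["wc"] - 85) + 0.05 * (df["bmi"] - 24)
df["time"] = rng.exponential(15 / np.exp(risk))    # years to event
df["event"] = (df["time"] < 15).astype(int)        # observed within follow-up
df.loc[df["event"] == 0, "time"] = 15.0            # censor at 15 years

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)
```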

  8. ARMA modelling of neutron stochastic processes with large measurement noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Kostic, Lj.; Pesic, M.

    1994-01-01

    An autoregressive moving average (ARMA) model of the neutron fluctuations with large measurement noise is derived from Langevin stochastic equations and validated using time series data obtained during prompt neutron decay constant measurements at the zero power reactor RB in Vinca. Model parameters are estimated using the maximum likelihood (ML) off-line algorithm and an adaptive pole estimation algorithm based on the recursive prediction error method (RPE). The results show that subcriticality can be determined from real data with high measurement noise using a much shorter statistical sample than in standard methods. (author)
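
    The key identity behind the method is that an AR(1) signal observed through additive white measurement noise is exactly an ARMA(1,1) process, so the decay constant can be recovered from the fitted AR pole. A synthetic sketch, assuming statsmodels:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for neutron detector data: an AR(1) signal (assumed prompt
# decay constant alpha) buried in large white measurement noise -> ARMA(1,1).
rng = np.random.default_rng(4)
alpha, dt = 50.0, 1e-3                 # assumed decay constant (1/s), time step
phi = np.exp(-alpha * dt)
x = np.zeros(2000)
for t in range(1, x.size):
    x[t] = phi * x[t - 1] + rng.normal()
y = x + rng.normal(0, 3.0, x.size)     # heavy measurement noise

fit = ARIMA(y, order=(1, 0, 1)).fit()  # ARMA(1,1): AR pole + MA from the noise
phi_hat = fit.arparams[0]
print("alpha estimate (1/s):", -np.log(phi_hat) / dt)
```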

  9. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. To investigate whether there are differences in the performance of machine learning models trained and evaluated across different stages, we used three different machine learning methods to build models that predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples of other stages during training did not help; in fact, it made results worse in some cases. The most important features for predicting survivability were also found to differ between stages. By evaluating the models separately on different stages we found that performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the records needed for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b), and we analyse how their predictions improved over three steps, as information was added before each step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models, and their modelling experience differed greatly. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models that were developed for catchments which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  11. Predicting medical complications after spine surgery: a validated model using a prospective surgical registry.

    Science.gov (United States)

    Lee, Michael J; Cizik, Amy M; Hamilton, Deven; Chapman, Jens R

    2014-02-01

    The possibility and likelihood of a postoperative medical complication after spine surgery undoubtedly play a major role in the decision making of the surgeon and patient alike. Although prior studies have determined relative risk and odds ratio values to quantify risk factors, these values may be difficult to translate to the patient during counseling on surgical options. Ideally, a model that predicts absolute risk of medical complication, rather than relative risk or odds ratio values, would greatly enhance the discussion of the safety of spine surgery. To date, there is no risk stratification model that specifically predicts the risk of medical complication. The purpose of this study was to create and validate a predictive model for the risk of medical complication during and after spine surgery. Statistical analysis used a prospective surgical spine registry that recorded extensive demographic, surgical, and complication data. Outcomes examined are medical complications that were specifically defined a priori. This analysis is a continuation of the statistical analysis of our previously published report. Using a prospectively collected surgical registry of more than 1,476 patients with extensive demographic, comorbidity, surgical, and complication detail recorded for 2 years after surgery, we previously identified several risk factors for medical complications. Using the beta coefficients from those log-binomial regression analyses, we created a model to predict the occurrence of medical complication after spine surgery. We split our data into two subsets for internal and cross-validation of our model. We created two predictive models: one predicting the occurrence of any medical complication and the other predicting the occurrence of a major medical complication. The final predictive model for any medical complications had an area under the receiver operating characteristic curve of 0.76, considered to be a fair measure. The final predictive model for any major medical complications had
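
    Turning regression coefficients into an absolute risk is a one-line calculation; for a log-binomial model (log link) the risk is the exponentiated linear predictor. The intercept and betas below are hypothetical placeholders, not the registry's fitted values:

```python
import numpy as np

# Hypothetical beta coefficients from a log-binomial model (log link), so
# absolute risk = exp(b0 + sum(b_i * x_i)). Values are illustrative only.
b0 = -3.2
betas = {"age_over_65": 0.45, "diabetes": 0.38,
         "fusion_levels_3plus": 0.62, "asa_class_3plus": 0.55}

def medical_complication_risk(patient):
    lp = b0 + sum(betas[k] * v for k, v in patient.items())
    return float(np.exp(lp))   # log link: exponentiate the linear predictor

patient = {"age_over_65": 1, "diabetes": 0,
           "fusion_levels_3plus": 1, "asa_class_3plus": 1}
print(f"Predicted absolute risk: {medical_complication_risk(patient):.1%}")
```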

  12. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to catch uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control an instantaneous linearization is applied which renders it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. Derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
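
    After instantaneous linearization, the predictive controller reduces to a quadratic program over the control horizon; without inequality constraints it even has a closed form, which the sketch below uses for brevity (the plant matrices are illustrative, not the pneumatic servomechanism model):

```python
import numpy as np

# Minimal linear MPC sketch: after instantaneous linearization the plant is
# x[k+1] = A x[k] + B u[k]; the horizon cost is quadratic, i.e. a QP.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
H, lam = 20, 0.01                        # horizon length, control weight
x = np.array([1.0, 0.0])                 # current state; target is the origin

# Stack predictions: X = F x + G U, then minimize ||X||^2 + lam ||U||^2.
F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(H)])
G = np.zeros((2 * H, H))
for i in range(H):
    for j in range(i + 1):
        G[2*i:2*i+2, j] = (np.linalg.matrix_power(A, i - j) @ B).ravel()

U = np.linalg.solve(G.T @ G + lam * np.eye(H), -G.T @ (F @ x))
print("first control move:", U[0])       # receding horizon: apply U[0] only
```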

  13. Risk prediction models for mortality in patients with ventilator-associated pneumonia

    DEFF Research Database (Denmark)

    Larsson, Johan E; Itenov, Theis Skovsgaard; Bestle, Morten Heiberg

    2017-01-01

    The PubMed and EMBASE databases were searched in February 2016. We included studies in English that evaluated models' ability to predict the risk of mortality in patients with VAP. The reported mortality with the longest follow-up was used in the meta-analysis. Prognostic accuracy was measured with the area under the receiver operator characteristic curve (AUC). RESULTS: We identified 19 articles studying 7 different models' ability to predict mortality in VAP patients. The models were Acute Physiology and Chronic Health Evaluation (APACHE) II (9 studies, n = 1398); Clinical Pulmonary Infection Score (4 studies, n = 303); "Immunodeficiency, Blood pressure, Multilobular infiltrates on chest radiograph, Platelets and hospitalization 10 days before onset of VAP" (3 studies, n = 406); "VAP Predisposition, Insult Response and Organ dysfunction" (2 studies, n = 589); and Sequential Organ Failure Assessment (7 studies, n = ...)

  14. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
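
    The SVR-plus-genetic-algorithm pattern can be sketched with scikit-learn and a minimal evolutionary loop over (C, gamma, epsilon); the data below are synthetic stand-ins for the financial and patent indicators:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Synthetic features standing in for financial + technical (patent) indicators.
X = rng.normal(size=(120, 6))
y = X @ np.array([0.8, -0.5, 0.3, 0.6, -0.2, 0.4]) + rng.normal(0, 0.3, 120)

def fitness(params):
    """Cross-validated R2 of an SVR with the candidate hyperparameters."""
    C, gamma, eps = params
    return cross_val_score(SVR(C=C, gamma=gamma, epsilon=eps),
                           X, y, cv=5, scoring="r2").mean()

# Minimal genetic loop: keep the best half, mutate in log-parameter space.
pop = 10 ** rng.uniform([-1, -3, -3], [2, 0, -0.5], size=(12, 3))
for gen in range(10):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-6:]]
    children = elite * 10 ** rng.normal(0, 0.15, elite.shape)  # mutation
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("best (C, gamma, epsilon):", best.round(4))
```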

  15. Predictive models for the assessment of occupational exposure to chemicals: A new challenge for employers

    Directory of Open Access Journals (Sweden)

    Jan Piotr Gromiec

    2013-10-01

    Full Text Available Employers are obliged to assess and document the risk associated with the use of chemical substances. The best but most expensive method is to measure workplace concentrations of chemicals. At present no "measurement-free" method for risk assessment is available in Poland, but predictive models for such assessments have been developed in some countries. The purpose of this work is to review and evaluate the applicability of selected predictive methods for assessing occupational inhalation exposure and the related risk, to check compliance with Occupational Exposure Limits (OELs) as well as compliance with REACH obligations. Based on the literature data, HSE COSHH Essentials, EASE, ECETOC TRA, Stoffenmanager, and EMKG-Expo-Tool were evaluated. The data on validation of predictive models were also examined. It seems that predictive models may be used as a useful method for Tier 1 assessment of occupational exposure by inhalation. Since the levels of exposure are frequently overestimated, they should be considered as "rational worst cases" for the selection of proper control measures. Bearing in mind that the number of available exposure scenarios and PROC categories is limited, further validation by field surveys is highly recommended. Predictive models may serve as a good tool for preliminary risk assessment and selection of the most appropriate risk control measures in Polish small and medium size enterprises (SMEs), provided that they are available in the Polish language. This also requires extensive training of their future users. Med Pr 2013;64(5):699–716

  16. Modeling heterogeneous (co)variances from adjacent-SNP groups improves genomic prediction for milk protein composition traits

    DEFF Research Database (Denmark)

    Gebreyesus, Grum; Lund, Mogens Sandø; Buitenhuis, Albert Johannes

    2017-01-01

    Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci...... of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we...... developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls...

  17. Dynamic Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than exclusively through classical feedback control, which acts on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compares the predicted output to the reference, and optimizes the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.
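
    As a hedged illustration of the receding-horizon idea described above — an internal model predicts over a horizon, the input sequence minimizing predicted error is recomputed at each step, and only the first input is applied — here is a minimal linear MPC sketch for a double integrator (a toy plant, not the nine-DOF gait model):

        import numpy as np

        dt = 0.05
        A = np.array([[1.0, dt], [0.0, 1.0]])   # double-integrator internal model
        B = np.array([[0.0], [dt]])
        N = 20                                   # prediction horizon

        # Stack the horizon: X = F x0 + G U, where X holds the N predicted states.
        F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
        G = np.zeros((2 * N, N))
        for i in range(N):
            for j in range(i + 1):
                G[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B

        Q = np.kron(np.eye(N), np.diag([10.0, 0.1]))   # weight on tracking error
        R = 0.01 * np.eye(N)                            # weight on control effort

        x = np.array([[1.0], [0.0]])                    # start 1 unit from target
        x_ref = np.zeros((2 * N, 1))                    # reference: the origin
        for step in range(100):
            # Unconstrained QP has a closed-form least-squares solution:
            U = np.linalg.solve(G.T @ Q @ G + R, G.T @ Q @ (x_ref - F @ x))
            x = A @ x + B * U[0, 0]                     # apply only the first input
        print("final state:", x.ravel())                # should approach the origin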

  18. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...
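
    The Oracle R Enterprise API is not reproduced here; as a language-neutral sketch of the "model per entity" pattern the talk targets, this Python/pandas fragment (with hypothetical column names) fits an independent model per group — the loop a platform like ORE would push down to the database and parallelize:

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        df = pd.DataFrame({
            "entity": np.repeat(["A", "B", "C"], 100),   # e.g. customers or zip codes
            "x": rng.normal(size=300),
        })
        # Each entity gets its own underlying relationship between x and y.
        df["y"] = (df["x"] * df["entity"].map({"A": 1.0, "B": 2.0, "C": -1.0})
                   + rng.normal(scale=0.1, size=300))

        # Fit one model per entity to tailor predictions at the entity level.
        models = {
            name: LinearRegression().fit(g[["x"]], g["y"])
            for name, g in df.groupby("entity")
        }
        for name, m in models.items():
            print(name, round(m.coef_[0], 2))   # recovers each entity's slope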

  19. Prediction of CO concentrations based on a hybrid Partial Least Square and Support Vector Machine model

    Science.gov (United States)

    Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.

    2012-08-01

    Due to the health impacts caused by exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of a Support Vector Machine (SVM) as the predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM model has better accuracy. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error, and mean absolute relative error have been employed to compare the performances of the models. It has been concluded that the errors decrease after dimension reduction and that the coefficients of determination increase from 56–81% for the SVM model to 65–85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
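
    As a hedged sketch of the hybrid idea (scikit-learn, with synthetic data standing in for the Tehran CO series): PLS first reduces the predictor set, then SVR regresses on the reduced scores:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.datasets import make_regression
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVR

        # Synthetic stand-in for meteorological/pollutant predictors.
        X, y = make_regression(n_samples=500, n_features=30, n_informative=5,
                               noise=5.0, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        pls = PLSRegression(n_components=5).fit(X_tr, y_tr)   # data selection step
        svr = SVR(C=100.0).fit(pls.transform(X_tr), y_tr)     # predictor step

        r2 = svr.score(pls.transform(X_te), y_te)
        print(f"hybrid PLS-SVM R^2 on held-out data: {r2:.2f}")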

  20. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant has marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance in providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, the second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300°C and 5% oxygen across several residence times. The improved agreement between the ChB model and the DTF experimental data showed that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  1. Regression models for predicting peak and continuous three-dimensional spinal loads during symmetric and asymmetric lifting tasks.

    Science.gov (United States)

    Fathallah, F A; Marras, W S; Parnianpour, M

    1999-09-01

    Most biomechanical assessments of spinal loading during industrial work have focused on estimating peak spinal compressive forces under static and sagittally symmetric conditions. The main objective of this study was to explore the potential of feasibly predicting three-dimensional (3D) spinal loading in industry from various combinations of trunk kinematics, kinetics, and subject-load characteristics. The study used spinal loading, predicted by a validated electromyography-assisted model, from 11 male participants who performed a series of symmetric and asymmetric lifts. Three classes of models were developed: (a) models using workplace, subject, and trunk motion parameters as independent variables (kinematic models); (b) models using workplace, subject, and measured moments variables (kinetic models); and (c) models incorporating workplace, subject, trunk motion, and measured moments variables (combined models). The results showed that peak 3D spinal loading during symmetric and asymmetric lifting were predicted equally well using all three types of regression models. Continuous 3D loading was predicted best using the combined models. When the use of such models is infeasible, the kinematic models can provide adequate predictions. Finally, lateral shear forces (peak and continuous) were consistently underestimated using all three types of models. The study demonstrated the feasibility of predicting 3D loads on the spine under specific symmetric and asymmetric lifting tasks without the need for collecting EMG information. However, further validation and development of the models should be conducted to assess and extend their applicability to lifting conditions other than those presented in this study. Actual or potential applications of this research include exposure assessment in epidemiological studies, ergonomic intervention, and laboratory task assessment.

  2. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  3. Prediction of resource volumes at untested locations using simple local prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
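
    As a hedged sketch of this workflow — with a k-nearest-neighbour regressor standing in for the paper's local spatial predictor and synthetic data in place of the Antrim Shale wells — cross-validation selects the model, the jackknife yields per-site errors, and bootstrapping those errors bounds the regional total:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(0)
        coords = rng.uniform(size=(150, 2))               # drilled-site locations
        volumes = 10 + 5 * coords[:, 0] + rng.normal(scale=1.0, size=150)

        # 1. Cross-validation to select the neighbourhood size.
        k_best = max(range(2, 15),
                     key=lambda k: cross_val_score(KNeighborsRegressor(k),
                                                   coords, volumes, cv=5).mean())

        # 2. Jackknife (leave-one-out) prediction errors at known sites.
        errors = []
        for i in range(len(volumes)):
            keep = np.arange(len(volumes)) != i
            m = KNeighborsRegressor(k_best).fit(coords[keep], volumes[keep])
            errors.append(volumes[i] - m.predict(coords[i:i+1])[0])

        # 3. Bootstrap the errors to bound the total volume at undrilled sites.
        targets = rng.uniform(size=(50, 2))
        base = KNeighborsRegressor(k_best).fit(coords, volumes).predict(targets)
        totals = [base.sum() + rng.choice(errors, size=50, replace=True).sum()
                  for _ in range(2000)]
        print("90% bounds on total:", np.percentile(totals, [5, 95]))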

  4. A regional neural network model for predicting mean daily river water temperature

    Science.gov (United States)

    Wagner, Tyler; DeWeber, Jefferson Tyrell

    2014-01-01

    Water temperature is a fundamental property of river habitat and often a key aspect of river resource management, but measurements to characterize thermal regimes are not available for most streams and rivers. As such, we developed an artificial neural network (ANN) ensemble model to predict mean daily water temperature in 197,402 individual stream reaches during the warm season (May–October) throughout the native range of brook trout Salvelinus fontinalis in the eastern U.S. We compared four models with different groups of predictors to determine how well water temperature could be predicted by climatic, landform, and land cover attributes, and used the median prediction from an ensemble of 100 ANNs as our final prediction for each model. The final model included air temperature, landform attributes and forested land cover and predicted mean daily water temperatures with moderate accuracy as determined by root mean squared error (RMSE) at 886 training sites with data from 1980 to 2009 (RMSE = 1.91 °C). Based on validation at 96 sites (RMSE = 1.82) and separately for data from 2010 (RMSE = 1.93), a year with relatively warmer conditions, the model was able to generalize to new stream reaches and years. The most important predictors were mean daily air temperature, prior 7 day mean air temperature, and network catchment area according to sensitivity analyses. Forest land cover at both riparian and catchment extents had relatively weak but clear negative effects. Predicted daily water temperature averaged for the month of July matched expected spatial trends with cooler temperatures in headwaters and at higher elevations and latitudes. Our ANN ensemble is unique in predicting daily temperatures throughout a large region, while other regional efforts have predicted at relatively coarse time steps. The model may prove a useful tool for predicting water temperatures in sampled and unsampled rivers under current conditions and future projections of climate
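
    A small-scale, hedged sketch of the ensemble scheme (scikit-learn's MLPRegressor, synthetic data, and 10 networks instead of 100): each network is trained from a different random initialization, and the median across networks is taken as the final prediction:

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.neural_network import MLPRegressor

        # Synthetic stand-in for the climatic/landform/land-cover predictors.
        X, y = make_regression(n_samples=400, n_features=6, noise=2.0,
                               random_state=0)

        nets = [MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                             random_state=seed).fit(X, y)
                for seed in range(10)]
        preds = np.stack([net.predict(X) for net in nets])
        ensemble = np.median(preds, axis=0)       # median over the ensemble

        rmse = np.sqrt(np.mean((ensemble - y) ** 2))
        print(f"training RMSE: {rmse:.2f}")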

  5. Modelling for the Stripa site characterization and validation drift inflow: prediction of flow through fractured rock

    International Nuclear Information System (INIS)

    Herbert, A.; Gale, J.; MacLeod, R.; Lanyon, G.

    1991-12-01

    We present our approach to predicting flow through a fractured rock site: the site characterization and validation region in the Stripa mine. Our approach is based on discrete fracture network modelling using the NAPSAC computer code. We describe the conceptual models and assumptions that we have used to interpret the geometry and flow properties of the fracture networks from measurements at the site. These are used to investigate large-scale properties of the network, and we show that for flows on scales larger than about 10 m a porous medium approximation should be used. The porous medium groundwater flow code CFEST is used to predict the large-scale flows through the mine and the SCV region. This, in turn, provides boundary conditions for more detailed models, which predict the details of flow on scales of less than 10 m using a discrete fracture network model. We conclude that a fracture network approach is feasible, that it provides a better understanding of the details of flow than conventional porous medium approaches, and that it quantifies the uncertainty associated with predictive flow modelling based on field measurements in fractured rock. (au)

  6. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  7. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of the potential for bankruptcy, it can take preventive action to anticipate it. In order to detect potential bankruptcy, a company can utilize a bankruptcy prediction model. Such a prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + multiple linear regression), the study shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.

  8. Predictive modelling of gene expression from transcriptional regulatory elements.

    Science.gov (United States)

    Budden, David M; Hurley, Daniel G; Crampin, Edmund J

    2015-07-01

    Predictive modelling of gene expression provides a powerful framework for exploring the regulatory logic underpinning transcriptional regulation. Recent studies have demonstrated the utility of such models in identifying dysregulation of gene and miRNA expression associated with abnormal patterns of transcription factor (TF) binding or nucleosomal histone modifications (HMs). Despite the growing popularity of such approaches, a comparative review of the various modelling algorithms and feature extraction methods is lacking. We define and compare three methods of quantifying pairwise gene-TF/HM interactions and discuss their suitability for integrating the heterogeneous chromatin immunoprecipitation (ChIP)-seq binding patterns exhibited by TFs and HMs. We then construct log-linear and ϵ-support vector regression models from various mouse embryonic stem cell (mESC) and human lymphoblastoid (GM12878) data sets, considering both ChIP-seq- and position weight matrix- (PWM)-derived in silico TF-binding. The two algorithms are evaluated both in terms of their modelling prediction accuracy and ability to identify the established regulatory roles of individual TFs and HMs. Our results demonstrate that TF-binding and HMs are highly predictive of gene expression as measured by mRNA transcript abundance, irrespective of algorithm or cell type selection and considering both ChIP-seq and PWM-derived TF-binding. As we encourage other researchers to explore and develop these results, our framework is implemented using open-source software and made available as a preconfigured bootable virtual environment. © The Author 2014. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  9. External intermittency prediction using AMR solutions of RANS turbulence and transported PDF models

    Science.gov (United States)

    Olivieri, D. A.; Fairweather, M.; Falle, S. A. E. G.

    2011-12-01

    External intermittency in turbulent round jets is predicted using a Reynolds-averaged Navier-Stokes modelling approach coupled to solutions of the transported probability density function (pdf) equation for scalar variables. Solutions to the descriptive equations are obtained using a finite-volume method, combined with an adaptive mesh refinement algorithm, applied in both physical and compositional space. This method contrasts with conventional approaches to solving the transported pdf equation which generally employ Monte Carlo techniques. Intermittency-modified eddy viscosity and second-moment turbulence closures are used to accommodate the effects of intermittency on the flow field, with the influence of intermittency also included, through modifications to the mixing model, in the transported pdf equation. Predictions of the overall model are compared with experimental data on the velocity and scalar fields in a round jet, as well as against measurements of intermittency profiles and scalar pdfs in a number of flows, with good agreement obtained. For the cases considered, predictions based on the second-moment turbulence closure are clearly superior, although both turbulence models give realistic predictions of the bimodal scalar pdfs observed experimentally.

  10. A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although several prediction techniques exist, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in energy systems, a novel grey NGM(1,1,k) self-memory coupling prediction model is put forward to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1,k) model. The traditional grey model's weakness of being sensitive to initial values can be overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China is adopted for demonstration using the proposed coupling prediction technique. The results show the superiority of the NGM(1,1,k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of systematic multitime historical data and capture the stochastic fluctuation tendency. This work also contributes to the enrichment of grey prediction theory and the extension of its application span.
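
    The NGM(1,1,k) self-memory coupling model builds on the classic GM(1,1) grey model; the following minimal sketch (numpy only, with an illustrative series rather than the Chinese consumption data) shows that GM(1,1) base: accumulate the series, fit the two parameters by least squares, and back-difference the fitted series to forecast:

        import numpy as np

        x0 = np.array([102.0, 110.0, 121.0, 130.0, 142.0])   # raw series
        x1 = np.cumsum(x0)                                    # 1-AGO accumulation
        z1 = 0.5 * (x1[1:] + x1[:-1])                         # background values

        # Fit x0(k) = -a*z1(k) + b by least squares.
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

        def x1_hat(k):
            # Fitted accumulated series from the grey differential equation.
            return (x0[0] - b / a) * np.exp(-a * k) + b / a

        k = np.arange(len(x0) + 1)            # one step beyond the data
        x0_hat = np.diff(x1_hat(k))           # invert the accumulation
        print("one-step-ahead forecast:", x0_hat[-1])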

  11. Risk predictive modelling for diabetes and cardiovascular disease.

    Science.gov (United States)

    Kengne, Andre Pascal; Masconi, Katya; Mbanya, Vivian Nchanchou; Lekoubou, Alain; Echouffo-Tcheugui, Justin Basile; Matsha, Tandi E

    2014-02-01

    Absolute risk models or clinical prediction models have been incorporated in guidelines, and are increasingly advocated as tools to assist risk stratification and guide prevention and treatment decisions relating to common health conditions such as cardiovascular disease (CVD) and diabetes mellitus. We have reviewed the historical development and principles of prediction research, including their statistical underpinning, as well as implications for routine practice, with a focus on predictive modelling for CVD and diabetes. Predictive modelling for CVD risk, which has developed over the last five decades, has been largely influenced by the Framingham Heart Study investigators, while it is only ∼20 years ago that similar efforts were started in the field of diabetes. Identification of predictive factors is an important preliminary step which provides the knowledge base on potential predictors to be tested for inclusion during the statistical derivation of the final model. The derived models must then be tested both on the development sample (internal validation) and on other populations in different settings (external validation). Updating procedures (e.g. recalibration) should be used to improve the performance of models that fail the tests of external validation. Ultimately, the effect of introducing validated models in routine practice on the process and outcomes of care as well as its cost-effectiveness should be tested in impact studies before wide dissemination of models beyond the research context. Several prediction models have been developed for CVD or diabetes, but very few have been externally validated or tested in impact studies, and their comparative performance has yet to be fully assessed. A shift of focus from developing new CVD or diabetes prediction models to validating the existing ones will improve their adoption in routine practice.

  12. Statistical model for prediction of hearing loss in patients receiving cisplatin chemotherapy.

    Science.gov (United States)

    Johnson, Andrew; Tarima, Sergey; Wong, Stuart; Friedland, David R; Runge, Christina L

    2013-03-01

    This statistical model might be used to predict cisplatin-induced hearing loss, particularly in patients undergoing concomitant radiotherapy. To create a statistical model based on pretreatment hearing thresholds to provide an individual probability for hearing loss from cisplatin therapy and, secondarily, to investigate the use of hearing classification schemes as predictive tools for hearing loss. Retrospective case-control study. Tertiary care medical center. A total of 112 subjects receiving chemotherapy and audiometric evaluation were evaluated for the study. Of these subjects, 31 met inclusion criteria for analysis. The primary outcome measurement was a statistical model providing the probability of hearing loss following the use of cisplatin chemotherapy. Fifteen of the 31 subjects had significant hearing loss following cisplatin chemotherapy. American Academy of Otolaryngology-Head and Neck Society and Gardner-Robertson hearing classification schemes revealed little change in hearing grades between pretreatment and posttreatment evaluations for subjects with or without hearing loss. The Chang hearing classification scheme could effectively be used as a predictive tool in determining hearing loss with a sensitivity of 73.33%. Pretreatment hearing thresholds were used to generate a statistical model, based on quadratic approximation, to predict hearing loss (C statistic = 0.842, cross-validated = 0.835). The validity of the model improved when only subjects who received concurrent head and neck irradiation were included in the analysis (C statistic = 0.91). A calculated cutoff of 0.45 for predicted probability has a cross-validated sensitivity and specificity of 80%. Pretreatment hearing thresholds can be used as a predictive tool for cisplatin-induced hearing loss, particularly with concomitant radiotherapy.

  13. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...

  14. Survival prediction model for postoperative hepatocellular carcinoma patients.

    Science.gov (United States)

    Ren, Zhihui; He, Shasha; Fan, Xiaotang; He, Fangping; Sang, Wei; Bao, Yongxing; Ren, Weixin; Zhao, Jinming; Ji, Xuewen; Wen, Hao

    2017-09-01

    This study aims to establish a predictive index (PI) model of the 5-year survival rate for patients with hepatocellular carcinoma (HCC) after radical resection and to evaluate its prediction sensitivity, specificity, and accuracy. Patients who underwent HCC surgical resection were enrolled and randomly divided into a prediction model group (101 patients) and a model evaluation group (100 patients). A Cox regression model was used for univariate and multivariate survival analysis. A PI model was established based on the multivariate analysis, and a receiver operating characteristic (ROC) curve was drawn accordingly. The area under the ROC curve (AUROC) and the PI cutoff value were identified. Multiple Cox regression analysis of the prediction model group showed that neutrophil-to-lymphocyte ratio, histological grade, microvascular invasion, positive resection margin, number of tumors, and postoperative transcatheter arterial chemoembolization (TACE) treatment were independent predictors of the 5-year survival rate for HCC patients. The model was PI = 0.377 × NLR + 0.554 × HG + 0.927 × PRM + 0.778 × MVI + 0.740 × NT - 0.831 × TACE. In the prediction model group, the AUROC was 0.832 and the PI cutoff value was 3.38; the sensitivity, specificity, and accuracy were 78.0%, 80.0%, and 79.2%, respectively. In the model evaluation group, the AUROC was 0.822 and the PI cutoff value corresponded well to that of the prediction model group, with sensitivity, specificity, and accuracy of 85.0%, 83.3%, and 84.0%, respectively. The PI model can quantify the mortality risk of hepatitis B-related HCC with high sensitivity, specificity, and accuracy.
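
    For illustration only, the published formula can be evaluated directly. In the sketch below, the covariate coding (binary indicators for all terms except NLR) is an assumption made for the example; the actual coding must be taken from the paper:

        def prognostic_index(nlr, hg, prm, mvi, nt, tace):
            # Published coefficients; variable coding here is assumed, not verified.
            return (0.377 * nlr + 0.554 * hg + 0.927 * prm
                    + 0.778 * mvi + 0.740 * nt - 0.831 * tace)

        pi = prognostic_index(nlr=2.1, hg=1, prm=0, mvi=1, nt=1, tace=1)
        print(pi, "high risk" if pi > 3.38 else "low risk")   # 3.38 = reported cutoff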

  15. Methodology for Designing Models Predicting Success of Infertility Treatment

    OpenAIRE

    Alireza Zarinara; Mohammad Mahdi Akhondi; Hojjat Zeraati; Koorsh Kamali; Kazem Mohammad

    2016-01-01

    Abstract Background: Prediction models for infertility treatment success have been presented for the past 25 years. There are scientific principles for designing and applying prediction models, which are also used to predict the success rate of infertility treatment. The purpose of this study is to provide basic principles for designing models to predict infertility treatment success. Materials and Methods: In this paper, the principles for developing predictive models are explained and...

  16. Development and evaluation of a regression-based model to predict cesium-137 concentration ratios for saltwater fish

    International Nuclear Information System (INIS)

    Pinder, John E.; Rowan, David J.; Smith, Jim T.

    2016-01-01

    Data from published studies and World Wide Web sources were combined to develop a regression model to predict 137Cs concentration ratios for saltwater fish. Predictions were developed from 1) numeric trophic levels computed primarily from random resampling of known food items and 2) K concentrations in the saltwater for 65 samplings from 41 different species from both the Atlantic and Pacific Oceans. A number of different models were initially developed and evaluated for accuracy, which was assessed as the ratio of independently measured concentration ratios to those predicted by the model. In contrast to freshwater systems, where K concentrations are highly variable and are an important factor affecting fish concentration ratios, the less variable K concentrations in saltwater were relatively unimportant in affecting concentration ratios. As a result, the simplest model, which used only trophic level as a predictor, had accuracy comparable to more complex models that also included K concentrations. A test of model accuracy comparing 56 published concentration ratios from 51 species of marine fish to those predicted by the model indicated that 52 of the predicted concentration ratios were within a factor of 2 of the observed values. - Highlights: • We developed a model to predict concentration ratios (C_r) for saltwater fish. • The model requires only a single input variable to predict C_r. • That variable is a mean numeric trophic level available at fishbase.org. • The K concentrations in seawater were not an important predictor variable. • The median predicted-to-observed ratio for 56 independently measured C_r was 0.83.
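
    As a hedged sketch of the model form — synthetic numbers, not the paper's 65 samplings — a log-linear regression of the concentration ratio on numeric trophic level can be fitted and judged by the factor-of-2 criterion used above:

        import numpy as np

        rng = np.random.default_rng(0)
        trophic = rng.uniform(2.0, 4.5, size=65)            # numeric trophic levels
        log_cr = 0.4 * trophic + 0.6 + rng.normal(scale=0.15, size=65)

        # Fit log10(CR) = slope * TL + intercept by least squares.
        slope, intercept = np.polyfit(trophic, log_cr, 1)
        pred = slope * trophic + intercept

        ratio = 10 ** (log_cr - pred)                       # observed / predicted CR
        print("fraction within a factor of 2:",
              np.mean((ratio > 0.5) & (ratio < 2.0)))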

  17. Predicting personal exposure to airborne carbonyls using residential measurements and time/activity data

    Science.gov (United States)

    Liu, Weili; Zhang, Junfeng (Jim); Korn, Leo R.; Zhang, Lin; Weisel, Clifford P.; Turpin, Barbara; Morandi, Maria; Stock, Tom; Colome, Steve

    As a part of the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study, 48 h integrated residential indoor, outdoor, and personal exposure concentrations of 10 carbonyls were simultaneously measured in 234 homes selected from three US cities using the Passive Aldehydes and Ketones Samplers (PAKS). In this paper, we examine the feasibility of using residential indoor concentrations to predict personal exposures to carbonyls. Based on paired t-tests, the means of indoor concentrations were not different from those of personal exposure concentrations for eight out of the 10 measured carbonyls, indicating that indoor carbonyl concentrations, in general, predicted the central tendency of personal exposure concentrations well. In a linear regression model, indoor concentrations explained 47%, 55%, and 65% of personal exposure variance for formaldehyde, acetaldehyde, and hexaldehyde, respectively. The ability of indoor concentrations to explain cross-individual variability in personal exposure was poorer for the other carbonyls, explaining a smaller fraction of the variance in personal exposure concentrations. It was found that activities related to driving a vehicle and performing yard work had significant impacts on personal exposures to a few carbonyls.

  18. Delayed hydride cracking: theoretical model testing to predict cracking velocity

    International Nuclear Information System (INIS)

    Mieza, Juan I.; Vigna, Gustavo L.; Domizzi, Gladys

    2009-01-01

    Pressure tubes from CANDU nuclear reactors, like any other component manufactured from Zr alloys, are prone to delayed hydride cracking (DHC). It is therefore important to be able to predict the cracking velocity over the component lifetime from easily measured parameters, such as hydrogen concentration and mechanical and microstructural properties. Two of the theoretical models reported in the literature for calculating the DHC velocity were chosen and combined; using the appropriate variables, this allowed a comparison with experimental results from samples of Zr-2.5 Nb tubes with different mechanical and structural properties. In addition, velocities measured by other authors in irradiated materials could be reproduced using the model described above. (author)

  19. Thoracolumbar spine model with articulated ribcage for the prediction of dynamic spinal loading.

    Science.gov (United States)

    Ignasiak, Dominika; Dendorfer, Sebastian; Ferguson, Stephen J

    2016-04-11

    Musculoskeletal modeling offers an invaluable insight into spine biomechanics. A better understanding of thoracic spine kinetics is essential for understanding disease processes and developing new prevention and treatment methods. Current models of the thoracic region are not designed for segmental load estimation, or do not include the complex construct of the ribcage, despite its potentially important role in load transmission. In this paper, we describe a numerical musculoskeletal model of the thoracolumbar spine with articulated ribcage, modeled as a system of individual vertebral segments, elastic elements and thoracic muscles, based on a previously established lumbar spine model and data from the literature. The inverse dynamics simulations of the model allow the prediction of spinal loading as well as costal joint kinetics and kinematics. The intradiscal pressure predicted by the model correlated well (R^2 = 0.89) with reported intradiscal pressure measurements, providing a first validation of the model. The inclusion of the ribcage did not affect segmental force predictions when the thoracic spine did not perform motion. During thoracic motion tasks, the ribcage had an important influence on the predicted compressive forces and muscle activation patterns. The compressive forces were reduced by up to 32%, or distributed more evenly between thoracic vertebrae, when compared to the predictions of the model without ribcage, for mild thoracic flexion and hyperextension tasks, respectively. The presented musculoskeletal model provides a tool for investigating thoracic spine loading and load sharing between vertebral column and ribcage during dynamic activities. Further validation for specific applications is still necessary. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach

    Science.gov (United States)

    Tsai, Bi-Huei; Chang, Chih-Huei

    2009-08-01

    Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator must shift with economic prosperity rather than remaining fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based upon a two-stage method. First, this study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, this paper further focuses on macroeconomic factors and applies a rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using the training sample from 1987 to 2004, and their levels of accuracy are compared with the test sample from 2005 to 2007. As for the one-stage prediction models, the model incorporating macroeconomic factors does not perform better than the one without them. This suggests that accuracy is not improved for one-stage models that pool firm-specific and macroeconomic factors together. As for the two-stage models, the negative credit cycle index implies worse economic status during the test period, so the distressed cut-off point is adjusted upward based on this negative credit cycle index. After the two-stage models employ the adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error becomes lower than that of the one-stage models. The two-stage models presented in this paper have incremental usefulness in predicting financial distress.
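
    The discrete-time hazard model of the first stage is conventionally estimated as a logistic regression on firm-period records; the sketch below (a synthetic firm-year panel with hypothetical covariates, not the Taiwanese data) illustrates that estimation trick:

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        rows = []
        for firm in range(300):
            leverage = rng.uniform(0.1, 0.9)         # firm-specific ratio
            for year in range(10):                   # one record per firm-year
                market = rng.normal()                # market factor for that year
                h = 1 / (1 + np.exp(-(-5 + 4 * leverage - 0.5 * market)))
                failed = rng.random() < h
                rows.append((leverage, market, year, int(failed)))
                if failed:
                    break                            # firm leaves the risk set
        panel = pd.DataFrame(rows,
                             columns=["leverage", "market", "year", "distress"])

        # The discrete-time hazard is fit as a logistic regression on the panel.
        hazard = LogisticRegression(max_iter=1000).fit(
            panel[["leverage", "market", "year"]], panel["distress"])
        print("coefficients:", hazard.coef_.round(2))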

  1. Predicting Story Goodness Performance from Cognitive Measures Following Traumatic Brain Injury

    Science.gov (United States)

    Le, Karen; Coelho, Carl; Mozeiko, Jennifer; Krueger, Frank; Grafman, Jordan

    2012-01-01

    Purpose: This study examined the prediction of performance on measures of the Story Goodness Index (SGI; Le, Coelho, Mozeiko, & Grafman, 2011) from executive function (EF) and memory measures following traumatic brain injury (TBI). It was hypothesized that EF and memory measures would significantly predict SGI outcomes. Method: One hundred…

  2. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Science.gov (United States)

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.
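
    As a hedged sketch of the linear versus non-linear comparison — ridge regression standing in for the Bayesian linear models and kernel ridge with an RBF kernel standing in for RKHS regression, on a synthetic marker matrix rather than the DArT genotypes:

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        M = rng.integers(0, 2, size=(300, 500)).astype(float)   # 0/1 marker matrix
        # Phenotype with additive effects plus a marker-by-marker interaction,
        # which a purely linear model cannot capture.
        y = (M[:, :20] @ rng.normal(size=20) + 2.0 * M[:, 0] * M[:, 1]
             + rng.normal(scale=0.5, size=300))

        for name, model in [("linear ridge", Ridge(alpha=1.0)),
                            ("RKHS (kernel ridge)",
                             KernelRidge(kernel="rbf", gamma=0.002))]:
            acc = cross_val_score(model, M, y, cv=5, scoring="r2").mean()
            print(f"{name}: mean CV R^2 = {acc:.2f}")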

  3. Stem thrust prediction model for W-K-M double wedge parallel expanding gate valves

    Energy Technology Data Exchange (ETDEWEB)

    Eldiwany, B.; Alvarez, P.D. [Kalsi Engineering Inc., Sugar Land, TX (United States); Wolfe, K. [Electric Power Research Institute, Palo Alto, CA (United States)

    1996-12-01

    An analytical model for determining the required valve stem thrust during opening and closing strokes of W-K-M parallel expanding gate valves was developed as part of the EPRI Motor-Operated Valve Performance Prediction Methodology (EPRI MOV PPM) Program. The model was validated against measured stem thrust data obtained from in-situ testing of three W-K-M valves. Model predictions show favorable, bounding agreement with the measured data for valves with Stellite 6 hardfacing on the disks and seat rings for water flow in the preferred flow direction (gate downstream). The maximum required thrust to open and to close the valve (excluding wedging and unwedging forces) occurs at a slightly open position and not at the fully closed position. In the nonpreferred flow direction, the model shows that premature wedging can occur during ΔP closure strokes even when the coefficients of friction at different sliding surfaces are within the typical range. This paper summarizes the model description and comparison against test data.

  4. Stem thrust prediction model for W-K-M double wedge parallel expanding gate valves

    International Nuclear Information System (INIS)

    Eldiwany, B.; Alvarez, P.D.; Wolfe, K.

    1996-01-01

    An analytical model for determining the required valve stem thrust during opening and closing strokes of W-K-M parallel expanding gate valves was developed as part of the EPRI Motor-Operated Valve Performance Prediction Methodology (EPRI MOV PPM) Program. The model was validated against measured stem thrust data obtained from in-situ testing of three W-K-M valves. Model predictions show favorable, bounding agreement with the measured data for valves with Stellite 6 hardfacing on the disks and seat rings for water flow in the preferred flow direction (gate downstream). The maximum required thrust to open and to close the valve (excluding wedging and unwedging forces) occurs at a slightly open position and not at the fully closed position. In the nonpreferred flow direction, the model shows that premature wedging can occur during ΔP closure strokes even when the coefficients of friction at different sliding surfaces are within the typical range. This paper summarizes the model description and comparison against test data

  5. Modeling, Measurements, and Fundamental Database Development for Nonequilibrium Hypersonic Aerothermodynamics

    Science.gov (United States)

    Bose, Deepak

    2012-01-01

    The design of entry vehicles requires predictions of the aerothermal environment during the hypersonic phase of their flight trajectories. These predictions are made using computational fluid dynamics (CFD) codes that often rely on physics and chemistry models of nonequilibrium processes. The primary processes of interest are gas-phase chemistry, internal energy relaxation, electronic excitation, nonequilibrium emission and absorption of radiation, and gas-surface interaction leading to surface recession and catalytic recombination. NASA's Hypersonics Project is advancing the state of the art in modeling of nonequilibrium phenomena by making detailed spectroscopic measurements in shock tubes and arcjets, using ab initio quantum mechanical techniques to develop fundamental chemistry and spectroscopic databases, making fundamental measurements of finite-rate gas-surface interactions, and implementing detailed mechanisms in state-of-the-art CFD codes. The development of new models is based on validation with relevant experiments. We will present the latest developments and a roadmap for the technical areas mentioned above.

  6. The Gtr-Model a Universal Framework for Quantum-Like Measurements

    Science.gov (United States)

    Aerts, Diederik; Bianchi, Massimiliano Sassoli De

    We present a very general geometrico-dynamical description of physical or more abstract entities, called the general tension-reduction (GTR) model, in which not only states but also measurement-interactions can be represented, and the associated outcome probabilities calculated. Underlying the model is the hypothesis that indeterminism manifests as a consequence of unavoidable fluctuations in the experimental context, in accordance with the hidden-measurements interpretation of quantum mechanics. When the structure of the state space is Hilbertian, and measurements are of the universal kind, i.e., are the result of an average over all possible ways of selecting an outcome, the GTR-model provides the same predictions as the Born rule, and therefore provides a natural completed version of quantum mechanics. However, when the structure of the state space is non-Hilbertian and/or not all possible ways of selecting an outcome are available to be actualized, the predictions of the model generally differ from the quantum ones, especially when sequential measurements are considered. Some paradigmatic examples will be discussed, taken from physics and human cognition. Particular attention will be given to some known psychological effects, like question order effects and response replicability, which we show are able to generate non-Hilbertian statistics. We also suggest a realistic interpretation of the GTR-model, when applied to human cognition and decision, which we think could become the generally adopted interpretative framework in quantum cognition research.

  7. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of methodology validation. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real-world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine is calculated in real time.

  8. Electrical resistivity measurements to predict abrasion resistance

    Indian Academy of Sciences (India)

    Electrical resistivity measurements to predict abrasion resistance of rock aggregates ... It was seen that correlation coefficients were increased for the rock classes. In addition ...

  9. Advancement in Watershed Modelling Using Dynamic Lateral and Longitudinal Sediment (Dis)connectivity Prediction

    Science.gov (United States)

    Mahoney, D. T.; al Aamery, N. M. H.; Fox, J.

    2017-12-01

    The authors find that sediment (dis)connectivity has seldom taken precedence within watershed models, and the present study advances this modeling framework and applies the modeling within a bedrock-controlled system. Sediment (dis)connectivity, defined as the detachment and transport of sediment from source to sink between geomorphic zones, is a major control on sediment transport. Given the availability of high resolution geospatial data, coupling sediment connectivity concepts within sediment prediction models offers an approach to simulate sediment sources and pathways within a watershed's sediment cascade. Bedrock controlled catchments are potentially unique due to the presence of rock outcrops causing longitudinal impedance to sediment transport pathways in turn impacting the longitudinal distribution of the energy gradient responsible for conveying sediment. Therefore, the authors were motivated by the need to formulate a sediment transport model that couples sediment (dis)connectivity knowledge to predict sediment flux for bedrock controlled catchments. A watershed-scale sediment transport model was formulated that incorporates sediment (dis)connectivity knowledge collected via field reconnaissance and predicts sediment flux through coupling with the Partheniades equation and sediment continuity model. Sediment (dis)connectivity was formulated by coupling probabilistic upland lateral connectivity prediction with instream longitudinal connectivity assessments via discretization of fluid and sediment pathways. Flux predictions from the upland lateral connectivity model served as an input to the instream longitudinal connectivity model. Disconnectivity in the instream model was simulated via the discretization of stream reaches due to barriers such as bedrock outcroppings and man-made check dams. The model was tested for a bedrock controlled catchment in Kentucky, USA for which extensive historic water and sediment flux data was available. Predicted sediment

  10. Cognitive Models of Risky Choice: Parameter Stability and Predictive Accuracy of Prospect Theory

    Science.gov (United States)

    Glockner, Andreas; Pachur, Thorsten

    2012-01-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are…

  11. TACD: a transportable ant colony discrimination model for corporate bankruptcy prediction

    Science.gov (United States)

    Lalbakhsh, Pooia; Chen, Yi-Ping Phoebe

    2017-05-01

    This paper presents a transportable ant colony discrimination strategy (TACD) to predict corporate bankruptcy, a topic of vital importance that is attracting increasing interest in the field of economics. The proposed algorithm uses financial ratios to build a binary prediction model for companies with the two statuses of bankrupt and non-bankrupt. The algorithm takes advantage of an improved version of continuous ant colony optimisation (CACO) at the core, which is used to create an accurate, simple and understandable linear model for discrimination. This also enables the algorithm to work with continuous values, leading to more efficient learning and adaption by avoiding data discretisation. We conduct a comprehensive performance evaluation on three real-world data sets under a stratified cross-validation strategy. In three different scenarios, TACD is compared with 11 other bankruptcy prediction strategies. We also discuss the efficiency of the attribute selection methods used in the experiments. In addition to its simplicity and understandability, statistical significance tests prove the efficiency of TACD against the other prediction algorithms in both measures of AUC and accuracy.
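
    The paper's improved CACO is not reproduced here; as a simplified, hedged sketch in the spirit of continuous ant colony optimisation (ACO_R-like), an archive of good weight vectors guides Gaussian sampling of new linear discriminants on synthetic data:

        import numpy as np
        from sklearn.datasets import make_classification

        X, y = make_classification(n_samples=400, n_features=6, random_state=0)

        def accuracy(w):
            # Linear discriminant: classify by the sign of X @ w[:-1] + w[-1].
            pred = (X @ w[:-1] + w[-1]) > 0
            return np.mean(pred == y)

        rng = np.random.default_rng(0)
        archive = rng.normal(size=(10, 7))                     # solution archive
        for it in range(60):
            scores = np.array([accuracy(w) for w in archive])
            archive = archive[np.argsort(scores)[::-1]]        # best first
            sigma = archive.std(axis=0) + 1e-3                 # archive spread
            ants = (archive[rng.integers(0, 3, size=10)]
                    + rng.normal(size=(10, 7)) * sigma)        # sample near elites
            pool = np.vstack([archive, ants])
            pool_scores = np.array([accuracy(w) for w in pool])
            archive = pool[np.argsort(pool_scores)[::-1][:10]] # keep the 10 best
        print("best training accuracy:", accuracy(archive[0]))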

  12. Mathematical modelling methodologies in predictive food microbiology: a SWOT analysis.

    Science.gov (United States)

    Ferrer, Jordi; Prats, Clara; López, Daniel; Vives-Rego, Josep

    2009-08-31

    Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models by seeing whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity of performing measurements on single individual cells and thanks to the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link unicellular and population levels. Every model is built in response to a particular question and with different aims. Even so, in this research we conducted a SWOT (Strength, Weaknesses, Opportunities and Threats) analysis of the different approaches (population continuous modelling and Individual-based Modelling), which we hope will be helpful for current and future

  13. A Predictive Model for Yeast Cell Polarization in Pheromone Gradients.

    Science.gov (United States)

    Muller, Nicolas; Piel, Matthieu; Calvez, Vincent; Voituriez, Raphaël; Gonçalves-Sá, Joana; Guo, Chin-Lin; Jiang, Xingyu; Murray, Andrew; Meunier, Nicolas

    2016-04-01

    Budding yeast cells exist in two mating types, a and α, which use peptide pheromones to communicate with each other during mating. Mating depends on the ability of cells to polarize up pheromone gradients, but cells also respond to spatially uniform fields of pheromone by polarizing along a single axis. We used quantitative measurements of the response of a cells to α-factor to produce a predictive model of yeast polarization towards a pheromone gradient. We found that cells make a sharp transition between budding cycles and mating induced polarization and that they detect pheromone gradients accurately only over a narrow range of pheromone concentrations corresponding to this transition. We fit all the parameters of the mathematical model by using quantitative data on spontaneous polarization in uniform pheromone concentration. Once these parameters have been computed, and without any further fit, our model quantitatively predicts the yeast cell response to pheromone gradient providing an important step toward understanding how cells communicate with each other.

  14. Modeling and Control of CSTR using Model based Neural Network Predictive Control

    OpenAIRE

    Shrivastava, Piyush

    2012-01-01

    This paper presents a predictive control strategy, based on a neural network model of the plant, applied to a Continuous Stirred Tank Reactor (CSTR). This system is a highly nonlinear process; therefore, a nonlinear predictive method, e.g., neural network predictive control, can be a better match to govern the system dynamics. In the paper, the NN model and the way in which it can be used to predict the behavior of the CSTR process over a certain prediction horizon are described, and some commen...

  15. Consensus models to predict endocrine disruption for all ...

    Science.gov (United States)

    Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). The Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) was intended to be a demonstration of the use of predictive computational models on HTS data, including ToxCast and Tox21 assays, to prioritize a large chemical universe of 32,464 unique structures for one specific molecular target – the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity, and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict the common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1677 compounds provided by U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist and antagonist activity. All predictions were evaluated on ToxCast data and on an exte
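
    A consensus over many classification models can be illustrated with a simple weighted majority vote. The sketch below is generic and assumes made-up model predictions and weights; it is not the actual CERAPP weighting scheme:

      import numpy as np

      # predictions of 5 hypothetical models for 4 chemicals (1 = ER-active)
      preds = np.array([[1, 0, 1, 0],
                        [1, 0, 1, 1],
                        [0, 0, 1, 0],
                        [1, 1, 1, 0],
                        [1, 0, 0, 0]])
      weights = np.array([0.9, 0.8, 0.7, 0.85, 0.6])  # e.g., validation accuracies

      # weighted vote: fraction of (weighted) models calling each chemical active
      score = weights @ preds / weights.sum()
      consensus = (score >= 0.5).astype(int)
      print(score, consensus)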

  16. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated...... principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....

  17. Measurement and modeling of gamma-absorbed doses due to atmospheric releases from Los Alamos Meson Physics Facility

    International Nuclear Information System (INIS)

    Bowen, B.M.; Chen, A.I.; Olsen, W.A.; Van Etten, D.M.

    1985-01-01

    Short-term gamma-absorbed doses were measured by one high-pressure ionization chamber (HPIC) at an azimuth of 12° from the Los Alamos Meson Physics Facility (LAMPF) stack during the January 1 through February 8 operating cycle. Two HPICs were in the field during the September 8 through December 31 operating cycle, one north and the other north-northeast of the LAMPF stack, but they did not provide reliable data. Meteorological data were also measured at both East Gate and LAMPF. Airborne emission data were taken at the stack. Daily model predictions, based on the integration of modeled 15-min periods, were made for the first LAMPF operating cycle and were compared with the measured data. A comparison of the predicted and measured daily gamma doses due to LAMPF emissions is presented. There is very good correlation between measured and predicted values. During the 39-day operating cycle, the model predicted an absorbed dose of 10.3 mrad compared with the 8.8 mrad that was measured, an overprediction of 17%

  18. A Bayesian Spatial Model to Predict Disease Status Using Imaging Data From Various Modalities

    Directory of Open Access Journals (Sweden)

    Wenqiong Xue

    2018-03-01

    Full Text Available Relating disease status to imaging data stands to increase the clinical significance of neuroimaging studies. Many neurological and psychiatric disorders involve complex, systems-level alterations that manifest in functional and structural properties of the brain and possibly other clinical and biologic measures. We propose a Bayesian hierarchical model to predict disease status, which is able to incorporate information from both functional and structural brain imaging scans. We consider a two-stage whole brain parcellation, partitioning the brain into 282 subregions, and our model accounts for correlations between voxels from different brain regions defined by the parcellations. Our approach models the imaging data and uses posterior predictive probabilities to perform prediction. The estimates of our model parameters are based on samples drawn from the joint posterior distribution using Markov Chain Monte Carlo (MCMC) methods. We evaluate our method by examining the prediction accuracy rates based on leave-one-out cross validation, and we employ an importance sampling strategy to reduce the computation time. We conduct both whole-brain and voxel-level prediction and identify the brain regions that are highly associated with the disease based on the voxel-level prediction results. We apply our model to multimodal brain imaging data from a study of Parkinson's disease. We achieve extremely high accuracy, in general, and our model identifies key regions contributing to accurate prediction including caudate, putamen, and fusiform gyrus as well as several sensory system regions.

  19. Preclinical models used for immunogenicity prediction of therapeutic proteins.

    Science.gov (United States)

    Brinks, Vera; Weinbuch, Daniel; Baker, Matthew; Dean, Yann; Stas, Philippe; Kostense, Stefan; Rup, Bonita; Jiskoot, Wim

    2013-07-01

    All therapeutic proteins are potentially immunogenic. Antibodies formed against these drugs can decrease efficacy, leading to drastically increased therapeutic costs and in rare cases to serious and sometimes life threatening side-effects. Many efforts are therefore undertaken to develop therapeutic proteins with minimal immunogenicity. For this, immunogenicity prediction of candidate drugs during early drug development is essential. Several in silico, in vitro and in vivo models are used to predict immunogenicity of drug leads, to modify potentially immunogenic properties and to continue development of drug candidates with expected low immunogenicity. Despite the extensive use of these predictive models, their actual predictive value varies. Important reasons for this uncertainty are the limited/insufficient knowledge on the immune mechanisms underlying immunogenicity of therapeutic proteins, the fact that different predictive models explore different components of the immune system and the lack of an integrated clinical validation. In this review, we discuss the predictive models in use, summarize aspects of immunogenicity that these models predict and explore the merits and the limitations of each of the models.

  20. Aquatic Exposure Predictions of Insecticide Field Concentrations Using a Multimedia Mass-Balance Model.

    Science.gov (United States)

    Knäbel, Anja; Scheringer, Martin; Stehle, Sebastian; Schulz, Ralf

    2016-04-05

    Highly complex process-driven mechanistic fate and transport models and multimedia mass balance models can be used for the exposure prediction of pesticides in different environmental compartments. Generally, both types of models differ in spatial and temporal resolution. Process-driven mechanistic fate models are very complex, and calculations are time-intensive. This type of model is currently used within the European regulatory pesticide registration (FOCUS). Multimedia mass-balance models require fewer input parameters to calculate concentration ranges and the partitioning between different environmental media. In this study, we used the fugacity-based small-region model (SRM) to calculate predicted environmental concentrations (PEC) for 466 cases of insecticide field concentrations measured in European surface waters. We were able to show that the PECs of the multimedia model are more protective in comparison to FOCUS. In addition, our results show that the multimedia model results have a higher predictive power to simulate varying field concentrations at a higher level of field relevance. The adaptation of the model scenario to actual field conditions suggests that the performance of the SRM increases when worst-case conditions are replaced by real field data. Therefore, this study shows that a less complex modeling approach than that used in the regulatory risk assessment exhibits a higher level of protectiveness and predictiveness and that there is a need to develop and evaluate new ecologically relevant scenarios in the context of pesticide exposure modeling.
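
    The SRM itself is more detailed, but the core bookkeeping of a fugacity-based multimedia mass-balance model can be sketched as a Level I equilibrium calculation; all compartment volumes and Z values below are placeholders, not the study's parameters:

      # Level I fugacity model: at equilibrium a single fugacity f applies to
      # all compartments, so M_total = f * sum(V_i * Z_i).
      volumes = {"air": 6e9, "water": 7e6, "soil": 4.5e4, "sediment": 2.1e4}  # m3
      z_values = {"air": 4e-4, "water": 0.1, "soil": 5.0, "sediment": 10.0}   # mol/(m3*Pa)

      m_total = 100.0  # mol of insecticide released into the region

      f = m_total / sum(volumes[c] * z_values[c] for c in volumes)  # fugacity, Pa
      for c in volumes:
          conc = z_values[c] * f            # equilibrium concentration, mol/m3
          print(f"{c:8s} {conc:.3e} mol/m3")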

  1. Kalman Filter or VAR Models to Predict Unemployment Rate in Romania?

    Directory of Open Access Journals (Sweden)

    Simionescu Mihaela

    2015-06-01

    Full Text Available This paper brings to light an economic problem that frequently appears in practice: For the same variable, several alternative forecasts are proposed, yet the decision-making process requires the use of a single prediction. Therefore, a forecast assessment is necessary to select the best prediction. The aim of this research is to propose some strategies for improving the unemployment rate forecast in Romania by conducting a comparative accuracy analysis of unemployment rate forecasts based on two quantitative methods: Kalman filter and vector-auto-regressive (VAR) models. The first method considers the evolution of unemployment components, while the VAR model takes into account the interdependencies between the unemployment rate and the inflation rate. According to the Granger causality test, the inflation rate in the first difference is a cause of the unemployment rate in the first difference, these data sets being stationary. For the unemployment rate forecasts for 2010-2012 in Romania, the VAR models (in all variants of VAR simulations) determined more accurate predictions than the Kalman filter based on two state space models for all accuracy measures. According to the mean absolute scaled error, the dynamic-stochastic simulations used in predicting unemployment based on the VAR model are the most accurate. Another strategy for improving the initial forecasts based on the Kalman filter used the adjusted unemployment data transformed by the application of the Hodrick-Prescott filter. However, the use of VAR models rather than different variants of the Kalman filter methods remains the best strategy for improving the quality of the unemployment rate forecast in Romania. The explanation of these results is related to the fact that the interaction of unemployment with inflation provides useful information for predictions of the evolution of unemployment related to its components (i.e., natural unemployment and the cyclical component).
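
    As a sketch of the VAR side of such a comparison, the snippet below fits a two-variable VAR on synthetic first-differenced unemployment/inflation series with statsmodels; the data and lag order are illustrative, not the paper's:

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.api import VAR

      rng = np.random.default_rng(0)
      # synthetic stand-ins for monthly unemployment and inflation rates
      n = 120
      infl = 5 + rng.normal(0, 0.3, n).cumsum()
      unemp = 7 + 0.4 * np.roll(infl - 5, 1) + rng.normal(0, 0.2, n)
      levels = pd.DataFrame({"unemp": unemp, "infl": infl})
      data = levels.diff().dropna()            # first differences are stationary

      model = VAR(data)
      results = model.fit(2)                   # VAR(2) on the differenced series
      fcast = results.forecast(data.values[-results.k_ar:], steps=8)
      print(fcast[:, 0])                       # 8-step-ahead forecast of d(unemp)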

  2. A Grey NGM(1,1, k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in energy systems, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1, k) model. The self-memory principle overcomes the traditional grey model's weakness of being sensitive to initial values. In this study, the total energy, coal, and electricity consumption of China is adopted for demonstration by using the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with the results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of systematic multitime historical data and catch the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span. PMID:25054174
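
    The full NGM(1,1, k) self-memory coupling model is more involved, but the basic GM(1,1) grey model it extends can be sketched in a few lines; the consumption figures below are made up for illustration:

      import numpy as np

      def gm11(x0, horizon):
          """Basic GM(1,1) grey model; x0 is a 1-D array of positive observations."""
          n = len(x0)
          x1 = np.cumsum(x0)                           # accumulated series (1-AGO)
          z = 0.5 * (x1[1:] + x1[:-1])                 # background values
          B = np.column_stack([-z, np.ones(n - 1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(n + horizon)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          return np.diff(x1_hat, prepend=0.0)          # restore with 1-IAGO

      energy = np.array([28.6, 29.6, 31.4, 36.2, 42.3])  # made-up consumption data
      print(gm11(energy, horizon=3)[-3:])                # 3-step-ahead forecast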

  3. Measurements and Modeling of Conducted EMI in a Buck Chopper

    International Nuclear Information System (INIS)

    Fakhfakh, L.; Abid, S.; Ammous, A.

    2011-01-01

    The rapidly increasing use of power electronic devices (speed control, lighting, heating, automotive, etc.) calls for studies of their electrical, thermal and electromagnetic behavior. In this paper we developed a model to predict the conducted EMI level in a DC/DC converter. Measurements were performed with a network analyzer in order to evaluate the equivalent impedance model of each converter element. The full circuit model is then implemented in the Saber™ simulation tool using time domain simulation followed by fast Fourier transformation (FFT) in the frequency range 150 kHz-100 MHz. A comparison between simulation results and those obtained by measurements is used to validate the developed model. (author)

  4. Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling.

    Science.gov (United States)

    Kawashima, Issaku; Kumano, Hiroaki

    2017-01-01

    Mind-wandering (MW), or task-unrelated thought, has been examined by researchers in an increasing number of articles using models to predict whether subjects are in MW, based on numerous physiological variables. However, these models are not applicable in general situations. Moreover, they output only binary classifications. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during the performance of a Sustained Attention to Response Task, including a thought sampling probe that inquired about the focus of attention. We calculated the power and coherence values, prepared 35 patterns of variable combinations, and applied Support Vector machine Regression (SVR) to them. Finally, we chose four SVR models: two of them non-linear models and the others linear models; two of the four models are composed of a limited number of electrodes to satisfy model usefulness. Examination using the held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single electrode EEG variables. Furthermore, in the limited electrode condition, the non-linear SVR model showed significantly better precision than the linear SVR model. The method proposed in this study helps investigations into MW in various little-examined situations. Further, by measuring MW with a high temporal resolution EEG, unclear aspects of MW, such as time series variation, are expected to be revealed. Furthermore, our suggestion that a few electrodes can also predict MW contributes to the development of neuro-feedback studies.
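
    A minimal sketch of the SVR step, assuming synthetic stand-ins for the EEG power/coherence features and the sampled MW intensity ratings (the real feature construction and model selection are more elaborate):

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      # synthetic stand-ins: rows = thought probes, columns = EEG features
      X = rng.normal(size=(200, 12))
      y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.3, 200)  # MW intensity rating

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_tr, y_tr)
      print("held-out R^2:", round(model.score(X_te, y_te), 3))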

  5. Prediction of Mind-Wandering with Electroencephalogram and Non-linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Issaku Kawashima

    2017-07-01

    Full Text Available Mind-wandering (MW), or task-unrelated thought, has been examined by researchers in an increasing number of articles using models to predict whether subjects are in MW, based on numerous physiological variables. However, these models are not applicable in general situations. Moreover, they output only binary classifications. The current study suggests that the combination of electroencephalogram (EEG) variables and non-linear regression modeling can be a good indicator of MW intensity. We recorded EEGs of 50 subjects during the performance of a Sustained Attention to Response Task, including a thought sampling probe that inquired about the focus of attention. We calculated the power and coherence values, prepared 35 patterns of variable combinations, and applied Support Vector machine Regression (SVR) to them. Finally, we chose four SVR models: two of them non-linear models and the others linear models; two of the four models are composed of a limited number of electrodes to satisfy model usefulness. Examination using the held-out data indicated that all models had robust predictive precision and provided significantly better estimations than a linear regression model using single electrode EEG variables. Furthermore, in the limited electrode condition, the non-linear SVR model showed significantly better precision than the linear SVR model. The method proposed in this study helps investigations into MW in various little-examined situations. Further, by measuring MW with a high temporal resolution EEG, unclear aspects of MW, such as time series variation, are expected to be revealed. Furthermore, our suggestion that a few electrodes can also predict MW contributes to the development of neuro-feedback studies.

  6. EFFECT OF MEASUREMENT ERRORS ON PREDICTED COSMOLOGICAL CONSTRAINTS FROM SHEAR PEAK STATISTICS WITH LARGE SYNOPTIC SURVEY TELESCOPE

    Energy Technology Data Exchange (ETDEWEB)

    Bard, D.; Chang, C.; Kahn, S. M.; Gilmore, K.; Marshall, S. [KIPAC, Stanford University, 452 Lomita Mall, Stanford, CA 94309 (United States); Kratochvil, J. M.; Huffenberger, K. M. [Department of Physics, University of Miami, Coral Gables, FL 33124 (United States); May, M. [Physics Department, Brookhaven National Laboratory, Upton, NY 11973 (United States); AlSayyad, Y.; Connolly, A.; Gibson, R. R.; Jones, L.; Krughoff, S. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Ahmad, Z.; Bankert, J.; Grace, E.; Hannel, M.; Lorenz, S. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Haiman, Z.; Jernigan, J. G., E-mail: djbard@slac.stanford.edu [Department of Astronomy and Astrophysics, Columbia University, New York, NY 10027 (United States); and others

    2013-09-01

    We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST Image Simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.

  7. Compressor Part I: Measurement and Design Modeling

    Directory of Open Access Journals (Sweden)

    Thomas W. Bein

    1999-01-01

    The method used to design the 125-ton compressor is first reviewed and some related performance curves are predicted based on a quasi-3D method. In addition to an overall performance measurement, a series of instruments were installed on the compressor to identify where the measured performance differs from the predicted performance. The measurement techniques for providing the diagnostic flow parameters are also described briefly. Part II of this paper provides predictions of flow details in the areas of the compressor where there were differences between the measured and predicted performance.

  8. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using...... a wind speed distribution whose parameters are known or estimated, the parameters are considered as random whose variations are according to probability distributions. The Bayesian predictive model for a Rayleigh which only has a single model scale parameter has been proposed. Also closed-form posterior...... and predictive inferences under different reasonable choices of prior distribution in sensitivity analysis have been presented....
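
    One way to see what a Bayesian predictive model for Rayleigh wind-speed data looks like: with theta = sigma^2, an inverse-gamma prior is conjugate to the Rayleigh likelihood, and the predictive density can be approximated by Monte Carlo. The prior hyperparameters and data below are arbitrary illustrations, not the paper's setup:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      speeds = stats.rayleigh.rvs(scale=8.0, size=200, random_state=rng)  # m/s

      # Rayleigh likelihood with theta = sigma^2; inverse-gamma prior IG(a0, b0)
      # is conjugate: the posterior is IG(a0 + n, b0 + sum(x^2)/2).
      a0, b0 = 2.0, 50.0
      a_n = a0 + speeds.size
      b_n = b0 + 0.5 * np.sum(speeds ** 2)

      # Bayesian predictive density at new wind speeds, by Monte Carlo over theta
      theta = stats.invgamma.rvs(a_n, scale=b_n, size=5000, random_state=rng)
      x_new = np.linspace(0.5, 25, 50)
      dens = x_new[:, None] / theta * np.exp(-x_new[:, None] ** 2 / (2 * theta))
      pred = dens.mean(axis=1)
      print(x_new[np.argmax(pred)])   # mode of the predictive distribution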

  9. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...
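
    A minimal sketch of simulating such an SDE with the Euler-Maruyama scheme; the drift, diffusion, and parameter values are illustrative stand-ins, not a fitted PK/PD model:

      import numpy as np

      def euler_maruyama(f, g, x0, t_end, dt, rng):
          """Simulate dX = f(X) dt + g(X) dW on [0, t_end] with step dt."""
          n = int(t_end / dt)
          x = np.empty(n + 1)
          x[0] = x0
          for k in range(n):
              dw = rng.normal(0.0, np.sqrt(dt))
              x[k + 1] = x[k] + f(x[k]) * dt + g(x[k]) * dw
          return x

      rng = np.random.default_rng(3)
      # one-compartment elimination with multiplicative system noise (illustrative)
      path = euler_maruyama(f=lambda c: -0.3 * c, g=lambda c: 0.1 * c,
                            x0=10.0, t_end=24.0, dt=0.01, rng=rng)
      print(path[::400][:7])   # concentration every 4 h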

  10. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and model selection for the current period provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model prediction framework for prediction of solar radiation is proposed. The framework starts with the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying patterns, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. Hence a procedure for pattern identification is developed to identify the proper pattern that fits the current period. Based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction result of the proposed framework is then compared to other techniques. It is shown that the proposed framework provides superior performance as compared to others
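
    A toy version of the cluster-then-predict idea, assuming a synthetic series, K-means for pattern grouping, and a linear model per cluster; the paper's actual clustering and per-cluster models may differ:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(4)
      series = np.sin(np.linspace(0, 60, 1200)) + 0.1 * rng.normal(size=1200)

      # segment the series into subsequences: 24 lagged values -> next value
      w = 24
      X = np.array([series[i:i + w] for i in range(len(series) - w)])
      y = series[w:]

      km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
      models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
                for c in range(3)}

      current = X[-1].reshape(1, -1)
      c = km.predict(current)[0]              # pattern identification step
      print(models[c].predict(current))       # next-step radiation estimate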

  11. Revised predictive equations for salt intrusion modelling in estuaries

    NARCIS (Netherlands)

    Gisen, J.I.A.; Savenije, H.H.G.; Nijzink, R.C.

    2015-01-01

    For one-dimensional salt intrusion models to be predictive, we need predictive equations to link model parameters to observable hydraulic and geometric variables. The one-dimensional model of Savenije (1993b) made use of predictive equations for the Van der Burgh coefficient K and the dispersion

  12. A Wavelet Neural Network Optimal Control Model for Traffic-Flow Prediction in Intelligent Transport Systems

    Science.gov (United States)

    Huang, Darong; Bai, Xing-Rong

    Based on wavelet transform and neural network theory, a traffic-flow prediction model for use in the optimal control of intelligent transport systems is constructed. First, the scale and wavelet coefficients are extracted from the online measured raw traffic-flow data via the wavelet transform. Secondly, an Artificial Neural Network model for traffic-flow prediction is constructed and trained using the coefficient sequences as inputs and the raw data as outputs. Simultaneously, we design the operating principle of the optimal control system for the traffic-flow forecasting model, the network topology, and the data transmission model. Finally, a simulated example shows that the technique is effective and accurate. The theoretical results indicate that the wavelet neural network prediction model and its algorithms have broad prospects for practical application.
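
    A rough sketch of the wavelet-features-into-a-neural-network idea using PyWavelets and scikit-learn; the signal, wavelet choice, and network size are all assumptions for illustration, not the paper's configuration:

      import numpy as np
      import pywt
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(5)
      flow = 300 + 80 * np.sin(np.linspace(0, 20, 512)) + rng.normal(0, 10, 512)

      # sliding windows of raw flow -> wavelet coefficients -> next-interval flow
      w = 32
      X, y = [], []
      for i in range(len(flow) - w):
          coeffs = pywt.wavedec(flow[i:i + w], "db2", level=2)  # scale + detail
          X.append(np.concatenate(coeffs))
          y.append(flow[i + w])
      X, y = np.array(X), np.array(y)

      nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X[:-50], y[:-50])
      print("held-out R^2:", round(nn.score(X[-50:], y[-50:]), 2))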

  13. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that consider each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits, along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop the no-show models using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined based on minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed based on the number of available past appointments. Simulation was employed to test the effectiveness of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the curve of the receiver operating characteristic gradually improved as more appointment history was included, until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in the cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to actually implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.
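
    A minimal sketch of such a model: logistic regression on synthetic appointment data in which the last few appointment statuses enter as time-dependent predictors. All coefficients and the 0.25 threshold are made up, not estimated from the paper's data:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(6)
      n = 5000
      # each row: lead time (days), age, and the status of the last 3 visits
      # (1 = no-show), a time-dependent component of the kind used in the paper
      past = rng.binomial(1, 0.2, size=(n, 3))
      lead = rng.integers(1, 60, n)
      age = rng.integers(1, 18, n)                 # pediatric clinic
      logit = -2.2 + 0.02 * lead + past @ np.array([0.9, 0.6, 0.4])
      y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      X = np.column_stack([lead, age, past])
      clf = LogisticRegression().fit(X, y)
      p = clf.predict_proba(X)[:, 1]
      threshold = 0.25   # chosen to balance show/no-show misclassification
      print("flagged as likely no-shows:", (p >= threshold).sum())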

  14. Cross-national validation of prognostic models predicting sickness absence and the added value of work environment variables.

    Science.gov (United States)

    Roelen, Corné A M; Stapelfeldt, Christina M; Heymans, Martijn W; van Rhenen, Willem; Labriola, Merete; Nielsen, Claus V; Bültmann, Ute; Jensen, Chris

    2015-06-01

    To validate Dutch prognostic models including age, self-rated health and prior sickness absence (SA) for ability to predict high SA in Danish eldercare. The added value of work environment variables to the models' risk discrimination was also investigated. 2,562 municipal eldercare workers (95% women) participated in the Working in Eldercare Survey. Predictor variables were measured by questionnaire at baseline in 2005. Prognostic models were validated for predictions of high (≥30) SA days and high (≥3) SA episodes retrieved from employer records during 1-year follow-up. The accuracy of predictions was assessed by calibration graphs and the ability of the models to discriminate between high- and low-risk workers was investigated by ROC analysis. The added value of work environment variables was measured with the Integrated Discrimination Improvement (IDI). 1,930 workers had complete data for analysis. The models underestimated the risk of high SA in eldercare workers and the SA episodes model had to be re-calibrated to the Danish data. Discrimination was practically useful for the re-calibrated SA episodes model, but not the SA days model. Physical workload improved the SA days model (IDI = 0.40; 95% CI 0.19-0.60) and psychosocial work factors, particularly the quality of leadership (IDI = 0.70; 95% CI 0.53-0.86), improved the SA episodes model. The prognostic model predicting high SA days showed poor performance even after physical workload was added. The prognostic model predicting high SA episodes could be used to identify high-risk workers, especially when psychosocial work factors are added as predictor variables.

  15. Effects of Test Conditions on APA Rutting and Prediction Modeling for Asphalt Mixtures

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-01-01

    Full Text Available APA rutting tests were conducted for six kinds of asphalt mixtures under air-dry and immersed conditions. The influences of the test conditions, including load, temperature, air voids, and moisture, on APA rutting depth were analyzed by using the grey correlation method, and an APA rutting depth prediction model was established. Results show that the modified asphalt mixtures have larger air-dry to immersed rutting depth ratios, indicating that the modified asphalt mixtures have better antirutting properties and water stability than the matrix asphalt mixtures. The grey correlation degrees of temperature, load, air voids, and immersion conditions on APA rutting depth decrease successively, which means that temperature is the most significant influencing factor. The proposed indoor APA rutting prediction model has good prediction accuracy, and the correlation coefficient between the predicted and the measured rutting depths is 96.3%.

  16. Predicting residential air exchange rates from questionnaires and meteorology: model evaluation in central North Carolina.

    Science.gov (United States)

    Breen, Michael S; Breen, Miyuki; Williams, Ronald W; Schultz, Bradley D

    2010-12-15

    A critical aspect of air pollution exposure models is the estimation of the air exchange rate (AER) of individual homes, where people spend most of their time. The AER, which is the airflow into and out of a building, is a primary mechanism for entry of outdoor air pollutants and removal of indoor source emissions. The mechanistic Lawrence Berkeley Laboratory (LBL) AER model was linked to a leakage area model to predict AER from questionnaires and meteorology. The LBL model was also extended to include natural ventilation (LBLX). Using literature-reported parameter values, AER predictions from LBL and LBLX models were compared to data from 642 daily AER measurements across 31 detached homes in central North Carolina, with corresponding questionnaires and meteorological observations. Data was collected on seven consecutive days during each of four consecutive seasons. For the individual model-predicted and measured AER, the median absolute difference was 43% (0.17 h⁻¹) and 40% (0.17 h⁻¹) for the LBL and LBLX models, respectively. Additionally, a literature-reported empirical scale factor (SF) AER model was evaluated, which showed a median absolute difference of 50% (0.25 h⁻¹). The capability of the LBL, LBLX, and SF models could help reduce the AER uncertainty in air pollution exposure models used to develop exposure metrics for health studies.
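
    The flavor of an LBL-type infiltration calculation can be sketched as follows; this uses the common ASHRAE-style form with stack and wind terms, and the coefficients are typical illustrative values rather than the paper's calibrated ones:

      import math

      def lbl_aer(ela_cm2, volume_m3, t_in_c, t_out_c, wind_ms,
                  c_stack=0.000145, c_wind=0.000104):
          """Sketch of an LBL-style infiltration model: air exchange rate from
          effective leakage area, indoor-outdoor temperature difference (stack
          effect), and wind speed. The coefficients are typical one-story
          values; in practice they depend on height, shielding, and terrain."""
          q_ls = ela_cm2 * math.sqrt(c_stack * abs(t_in_c - t_out_c)
                                     + c_wind * wind_ms ** 2)  # airflow, L/s
          return 3.6 * q_ls / volume_m3                        # air changes per hour

      print(round(lbl_aer(ela_cm2=500, volume_m3=400, t_in_c=21,
                          t_out_c=5, wind_ms=3), 2))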

  17. Output-Feedback Model Predictive Control of a Pasteurization Pilot Plant based on an LPV model

    Science.gov (United States)

    Karimi Pour, Fatemeh; Ocampo-Martinez, Carlos; Puig, Vicenç

    2017-01-01

    This paper presents a model predictive control (MPC) scheme for a pasteurization pilot plant based on an LPV model. Since not all the states are measured, an observer is also designed, which allows implementing an output-feedback MPC scheme. However, the model of the plant is not completely observable when augmented with the disturbance models. In order to solve this problem, the following strategies are used: (i) the whole system is decoupled into two subsystems, and (ii) an inner state-feedback controller is implemented into the MPC control scheme. A real-time example based on the pasteurization pilot plant is simulated as a case study for testing the behavior of the approaches.

  18. Model predictive control using fuzzy decision functions

    NARCIS (Netherlands)

    Kaymak, U.; Costa Sousa, da J.M.

    2001-01-01

    Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the

  19. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  20. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  1. Semi-analytic models for the CANDELS survey: comparison of predictions for intrinsic galaxy properties

    International Nuclear Information System (INIS)

    Lu, Yu; Wechsler, Risa H.; Somerville, Rachel S.; Croton, Darren; Porter, Lauren; Primack, Joel; Moody, Chris; Behroozi, Peter S.; Ferguson, Henry C.; Koo, David C.; Guo, Yicheng; Safarzadeh, Mohammadtaher; White, Catherine E.; Finlator, Kristian; Castellano, Marco; Sommariva, Veronica

    2014-01-01

    We compare the predictions of three independently developed semi-analytic galaxy formation models (SAMs) that are being used to aid in the interpretation of results from the CANDELS survey. These models are each applied to the same set of halo merger trees extracted from the 'Bolshoi' high-resolution cosmological N-body simulation and are carefully tuned to match the local galaxy stellar mass function using the powerful method of Bayesian Inference coupled with Markov Chain Monte Carlo or by hand. The comparisons reveal that in spite of the significantly different parameterizations for star formation and feedback processes, the three models yield qualitatively similar predictions for the assembly histories of galaxy stellar mass and star formation over cosmic time. Comparing SAM predictions with existing estimates of the stellar mass function from z = 0-8, we show that the SAMs generally require strong outflows to suppress star formation in low-mass halos to match the present-day stellar mass function, as is the present common wisdom. However, all of the models considered produce predictions for the star formation rates (SFRs) and metallicities of low-mass galaxies that are inconsistent with existing data. The predictions for metallicity-stellar mass relations and their evolution clearly diverge between the models. We suggest that large differences in the metallicity relations and small differences in the stellar mass assembly histories of model galaxies stem from different assumptions for the outflow mass-loading factor produced by feedback. Importantly, while more accurate observational measurements for stellar mass, SFR and metallicity of galaxies at 1 < z < 5 will discriminate between models, the discrepancies between the constrained models and existing data of these observables have already revealed challenging problems in understanding star formation and its feedback in galaxy formation. The three sets of models are being used to construct catalogs

  2. Semi-analytic models for the CANDELS survey: comparison of predictions for intrinsic galaxy properties

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Yu; Wechsler, Risa H. [Kavli Institute for Particle Astrophysics and Cosmology, Physics Department, and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Somerville, Rachel S. [Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States); Croton, Darren [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, P.O. Box 218, Hawthorn, VIC 3122 (Australia); Porter, Lauren; Primack, Joel; Moody, Chris [Department of Physics, University of California at Santa Cruz, Santa Cruz, CA 95064 (United States); Behroozi, Peter S.; Ferguson, Henry C. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Koo, David C.; Guo, Yicheng [UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Safarzadeh, Mohammadtaher; White, Catherine E. [Department of Physics and Astronomy, The Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Finlator, Kristian [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen (Denmark); Castellano, Marco; Sommariva, Veronica, E-mail: luyu@stanford.edu, E-mail: rwechsler@stanford.edu [INAF-Osservatorio Astronomico di Roma, via Frascati 33, I-00040 Monteporzio (Italy)

    2014-11-10

    We compare the predictions of three independently developed semi-analytic galaxy formation models (SAMs) that are being used to aid in the interpretation of results from the CANDELS survey. These models are each applied to the same set of halo merger trees extracted from the 'Bolshoi' high-resolution cosmological N-body simulation and are carefully tuned to match the local galaxy stellar mass function using the powerful method of Bayesian Inference coupled with Markov Chain Monte Carlo or by hand. The comparisons reveal that in spite of the significantly different parameterizations for star formation and feedback processes, the three models yield qualitatively similar predictions for the assembly histories of galaxy stellar mass and star formation over cosmic time. Comparing SAM predictions with existing estimates of the stellar mass function from z = 0-8, we show that the SAMs generally require strong outflows to suppress star formation in low-mass halos to match the present-day stellar mass function, as is the present common wisdom. However, all of the models considered produce predictions for the star formation rates (SFRs) and metallicities of low-mass galaxies that are inconsistent with existing data. The predictions for metallicity-stellar mass relations and their evolution clearly diverge between the models. We suggest that large differences in the metallicity relations and small differences in the stellar mass assembly histories of model galaxies stem from different assumptions for the outflow mass-loading factor produced by feedback. Importantly, while more accurate observational measurements for stellar mass, SFR and metallicity of galaxies at 1 < z < 5 will discriminate between models, the discrepancies between the constrained models and existing data of these observables have already revealed challenging problems in understanding star formation and its feedback in galaxy formation. The three sets of models are being used to construct catalogs

  3. Numerical prediction of rose growth

    NARCIS (Netherlands)

    Bernsen, E.; Bokhove, Onno; van der Sar, D.M.

    2006-01-01

    A new mathematical model is presented for the prediction of rose growth in a greenhouse. Given the measured ambient environmental conditions, the model consists of a local photosynthesis model, predicting the photosynthesis per unit leaf area, coupled to a global greenhouse model, which predicts the

  4. Models for Strength Prediction of High-Porosity Cast-In-Situ Foamed Concrete

    Directory of Open Access Journals (Sweden)

    Wenhui Zhao

    2018-01-01

    Full Text Available A study was undertaken to develop a prediction model of compressive strength for three types of high-porosity cast-in-situ foamed concrete (cement mix, cement-fly ash mix, and cement-sand mix) with dry densities of less than 700 kg/m³. The model is an extension of Balshin's model and takes into account the hydration ratio of the raw materials, in which the water/cement ratio was a constant for the entire construction period for a certain casting density. The results show that the measured porosity is slightly lower than the theoretical porosity due to a few inaccessible pores. The compressive strength increases exponentially with the increase in the ratio of the dry density to the solid density and increases with the curing time following the composite function A2(ln t)^B2 for all three types of foamed concrete. Based on the results that the compressive strength changes with the porosity and the curing time, a prediction model taking into account the mix constitution, curing time, and porosity is developed. A simple prediction model is put forward when no experimental data are available.
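
    A sketch of a Balshin-type strength-porosity relation with an illustrative logarithmic curing-time gain; all parameters (sigma0, n, a) are made up for demonstration, not the fitted values from the study:

      import numpy as np

      def balshin_strength(porosity, t_days, sigma0=40.0, n=3.0, a=0.2):
          """Balshin-type relation sigma = sigma0 * (1 - p)^n, scaled by an
          illustrative logarithmic curing-time factor (parameters are made up)."""
          return sigma0 * (1.0 - porosity) ** n * (1.0 + a * np.log(t_days))

      p = np.array([0.60, 0.70, 0.80])        # high-porosity foamed concrete
      for t in (7, 28):
          print(t, "days:", np.round(balshin_strength(p, t), 2), "MPa")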

  5. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
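
    The Ogata-Banks data model used above has a simple closed form that is easy to code; the velocity and dispersion values below are placeholders, not the site's parameters:

      import numpy as np
      from scipy.special import erfc

      def ogata_banks(x, t, v, D, c0=1.0):
          """1-D advective-dispersive transport under steady flow (Ogata-Banks
          solution) with a constant concentration c0 maintained at x = 0."""
          a = (x - v * t) / (2.0 * np.sqrt(D * t))
          b = (x + v * t) / (2.0 * np.sqrt(D * t))
          return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

      # illustrative parameters: velocity 0.5 m/day, dispersion 0.1 m2/day
      x = np.linspace(0.1, 30, 60)
      print(ogata_banks(x, t=20.0, v=0.5, D=0.1)[:5])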

  6. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Measure of functional independence dominates discharge outcome prediction after inpatient rehabilitation for stroke.

    Science.gov (United States)

    Brown, Allen W; Therneau, Terry M; Schultz, Billie A; Niewczyk, Paulette M; Granger, Carl V

    2015-04-01

    Identifying clinical data acquired at inpatient rehabilitation admission for stroke that accurately predict key outcomes at discharge could inform the development of customized plans of care to achieve favorable outcomes. The purpose of this analysis was to use a large comprehensive national data set to consider a wide range of clinical elements known at admission to identify those that predict key outcomes at rehabilitation discharge. Sample data were obtained from the Uniform Data System for Medical Rehabilitation data set with the diagnosis of stroke for the years 2005 through 2007. This data set includes demographic, administrative, and medical variables collected at admission and discharge and uses the FIM (functional independence measure) instrument to assess functional independence. Primary outcomes of interest were functional independence measure gain, length of stay, and discharge to home. The sample included 148,367 people (75% white; mean age, 70.6±13.1 years; 97% with ischemic stroke) admitted to inpatient rehabilitation a mean of 8.2±12 days after symptom onset. The total functional independence measure score, the functional independence measure motor subscore, and the case-mix group were equally the strongest predictors for any of the primary outcomes. The most clinically relevant 3-variable model used the functional independence measure motor subscore, age, and walking distance at admission (r² = 0.107). No important additional effect for any other variable was detected when added to this model. This analysis shows that a measure of functional independence in motor performance and age at rehabilitation hospital admission for stroke are predominant predictors of outcome at discharge in a uniquely large US national data set. © 2015 American Heart Association, Inc.

  8. Predicting long-term organic carbon dynamics in organically amended soils using the CQESTR model

    Energy Technology Data Exchange (ETDEWEB)

    Plaza, Cesar; Polo, Alfredo [Consejo Superior de Investigaciones Cientificas, Madrid (Spain). Inst. de Ciencias Agrarias; Gollany, Hero T. [Columbia Plateau Conservation Research Center, Pendleton, OR (United States). USDA-ARS; Baldoni, Guido; Ciavatta, Claudio [Bologna Univ. (Italy). Dept. of Agroenvironmental Sciences and Technologies

    2012-04-15

    Purpose: The CQESTR model is a process-based C model recently developed to simulate soil organic matter (SOM) dynamics and uses readily available or easily measurable input parameters. The current version of CQESTR (v. 2.0) has been validated successfully with a number of datasets from agricultural sites in North America but still needs to be tested in other geographic areas and soil types under diverse organic management systems. Materials and methods: We evaluated the predictive performance of CQESTR to simulate long-term (34 years) soil organic C (SOC) changes in a SOM-depleted European soil either unamended or amended with solid manure, liquid manure, or crop residue. Results and discussion: Measured SOC levels declined over the study period in the unamended soil, remained constant in the soil amended with crop residues, and tended to increase in the soils amended with manure, especially with solid manure. Linear regression analysis of measured SOC contents and CQESTR predictions resulted in a correlation coefficient of 0.626 (P < 0.001) and a slope and an intercept not significantly different from 1 and 0, respectively (95% confidence level). The mean squared deviation and root mean square error were relatively small. Simulated values fell within the 95% confidence interval of the measured SOC, and predicted errors were mainly associated with data scattering. Conclusions: The CQESTR model was shown to predict, with a reasonable degree of accuracy, the organic C dynamics in the soils examined. The CQESTR performance, however, could be improved by adding an additional parameter to differentiate between pre-decomposed organic amendments with varying degrees of stability. (orig.)

  9. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
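
    A toy version of the proposed bootstrap procedure, assuming a made-up dose-response data set and a single-parameter exponential outcome model standing in for the paper's salivary-function model:

      import numpy as np

      rng = np.random.default_rng(7)
      # illustrative data: mean dose to parotid (Gy) vs relative salivary output
      dose = rng.uniform(5, 60, 80)
      flow = np.exp(-0.05 * dose) + rng.normal(0, 0.08, 80)

      def fit_exp(d, f):
          # fit f = exp(-k d) by least squares on log-linearized positive values
          ok = f > 0.05
          return -np.polyfit(d[ok], np.log(f[ok]), 1)[0]

      # bootstrap the fit, then histogram the model prediction for one plan
      ks = np.array([fit_exp(dose[i], flow[i])
                     for i in (rng.integers(0, 80, 80) for _ in range(2000))])
      resid_sd = 0.08    # residual 'noise' estimated from the original fit
      pred = np.exp(-ks * 35.0) + rng.normal(0, resid_sd, ks.size)  # plan: 35 Gy
      print(np.percentile(pred, [2.5, 50, 97.5]))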

  10. Verification of a 1-dimensional model for predicting shallow infiltration at Yucca Mountain

    International Nuclear Information System (INIS)

    Hevesi, J.A.; Flint, A.L.; Flint, L.E.

    1994-01-01

    A characterization of net infiltration rates is needed for site-scale evaluation of groundwater flow at Yucca Mountain, Nevada. Shallow infiltration caused by precipitation may be a potential source of net infiltration. A 1-dimensional finite difference model of shallow infiltration with a moisture-dependent evapotranspiration function and a hypothetical root-zone was calibrated and verified using measured water content profiles, measured precipitation, and estimated potential evapotranspiration. Monthly water content profiles obtained from January 1990 through October 1993 were measured by geophysical logging of 3 boreholes located in the alluvium channel of Pagany Wash on Yucca Mountain. The profiles indicated seasonal wetting and drying of the alluvium in response to winter season precipitation and summer season evapotranspiration above a depth of 2.5 meters. A gradual drying trend below a depth of 2.5 meters was interpreted as long-term redistribution and/or evapotranspiration following a deep infiltration event caused by runoff in Pagany Wash during 1984. An initial model, calibrated using the 1990 to 1992 record, did not provide a satisfactory prediction of water content profiles measured in 1993 following a relatively wet winter season. A re-calibrated model using a modified, seasonally-dependent evapotranspiration function provided an improved fit to the total record. The new model provided a satisfactory verification using water content changes measured at a distance of 6 meters from the calibration site, but was less satisfactory in predicting changes at a distance of 18 meters

  11. Verification of a 1-dimensional model for predicting shallow infiltration at Yucca Mountain

    International Nuclear Information System (INIS)

    Hevesi, J.; Flint, A.L.; Flint, L.E.

    1994-01-01

    A characterization of net infiltration rates is needed for site-scale evaluation of groundwater flow at Yucca Mountain, Nevada. Shallow infiltration caused by precipitation may be a potential source of net infiltration. A 1-dimensional finite difference model of shallow infiltration with a moisture-dependent evapotranspiration function and a hypothetical root-zone was calibrated and verified using measured water content profiles, measured precipitation, and estimated potential evapotranspiration. Monthly water content profiles obtained from January 1990 through October 1993 were measured by geophysical logging of 3 boreholes located in the alluvium channel of Pagany Wash on Yucca Mountain. The profiles indicated seasonal wetting and drying of the alluvium in response to winter season precipitation and summer season evapotranspiration above a depth of 2.5 meters. A gradual drying trend below a depth of 2.5 meters was interpreted as long-term redistribution and/or evapotranspiration following a deep infiltration event caused by runoff in Pagany Wash during 1984. An initial model, calibrated using the 1990 to 1992 record, did not provide a satisfactory prediction of water content profiles measured in 1993 following a relatively wet winter season. A re-calibrated model using a modified, seasonally-dependent evapotranspiration function provided an improved fit to the total record. The new model provided a satisfactory verification using water content changes measured at a distance of 6 meters from the calibration site, but was less satisfactory in predicting changes at a distance of 18 meters

  12. Model predictions of metal speciation in freshwaters compared to measurements by in situ techniques.

    NARCIS (Netherlands)

    Unsworth, Emily R; Warnken, Kent W; Zhang, Hao; Davison, William; Black, Frank; Buffle, Jacques; Cao, Jun; Cleven, Rob; Galceran, Josep; Gunkel, Peggy; Kalis, Erwin; Kistler, David; Leeuwen, Herman P van; Martin, Michel; Noël, Stéphane; Nur, Yusuf; Odzak, Niksa; Puy, Jaume; Riemsdijk, Willem van; Sigg, Laura; Temminghoff, Erwin; Tercier-Waeber, Mary-Lou; Toepperwien, Stefanie; Town, Raewyn M; Weng, Liping; Xue, Hanbin

    2006-01-01

    Measurements of trace metal species in situ in a softwater river, a hardwater lake, and a hardwater stream were compared to the equilibrium distribution of species calculated using two models: WHAM 6, incorporating humic ion-binding model VI, and Visual MINTEQ, incorporating NICA-Donnan. Diffusive

  13. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms, which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty-mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in the service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model and building towards a better understanding of psychosis. © The Author(s) 2016.
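
    The precision-weighted updating the authors describe can be written in a standard Bayesian form. The equation below is a generic sketch consistent with their account; the notation is chosen here and is not taken from the paper.

```latex
% Precision-weighted belief update (requires amsmath): a standard Bayesian
% form consistent with the account above; notation is illustrative.
\[
  \mu_{\mathrm{post}}
    = \mu_{\mathrm{prior}}
    + \frac{\pi_{\varepsilon}}{\pi_{\varepsilon} + \pi_{\mathrm{prior}}}
      \left( x - \mu_{\mathrm{prior}} \right)
\]
% x: sensory input; pi: precision (inverse variance);
% (x - mu_prior): the prediction error.
% Overweighting pi_eps lets noisy input drive belief updates (delusion-like);
% overweighting pi_prior lets expectations suppress input (hallucination-like).
```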

  14. Numerical flow simulation and efficiency prediction for axial turbines by advanced turbulence models

    International Nuclear Information System (INIS)

    Jošt, D; Škerlavaj, A; Lipej, A

    2012-01-01

    Numerical prediction of the efficiency of a 6-blade Kaplan turbine is presented. At first, the results of steady-state analyses performed with different turbulence models for different operating regimes are compared to the measurements. For small and optimal runner blade angles the efficiency was predicted quite accurately, but for the maximal blade angle the discrepancy between calculated and measured values was quite large. With transient analysis, especially when the Scale Adaptive Simulation Shear Stress Transport (SAS SST) model with zonal Large Eddy Simulation (ZLES) in the draft tube was used, the efficiency prediction was significantly improved. The improvement was present at all operating points, but it was largest for maximal discharge. The reason was better flow simulation in the draft tube. Details of the turbulent structure in the draft tube obtained by SST, SAS SST, and SAS SST with ZLES are illustrated in order to explain the differences in flow energy losses obtained by the different turbulence models.

  15. Numerical flow simulation and efficiency prediction for axial turbines by advanced turbulence models

    Science.gov (United States)

    Jošt, D.; Škerlavaj, A.; Lipej, A.

    2012-11-01

    Numerical prediction of the efficiency of a 6-blade Kaplan turbine is presented. At first, the results of steady-state analyses performed with different turbulence models for different operating regimes are compared to the measurements. For small and optimal runner blade angles the efficiency was predicted quite accurately, but for the maximal blade angle the discrepancy between calculated and measured values was quite large. With transient analysis, especially when the Scale Adaptive Simulation Shear Stress Transport (SAS SST) model with zonal Large Eddy Simulation (ZLES) in the draft tube was used, the efficiency prediction was significantly improved. The improvement was present at all operating points, but it was largest for maximal discharge. The reason was better flow simulation in the draft tube. Details of the turbulent structure in the draft tube obtained by SST, SAS SST, and SAS SST with ZLES are illustrated in order to explain the differences in flow energy losses obtained by the different turbulence models.
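
    For reference, the quantity being predicted in both records is the usual hydraulic efficiency: the ratio of shaft power to available hydraulic power. The sketch below uses placeholder values, not the measured data of the 6-blade Kaplan turbine studied.

```python
# Minimal sketch of the efficiency bookkeeping behind such comparisons.
# All numbers below are hypothetical, not from the paper.

RHO, G = 997.0, 9.81  # water density (kg/m^3), gravitational acceleration (m/s^2)

def hydraulic_efficiency(torque, omega, discharge, head):
    """eta = shaft power / hydraulic power = (T * w) / (rho * g * Q * H)."""
    return (torque * omega) / (RHO * G * discharge * head)

# Hypothetical values for a model-scale Kaplan turbine:
eta = hydraulic_efficiency(torque=1.2e3, omega=52.4, discharge=2.5, head=3.0)
print(f"efficiency = {eta:.3f}")  # ~0.86 for these placeholder values
```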

  16. Predicting ambient aerosol thermal-optical reflectance (TOR) measurements from infrared spectra: extending the predictions to different years and different sites

    Science.gov (United States)

    Reggente, Matteo; Dillner, Ann M.; Takahama, Satoshi

    2016-02-01

    Organic carbon (OC) and elemental carbon (EC) are major components of atmospheric particulate matter (PM), which has been associated with increased morbidity and mortality, climate change, and reduced visibility. Typically, OC and EC concentrations are measured using thermal-optical methods such as thermal-optical reflectance (TOR) from samples collected on quartz filters. In this work, we estimate TOR OC and EC using Fourier transform infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE, Teflon) filters, with partial least squares regression (PLSR) calibrated to TOR OC and EC measurements for a wide range of samples. The proposed method can be integrated with analysis of routinely collected PTFE filter samples that, in addition to OC and EC concentrations, can concurrently provide information regarding the functional group composition of the organic aerosol. We have used the FT-IR absorbance spectra and TOR OC and EC concentrations collected in the Interagency Monitoring of PROtected Visual Environments (IMPROVE) network (USA). We used 526 samples collected in 2011 at seven sites to calibrate the models, and more than 2000 samples collected in 2013 at 17 sites to test the models. Samples from six sites are present in both the calibration and test sets. The calibrations produce accurate predictions both for samples collected at the six sites present in the calibration set (R2 = 0.97 and R2 = 0.95 for OC and EC, respectively) and for samples from 9 of the 11 sites not included in the calibration set (R2 = 0.96 and R2 = 0.91 for OC and EC, respectively). Samples collected at the other two sites require a different calibration model to achieve accurate predictions. We also propose a method to anticipate the prediction error: we calculate the squared Mahalanobis distance in the feature space (scores determined by PLSR) between new spectra and spectra in the calibration set. The squared Mahalanobis distance provides a crude method for assessing the
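
    The calibration and screening steps lend themselves to a compact sketch: a PLSR model maps spectra to TOR concentrations, and the squared Mahalanobis distance of new PLSR scores from the calibration scores flags likely extrapolation. The sketch below uses synthetic data, an assumed component count, and a hypothetical 95th-percentile cutoff; the study's actual settings may differ.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Sketch of the approach described above: PLSR maps FT-IR absorbance spectra
# to TOR OC (or EC), and the squared Mahalanobis distance of a new spectrum's
# PLSR scores from the calibration scores flags likely extrapolation.
# All data here are synthetic placeholders.

rng = np.random.default_rng(0)
X_cal = rng.normal(size=(526, 800))   # calibration spectra (samples x wavenumbers)
y_cal = rng.normal(size=526)          # TOR OC concentrations (placeholder)
X_new = rng.normal(size=(2000, 800))  # later-year / other-site spectra

pls = PLSRegression(n_components=10).fit(X_cal, y_cal)  # component count assumed
y_pred = pls.predict(X_new).ravel()   # predicted TOR OC for the new samples

# Squared Mahalanobis distance in the PLSR score (feature) space.
T_cal, T_new = pls.transform(X_cal), pls.transform(X_new)
cov_inv = np.linalg.inv(np.cov(T_cal, rowvar=False))
d_cal = T_cal - T_cal.mean(axis=0)
d_new = T_new - T_cal.mean(axis=0)
d2_cal = np.einsum('ij,jk,ik->i', d_cal, cov_inv, d_cal)
d2_new = np.einsum('ij,jk,ik->i', d_new, cov_inv, d_new)

# Hypothetical rule: flag new samples far outside the calibration cloud.
flagged = d2_new > np.percentile(d2_cal, 95)
print(f"{flagged.sum()} of {len(d2_new)} new samples flagged as extrapolation")
```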

  17. Predictive modeling of nanoscale domain morphology in solution-processed organic thin films

    Science.gov (United States)

    Schaaf, Cyrus; Jenkins, Michael; Morehouse, Robell; Stanfield, Dane; McDowall, Stephen; Johnson, Brad L.; Patrick, David L.

    2017-09-01

    The electronic and optoelectronic properties of molecular semiconductor thin films are directly linked to their extrinsic nanoscale structural characteristics such as domain size and spatial distributions. In films prepared by common solution-phase deposition techniques such as spin casting and solvent-based printing, morphology is governed by a complex, interrelated set of thermodynamic and kinetic factors that classical models fail to adequately capture, leaving them unable to provide much insight, let alone predictive design guidance for tailoring films with specific nanostructural characteristics. Here we introduce a comprehensive treatment of solution-based film formation enabling quantitative prediction of domain formation rates, coverage, and spacing statistics based on a small number of experimentally measurable parameters. The model combines a mean-field rate equation treatment of monomer aggregation kinetics with classical nucleation theory and a supersaturation-dependent critical nucleus size to solve for the quasi-two-dimensional, temporally and spatially varying monomer concentration, nucleation rate, and other properties. Excellent agreement is observed with measured nucleation densities and interdomain radial distribution functions in polycrystalline tetracene films. Numerical solutions lead to a set of general design rules enabling predictive morphological control in solution-processed molecular crystalline films.
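
    The coupling of mean-field rate equations to a supersaturation-dependent critical nucleus size can be illustrated with a toy integration. All rates, the capture term, and the critical-size law below are hypothetical stand-ins for the paper's treatment, not the published model.

```python
import numpy as np

# Toy mean-field rate-equation sketch of monomer supply, nucleation, and
# capture, in the spirit of the model described above. Every rate constant
# and the critical-size law are hypothetical.

dt, steps = 1e-3, 50000
F = 1.0             # monomer supply rate (e.g., from evaporating solvent)
sigma1 = 1.0        # capture efficiency of existing domains for monomers
n1, N = 0.0, 0.0    # monomer and domain (island) densities

def nucleation_rate(n1):
    """CNT-like rate: barrier-suppressed, with a critical nucleus size that
    shrinks as the monomer density (supersaturation) grows (hypothetical)."""
    if n1 <= 0:
        return 0.0
    i_crit = max(1.0, 5.0 / n1)        # hypothetical critical-size law
    return n1 ** 2 * np.exp(-i_crit)   # attachment-limited prefactor

for _ in range(steps):
    J = nucleation_rate(n1)
    # Supply minus monomers consumed by nucleation (dimer-like simplification)
    # minus monomers captured by existing domains.
    dn1 = F - 2.0 * J - sigma1 * n1 * N
    n1 += dt * dn1
    N += dt * J

print(f"final domain density N = {N:.3f}, monomer density n1 = {n1:.3f}")
```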

  18. Inverse modeling with RZWQM2 to predict water quality

    Science.gov (United States)

    Nolan, Bernard T.; Malone, Robert W.; Ma, Liwang; Green, Christopher T.; Fienen, Michael N.; Jaynes, Dan B.

    2011-01-01

    Composite scaled sensitivities, which reflect the total information provided by the observations for a parameter, indicated that most of the RZWQM2 parameters at the California study site (CA) and Iowa study site (IA) could be reliably estimated by regression. Correlations obtained in the CA case indicated that all model parameters could be uniquely estimated by inverse modeling. Although water content at field capacity was highly correlated with bulk density (−0.94), the correlation is less than the threshold for nonuniqueness (0.95, absolute value basis). Additionally, we used truncated singular value decomposition (SVD) at CA to mitigate potential problems with highly correlated and insensitive parameters. Singular value decomposition estimates linear combinations (eigenvectors) of the original process-model parameters. Parameter confidence intervals (CIs) at CA indicated that parameters were reliably estimated, with the possible exception of an organic pool transfer coefficient (R45), which had a comparatively wide CI. However, the 95% confidence interval for R45 (0.03–0.35) is mostly within the range of values reported for this parameter. Predictive analysis at CA generated confidence intervals that were compared with independently measured annual water flux (groundwater recharge) and median nitrate concentration in a collocated monitoring well as part of model evaluation. Both the observed recharge (42.3 cm yr−1) and nitrate concentration (24.3 mg L−1) were within their respective 90% confidence intervals, indicating that overall model error was within acceptable limits.
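
    The truncated-SVD regularization described here amounts to restricting each Gauss-Newton parameter update to the well-determined combinations of parameters. A minimal sketch, with a synthetic Jacobian standing in for RZWQM2 sensitivities and a hypothetical truncation threshold:

```python
import numpy as np

# Sketch of one truncated-SVD Gauss-Newton update, the regularization
# strategy described above for highly correlated or insensitive parameters.
# The Jacobian and residuals are synthetic stand-ins for RZWQM2 output.

rng = np.random.default_rng(1)
J = rng.normal(size=(40, 8))  # sensitivity (Jacobian) matrix: observations x parameters
r = rng.normal(size=40)       # residuals (observed - simulated)

U, s, Vt = np.linalg.svd(J, full_matrices=False)
k = int(np.sum(s / s[0] > 1e-3))  # keep well-determined directions (threshold assumed)

# Parameter update restricted to the leading k singular directions
# (linear combinations of the original process-model parameters),
# leaving poorly constrained combinations untouched.
dp = Vt[:k].T @ ((U[:, :k].T @ r) / s[:k])
print("truncated-SVD parameter update:", dp)
```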

  19. Development of wavelet-ANN models to predict water quality parameters in Hilo Bay, Pacific Ocean.

    Science.gov (United States)

    Alizadeh, Mohamad Javad; Kavianpour, Mohamad Reza

    2015-09-15

    The main objective of this study is to apply artificial neural network (ANN) and wavelet-neural network (WNN) models for predicting a variety of ocean water quality parameters. In this regard, several water quality parameters in Hilo Bay, Pacific Ocean, are taken into consideration. Different combinations of water quality parameters are applied as input variables to predict daily values of salinity, temperature, and DO, as well as hourly values of DO. The results demonstrate that the WNN models are superior to the ANN models. Also, the hourly models developed for DO prediction outperform the daily models of DO. For the daily models, the most accurate model has an R of 0.96, while for the hourly models R reaches 0.98. Overall, the results show the ability of the models to monitor the ocean parameters when data are missing or when regular measurement and monitoring are impossible. Copyright © 2015 Elsevier Ltd. All rights reserved.
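
    A common WNN construction, and one plausible reading of the models compared here, decomposes each input series with a discrete wavelet transform and feeds the level-wise sub-signals to a feed-forward ANN. The sketch below uses synthetic series; the wavelet ("db4"), decomposition level, and network size are assumptions, not the study's settings.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

# Sketch of a wavelet-neural-network (WNN): each input series is decomposed
# with a discrete wavelet transform and the level-wise sub-signals feed a
# feed-forward ANN. Data and hyperparameters here are synthetic assumptions.

rng = np.random.default_rng(2)
t = np.arange(2000)
salinity = np.sin(2 * np.pi * t / 365) + 0.1 * rng.normal(size=t.size)
temperature = np.cos(2 * np.pi * t / 365) + 0.1 * rng.normal(size=t.size)
do = 8.0 + 0.5 * salinity - 0.3 * temperature + 0.1 * rng.normal(size=t.size)

def wavelet_features(x, wavelet="db4", level=3):
    """Reconstruct each decomposition level as a same-length sub-signal."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    subs = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        subs.append(pywt.waverec(kept, wavelet)[: len(x)])
    return np.column_stack(subs)

X = np.hstack([wavelet_features(salinity), wavelet_features(temperature)])
split = 1500  # chronological train/test split
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:split], do[:split])
print("R^2 on held-out days:", model.score(X[split:], do[split:]))
```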

  20. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements of M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.
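
    The assessment structure is easy to encode. The element names below follow the abstract; scoring the four maturity levels as 0-3 and the example scores themselves are illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative encoding of a PCMM assessment: the six contributing elements
# from the abstract, each scored at one of four maturity levels (0-3 here,
# an assumed numbering). The example scores are hypothetical.

ELEMENTS = (
    "representation and geometric fidelity",
    "physics and material model fidelity",
    "code verification",
    "solution verification",
    "model validation",
    "uncertainty quantification and sensitivity analysis",
)

@dataclass
class PCMMAssessment:
    scores: dict  # element name -> maturity level in {0, 1, 2, 3}

    def table(self):
        """Render the assessment one element per line."""
        return "\n".join(f"{e}: level {self.scores[e]}" for e in ELEMENTS)

assessment = PCMMAssessment({e: 1 for e in ELEMENTS})
assessment.scores["code verification"] = 2  # hypothetical revised score
print(assessment.table())
```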