WorldWideScience

Sample records for model predictions measurements

  1. Regression models for predicting anthropometric measurements of ...

    African Journals Online (AJOL)

    measure anthropometric dimensions to predict difficult-to-measure dimensions required for ergonomic design of school furniture. A total of 143 students aged between 16 and 18 years from eight public secondary schools in Ogbomoso, Nigeria ...

  2. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudo-range and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs, respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An overestimation of IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations.
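
    The bias/spread statistics quoted above are simple to reproduce. A minimal sketch, assuming two co-located arrays of vertical TEC values (all numbers invented):

    ```python
    import numpy as np

    # Hypothetical co-located vertical TEC samples (TECU): gps_tec from
    # IGS global maps, iri_tec from the IRI model at the same times/places.
    gps_tec = np.array([18.2, 25.1, 40.3, 33.7, 22.9])
    iri_tec = np.array([20.0, 27.5, 52.1, 38.0, 25.3])

    diff = iri_tec - gps_tec
    bias = diff.mean()                   # mean bias (TECU)
    sigma = diff.std(ddof=1)             # standard deviation (TECU)
    rms = np.sqrt((diff ** 2).mean())    # RMS discrepancy (TECU)
    ratio = (iri_tec / gps_tec).mean()   # >1 flags IRI over-prediction
    print(f"bias={bias:.1f} sigma={sigma:.1f} rms={rms:.1f} ratio={ratio:.2f}")
    ```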

  3. Model Predictive Control of Wind Turbines using Uncertain LIDAR Measurements

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Soltani, Mohsen; Poulsen, Niels Kjølstad

    2013-01-01

    The problem of Model predictive control (MPC) of wind turbines using uncertain LIDAR (LIght Detection And Ranging) measurements is considered. A nonlinear dynamical model of the wind turbine is obtained. We linearize the obtained nonlinear model for different operating points, which are determined...

  4. Model Predictive Control of Wind Turbines using Uncertain LIDAR Measurements

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Soltani, Mohsen; Poulsen, Niels Kjølstad

    2013-01-01

    The problem of model predictive control (MPC) of wind turbines using uncertain LIDAR (LIght Detection And Ranging) measurements is considered. A nonlinear dynamical model of the wind turbine is obtained, and we linearize it for different operating points, which are determined by the effective wind speed on the rotor disc. We take the wind speed as a scheduling variable. The wind speed is measurable ahead of the turbine using LIDARs; therefore, the scheduling variable is known for the entire prediction horizon. By taking advantage of having future values of the scheduling variable, we simplify state prediction for the MPC. Consequently, the control problem of the nonlinear system is reduced to a quadratic program. We consider uncertainty in the wind propagation time, which is the traveling time of the wind from the LIDAR measurement point to the rotor. An algorithm based...
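
    The scheduling idea above lends itself to a compact sketch: with the wind preview known, a time-varying linear prediction model can be condensed into a quadratic program in the stacked inputs. The sketch below is schematic, not the paper's controller; the linearisation, weights, and numbers are invented, and constraints are omitted so the QP solves in closed form.

    ```python
    import numpy as np

    def linearised_model(v):
        """Hypothetical linearisation of the turbine dynamics at wind speed v."""
        A = np.array([[1.0, 0.1], [-0.02 * v, 0.95]])
        B = np.array([[0.0], [0.05]])
        return A, B

    def mpc_first_input(x0, wind_preview, Q=np.eye(2), R=np.array([[0.1]])):
        """Condense x_{k+1} = A_k x_k + B_k u_k over the preview horizon into
        a QP in the stacked inputs U and solve the unconstrained case."""
        N, n, m = len(wind_preview), 2, 1
        As, Bs = zip(*(linearised_model(v) for v in wind_preview))
        Phi = np.zeros((N * n, n))       # free response: x0 -> future states
        Gam = np.zeros((N * n, N * m))   # forced response: U -> future states
        Ak = np.eye(n)
        for k in range(N):
            Ak = As[k] @ Ak
            Phi[k * n:(k + 1) * n] = Ak
            for j in range(k + 1):
                M = Bs[j]
                for i in range(j + 1, k + 1):
                    M = As[i] @ M
                Gam[k * n:(k + 1) * n, j * m:(j + 1) * m] = M
        Qb, Rb = np.kron(np.eye(N), Q), np.kron(np.eye(N), R)
        H = Gam.T @ Qb @ Gam + Rb        # QP Hessian
        f = Gam.T @ Qb @ Phi @ x0        # QP linear term
        U = np.linalg.solve(H, -f)       # argmin of U'HU + 2 f'U
        return U[:m]                     # receding horizon: apply first input

    u0 = mpc_first_input(np.array([0.1, -0.05]),
                         wind_preview=[12.0, 12.4, 12.8, 13.0])
    print("first control move:", u0)
    ```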

  5. Predicted and measured velocity distribution in a model heat exchanger

    International Nuclear Information System (INIS)

    Rhodes, D.B.; Carlucci, L.N.

    1984-01-01

    This paper presents a comparison between numerical predictions, using the porous media concept, and measurements of the two-dimensional isothermal shell-side velocity distributions in a model heat exchanger. Computations and measurements were done with and without tubes present in the model. The effect of tube-to-baffle leakage was also investigated. The comparison was made to validate certain porous media concepts used in a computer code being developed to predict the detailed shell-side flow in a wide range of shell-and-tube heat exchanger geometries.

  6. Measurements and IRI Model Predictions During the Recent Solar Minimum

    Science.gov (United States)

    Bilitza, Dieter; Brown, Steven A.; Wang, Mathew Y.; Souza, Jonas R.; Roddy, Patrick A.

    2012-01-01

    Cycle 23 was exceptional in that it lasted almost two years longer than its predecessors and in that it ended in an extended minimum period that proved all predictions wrong. Comparisons of the International Reference Ionosphere (IRI) with CHAMP and GRACE in-situ measurements of electron density during the minimum have revealed significant discrepancies at 400-500 km altitude. Our study investigates the causes for these discrepancies with the help of ionosonde and Planar Langmuir Probe (PLP) data from the Communications/Navigation Outage Forecasting System (C/NOFS) satellite. Our C/NOFS comparisons confirm the earlier CHAMP and GRACE results. But the ionosonde measurements of the F-peak plasma frequency (foF2) show generally good agreement throughout the whole solar cycle. At mid-latitude stations yearly averages of the data-model difference are within 10%, and at low-latitude stations within 20%. The 60-70% differences found at 400-500 km altitude are not seen at the F peak. We will discuss how these seemingly contradictory results from the ionosonde and in situ data-model comparisons can be explained and which parameters need to be corrected in the IRI model.

  7. Model Predictive Control for Integrating Traffic Control Measures

    NARCIS (Netherlands)

    Hegyi, A.

    2004-01-01

    Dynamic traffic control measures, such as ramp metering and dynamic speed limits, can be used to better utilize the available road capacity. Due to the increasing traffic volumes and the increasing number of traffic jams, the interaction between the control measures has increased such that local…

  8. Assessing the performance of prediction models: a framework for traditional and novel measures

    DEFF Research Database (Denmark)

    Steyerberg, Ewout W; Vickers, Andrew J; Cook, Nancy R

    2010-01-01

    …(NRI), and integrated discrimination improvement (IDI). Moreover, decision-analytic measures have been proposed, including decision curves to plot the net benefit achieved by making decisions based on model predictions. We aimed to define the role of these relatively novel approaches in the evaluation of the performance of prediction models. For illustration, we present a case study of predicting the presence of residual tumor versus benign tissue in patients with testicular cancer (n = 544 for model development, n = 273 for external validation). We suggest that reporting discrimination and calibration will always be important for a prediction model. Decision-analytic measures should be reported if the predictive model is to be used for clinical decisions. Other measures of performance may be warranted in specific applications, such as reclassification metrics to gain insight into the value of adding a novel predictor…

  9. Ion current prediction model considering columnar recombination in alpha radioactivity measurement using ionized air transportation

    International Nuclear Information System (INIS)

    Naito, Susumu; Hirata, Yosuke; Izumi, Mikio; Sano, Akira; Miyamoto, Yasuaki; Aoyama, Yoshio; Yamaguchi, Hiromi

    2007-01-01

    We present a reinforced ion current prediction model in alpha radioactivity measurement using ionized air transportation. Although our previous model explained the qualitative trend of the measured ion current values, the absolute values of the theoretical curves were about two times as large as the measured values. In order to accurately predict the measured values, we reinforced our model by considering columnar recombination and turbulent diffusion, which affects columnar recombination. Our new model explained the considerable ion loss in the early stage of ion diffusion and narrowed the gap between the theoretical and measured values. The model also predicted suppression of ion loss due to columnar recombination by spraying a high-speed air flow near a contaminated surface. This suppression was experimentally investigated and confirmed. In conclusion, we quantitatively clarified the theoretical relation between alpha radioactivity and ion current in laminar flow and turbulent pipe flow. (author)

  10. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
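
    The measurement-based aggregation described above reduces to a population-weighted average. A minimal sketch with invented municipality values:

    ```python
    import numpy as np

    # Invented municipality-level inputs: floor-corrected mean measured
    # radon (Bq/m3) and population, aggregated to a national mean.
    radon_mean = np.array([60.0, 95.0, 120.0, 45.0])
    population = np.array([120_000, 30_000, 8_000, 250_000])

    national_mean = np.average(radon_mean, weights=population)
    print(f"population-weighted mean exposure: {national_mean:.1f} Bq/m3")
    ```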

  11. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVMs) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to raise its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
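
    The method pairs a swarm optimizer with an SVM whose hyperparameters are the particle positions. The sketch below uses plain PSO (without the natural-selection and simulated-annealing refinements of NAPSO) around scikit-learn's SVR, on synthetic data:

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 4))                 # synthetic sensor features
    y = X @ np.array([0.5, -1.0, 0.2, 0.8]) + 0.1 * rng.normal(size=80)

    def fitness(p):
        # p = (log10 C, log10 gamma); score by cross-validated MSE.
        model = SVR(C=10 ** p[0], gamma=10 ** p[1])
        return -cross_val_score(model, X, y, cv=3,
                                scoring="neg_mean_squared_error").mean()

    n, iters, w, c1, c2 = 12, 30, 0.7, 1.5, 1.5  # plain PSO settings
    pos = rng.uniform(-3, 3, size=(n, 2))
    vel = np.zeros_like(pos)
    pbest, pcost = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, 1)), rng.random((n, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -3, 3)
        cost = np.array([fitness(p) for p in pos])
        better = cost < pcost
        pbest[better], pcost[better] = pos[better], cost[better]
        gbest = pbest[pcost.argmin()].copy()
    print("best (log10 C, log10 gamma):", gbest.round(2), "CV MSE:", pcost.min())
    ```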

  12. Review and evaluation of performance measures for survival prediction models in external validation settings.

    Science.gov (United States)

    Rahman, M Shafiqur; Ambler, Gareth; Choodari-Oskooei, Babak; Omar, Rumana Z

    2017-04-18

    When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure, which tended to increase as censoring increased. We recommend that Uno's concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston's D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings, and we recommend reporting it routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive accuracy curves. In addition, we recommend investigating the characteristics…
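
    Harrell's concordance, flagged above as optimistic under heavy censoring, is easy to state explicitly. A minimal pure-NumPy sketch (ties in event time ignored for brevity; data invented):

    ```python
    import numpy as np

    def harrell_c(time, event, risk):
        """Harrell's concordance for right-censored data: a pair is usable
        when the earlier time is an observed event; count how often the
        higher predicted risk belongs to the earlier failure (ties in the
        risk score contribute 0.5)."""
        conc = usable = 0.0
        for i in range(len(time)):
            for j in range(len(time)):
                if time[i] < time[j] and event[i] == 1:
                    usable += 1
                    if risk[i] > risk[j]:
                        conc += 1.0
                    elif risk[i] == risk[j]:
                        conc += 0.5
        return conc / usable

    time  = np.array([5.0, 8.0, 3.0, 12.0, 7.0])   # follow-up times
    event = np.array([1, 0, 1, 1, 0])              # 1 = event, 0 = censored
    risk  = np.array([2.0, 1.1, 2.5, 0.3, 1.0])    # model prognostic index
    print(f"Harrell's C = {harrell_c(time, event, risk):.3f}")
    ```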

  13. Wideband impedance measurements and modeling of DC motors for EMI predictions

    NARCIS (Netherlands)

    Diouf, F.; Leferink, Frank Bernardus Johannes; Duval, Fabrice; Bensetti, Mohamed

    2015-01-01

    In electromagnetic interference prediction, dc motors are usually modeled as a source and a series impedance. Previous research has only included the impedance of the armature, neglecting the effect of the motor's rotation. This paper aims at measuring and modeling the wideband impedance of a dc…

  14. Assessing the performance of prediction models: A framework for traditional and novel measures

    NARCIS (Netherlands)

    E.W. Steyerberg (Ewout); A.J. Vickers (Andrew); N.R. Cook (Nancy); T.A. Gerds (Thomas); M. Gonen (Mithat); N. Obuchowski (Nancy); M. Pencina (Michael); M.W. Kattan (Michael)

    2010-01-01

    The performance of prediction models can be assessed using a variety of methods and metrics. Traditional measures for binary and survival outcomes include the Brier score to indicate overall model performance, the concordance (or c) statistic for discriminative ability (or area under the…

  15. Measurement and modelling of noise emission of road vehicles for use in prediction models

    Energy Technology Data Exchange (ETDEWEB)

    Jonasson, H.G.

    2000-07-01

    The road vehicle as a sound source has been studied within a wide frequency range. Well-defined measurements have been carried out on moving and stationary vehicles, and measurement results have been checked against theoretical simulations. A Nordtest measurement method to obtain input data for prediction methods has been proposed and tested in four different countries. The effective sound source of a car has its centre close to the nearest wheels; for trucks this centre seems to be closer to the centre of the vehicle. The vehicle as a sound source is directional in both the vertical and the horizontal plane. The difference between SEL and LpFmax during a pass-by varies with frequency. At low frequencies, interference effects between correlated sources may be the problem; at high frequencies, the directivity of tyre/road noise affects the result. The time when LpFmax is obtained varies with frequency, so traditional maximum measurements are not suitable for frequency band applications. The measurements support the fact that the tyre/road noise source is located very low, and measurements on a stationary vehicle indicate that the engine source is also very low. Engine noise is screened by the body of the car. The ground attenuation, also at short distances, will be significant whenever low microphone positions are used with some 'soft' ground in between. Unless all measurements are restricted to propagation over 'hard' surfaces only, it is necessary to use rather high microphone positions. The Nordtest method proposed will yield a reproducibility standard deviation of 1-3 dB depending on frequency; high frequencies are more accurate, and accurate results at low frequencies require large numbers of vehicles. Determining the sound power level from pass-by measurements requires a proper source and propagation model. As these models may change, it is recommended to measure and report both SEL and LpFmax normalized to a specified distance.
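
    The SEL/LpFmax distinction above follows from their definitions: LpFmax is the maximum FAST-weighted level during the pass-by, while SEL integrates the sound exposure of the whole event and normalizes it to one second. A sketch with an invented level history:

    ```python
    import numpy as np

    # Invented FAST-weighted level history of one pass-by, sampled every dt.
    dt = 0.1  # s
    L = np.array([55, 58, 63, 70, 76, 79, 77, 71, 65, 59, 56], dtype=float)

    LpFmax = L.max()  # maximum level during the pass-by (dB)
    # SEL: sound exposure integrated over the event, normalised to t0 = 1 s.
    SEL = 10 * np.log10(np.sum(10 ** (L / 10) * dt) / 1.0)
    print(f"LpFmax = {LpFmax:.1f} dB, SEL = {SEL:.1f} dB")
    ```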

  16. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    …sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons… In the current work we show how the correction for measurement error in GBSeq can also be applied in whole-genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data…

  17. MEASURED CONCENTRATIONS OF HERBICIDES AND MODEL PREDICTIONS OF ATRAZINE FATE IN THE PATUXENT RIVER ESTUARY

    Science.gov (United States)

    McConnell, Laura L., Jennifer A. Harman-Fetcho and James D. Hagy, III. 2004. Measured Concentrations of Herbicides and Model Predictions of Atrazine Fate in the Patuxent River Estuary. J. Environ. Qual. 33(2):594-604. (ERL,GB X1051). The environmental fate of herbicides i...

  18. Prediction impact curve is a new measure integrating intervention effects in the evaluation of risk models.

    Science.gov (United States)

    Campbell, William; Ganna, Andrea; Ingelsson, Erik; Janssens, A Cecile J W

    2016-01-01

    We propose a new measure of assessing the performance of risk models, the area under the prediction impact curve (auPIC), which quantifies the performance of risk models in terms of their average health impact in the population. Using simulated data, we explain how the prediction impact curve (PIC) estimates the percentage of events prevented when a risk model is used to assign high-risk individuals to an intervention. We apply the PIC to the Atherosclerosis Risk in Communities (ARIC) Study to illustrate its application toward prevention of coronary heart disease. We estimated that if the ARIC cohort received statins at baseline, 5% of events would be prevented when the risk model was evaluated at a cutoff threshold of 20% predicted risk compared to 1% when individuals were assigned to the intervention without the use of a model. By calculating the auPIC, we estimated that an average of 15% of events would be prevented when considering performance across the entire interval. We conclude that the PIC is a clinically meaningful measure for quantifying the expected health impact of risk models that supplements existing measures of model performance. Copyright © 2016 Elsevier Inc. All rights reserved.
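
    A schematic of the PIC calculation as described above, under simple assumptions: intervention is given above a risk cutoff, it reduces each treated individual's risk by a fixed relative risk reduction, and "events prevented" is expressed as a fraction of all expected events (risks and effect size invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p = rng.beta(2, 8, size=10_000)   # hypothetical predicted risks
    rrr = 0.25                        # assumed relative risk reduction

    def events_prevented(p, rrr, t):
        """Fraction of all expected events prevented when everyone with
        predicted risk >= t receives the intervention."""
        treated = p >= t
        return p[treated].sum() * rrr / p.sum()

    cutoffs = np.linspace(0.0, 1.0, 101)
    pic = np.array([events_prevented(p, rrr, t) for t in cutoffs])
    auPIC = np.trapz(pic, cutoffs)    # average impact across all cutoffs
    print(f"prevented at t=0.20: {events_prevented(p, rrr, 0.20):.1%}; "
          f"auPIC = {auPIC:.1%}")
    ```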

  19. Predictions and measurements of isothermal airflow in a model once-through steam generator

    Energy Technology Data Exchange (ETDEWEB)

    Carter, H R; Promey, G J; Rush, G C

    1982-11-01

    Once-Through Steam Generators (OTSGs) are used in the Nuclear Steam Supply Systems marketed by The Babcock and Wilcox Company (B and W). To analytically predict the three-dimensional, steady-state thermohydraulic conditions in the OTSG, B and W has developed a proprietary code, THEDA-1, and is working in cooperation with EPRI to develop an improved version, THEDA-2. Confident application of THEDA requires experimental verification to demonstrate that the code can accurately describe the thermohydraulic conditions in geometries characteristic of the OTSG. The first step in the THEDA verification process is the subject of this report. A full-scale, partial-section model of two OTSG spans was constructed and tested using isothermal air as the working fluid. Local velocities and pressure profiles in the model were measured and compared to THEDA predictions for five model configurations. Over 3000 velocity measurements were taken. Agreement between measured and predicted velocity data was generally better than ±12.5%.

  20. Comparison of Echo 7 field line length measurements to magnetospheric model predictions

    International Nuclear Information System (INIS)

    Nemzek, R.J.; Winckler, J.R.; Malcolm, P.R.

    1992-01-01

    The Echo 7 sounding rocket experiment injected electron beams on central tail field lines near L = 6.5. Numerous injections returned to the payload as conjugate echoes after mirroring in the southern hemisphere. The authors compare field line lengths calculated from measured conjugate echo bounce times and energies to predictions made by integrating electron trajectories through various magnetospheric models: the Olson-Pfitzer Quiet and Dynamic models and the Tsyganenko-Usmanov model. Although Kp at launch was 3-, quiet-time magnetic models best fit the echo measurements. Geosynchronous satellite magnetometer measurements near the Echo 7 field lines during the flight were best modeled by the Olson-Pfitzer Dynamic model and the Tsyganenko-Usmanov model for Kp = 3. The discrepancy between the models that best fit the Echo 7 data and those that fit the satellite data was most likely due to uncertainties in the small-scale configuration of the magnetospheric models. The field line length measured by the conjugate echoes showed some temporal variation in the magnetic field, also indicated by the satellite magnetometers. This demonstrates the utility that an Echo-style experiment could have in substorm studies.
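
    The core conversion, from a measured conjugate-echo bounce time and beam energy to a field line length, is a one-liner once the relativistic electron speed is known. A back-of-envelope sketch (the actual analysis integrates trajectories through the field models; the numbers are invented, and the pitch-angle-dependent path correction is ignored):

    ```python
    import numpy as np

    ME_KEV = 511.0   # electron rest energy (keV)
    C_KM_S = 3.0e5   # speed of light (km/s)

    def electron_speed(ke_kev):
        """Relativistic speed (km/s) of an electron with kinetic energy ke_kev."""
        gamma = 1.0 + ke_kev / ME_KEV
        return C_KM_S * np.sqrt(1.0 - 1.0 / gamma ** 2)

    t_bounce = 0.75   # s, measured conjugate-echo delay (hypothetical)
    ke = 30.0         # keV, injected beam energy (hypothetical)
    length = electron_speed(ke) * t_bounce / 2.0   # out-and-back path halved
    print(f"field line length ~ {length:,.0f} km")
    ```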

  1. Comparisons between Hygroscopic Measurements and UNIFAC Model Predictions for Dicarboxylic Organic Aerosol Mixtures

    OpenAIRE

    Jae Young Lee; Lynn M. Hildemann

    2013-01-01

    Hygroscopic behavior was measured at 12°C over aqueous bulk solutions containing dicarboxylic acids, using a Baratron pressure transducer. Our experimental measurements of water activity for malonic acid solutions (0–10 mol/kg water) and glutaric acid solutions (0–5 mol/kg water) agreed to within 0.6% and 0.8% of the predictions using Peng’s modified UNIFAC model, respectively (except for the 10 mol/kg water value, which differed by 2%). However, for solutions containing mixtures of malonic/g...

  2. Comparisons between Hygroscopic Measurements and UNIFAC Model Predictions for Dicarboxylic Organic Aerosol Mixtures

    Directory of Open Access Journals (Sweden)

    Jae Young Lee

    2013-01-01

    Hygroscopic behavior was measured at 12°C over aqueous bulk solutions containing dicarboxylic acids, using a Baratron pressure transducer. Our experimental measurements of water activity for malonic acid solutions (0–10 mol/kg water) and glutaric acid solutions (0–5 mol/kg water) agreed to within 0.6% and 0.8% of the predictions using Peng’s modified UNIFAC model, respectively (except for the 10 mol/kg water value, which differed by 2%). However, for solutions containing mixtures of malonic/glutaric acids, malonic/succinic acids, and glutaric/succinic acids, the disagreements between the measurements and predictions using the ZSR model or Peng’s modified UNIFAC model are higher than those for the single-component cases. Measurements of the overall water vapor pressure for 50 : 50 molar mixtures of malonic/glutaric acids closely followed that for malonic acid alone. For mixtures of malonic/succinic acids and glutaric/succinic acids, the influence of a constant concentration of succinic acid on water uptake became more significant as the concentration of malonic acid or glutaric acid was increased.
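
    The ZSR mixing rule referenced above predicts the water content of a mixed solution by summing the water each solute would hold alone at the same water activity. A minimal sketch with placeholder binary data (not the paper's measurements):

    ```python
    import numpy as np

    # Illustrative binary data: molality (mol solute / kg water) of each
    # single-acid solution as a function of water activity aw.
    aw_grid    = np.array([0.70, 0.80, 0.90, 0.95])
    m_malonic  = np.array([14.0, 8.5, 4.0, 2.0])
    m_glutaric = np.array([10.0, 6.5, 3.2, 1.6])

    def zsr_water(aw, n_mal, n_glu):
        """ZSR estimate of the water mass (kg) held by n_mal + n_glu moles
        at activity aw: W = sum_i n_i / m_i0(aw), with the binary
        molalities m_i0 interpolated from the tables above."""
        m1 = np.interp(aw, aw_grid, m_malonic)
        m2 = np.interp(aw, aw_grid, m_glutaric)
        return n_mal / m1 + n_glu / m2

    print(f"W(aw=0.85) for a 0.5/0.5 mol mix: {zsr_water(0.85, 0.5, 0.5):.4f} kg")
    ```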

  3. Comparisons Between Model Predictions and Spectral Measurements of Charged and Neutral Particles on the Martian Surface

    Science.gov (United States)

    Kim, Myung-Hee Y.; Cucinotta, Francis A.; Zeitlin, Cary; Hassler, Donald M.; Ehresmann, Bent; Rafkin, Scot C. R.; Wimmer-Schweingruber, Robert F.; Boettcher, Stephan; Boehm, Eckart; Guo, Jingnan

    2014-01-01

    Detailed measurements of the energetic particle radiation environment on the surface of Mars have been made by the Radiation Assessment Detector (RAD) on the Curiosity rover since August 2012. RAD is a particle detector that measures the energy spectrum of charged particles (10 to approx. 200 MeV/u) and high-energy neutrons (approx. 8 to 200 MeV). The data obtained on the surface of Mars for 300 sols are compared to the simulation results using the Badhwar-O'Neill galactic cosmic ray (GCR) environment model and the high-charge and energy transport (HZETRN) code. For the nuclear interactions of primary GCR through the Mars atmosphere and the Curiosity rover, the quantum multiple scattering theory of nuclear fragmentation (QMSFRG) is used. For describing the daily column depth of atmosphere, daily atmospheric pressure measurements at Gale Crater by the MSL Rover Environmental Monitoring Station (REMS) are implemented into the transport calculations. Particle flux at RAD after traversing varying depths of atmosphere depends on the slant angles, and the model accounts for shielding of the RAD "E" dosimetry detector by the rest of the instrument. Detailed comparisons between model predictions and spectral data of various particle types provide the validation of radiation transport models, and suggest that future radiation environments on Mars can be predicted accurately. These contributions lend support to the understanding of radiation health risks to astronauts for the planning of various mission scenarios.

  4. Towards Systematic Prediction of Urban Heat Islands: Grounding Measurements, Assessing Modeling Techniques

    Directory of Open Access Journals (Sweden)

    Jackson Voelkel

    2017-06-01

    While there exists extensive assessment of urban heat, we observe myriad methods for describing thermal distribution, factors that mediate temperatures, and potential impacts on urban populations. In addition, the limited spatial and temporal resolution of satellite-derived heat measurements may limit the capacity of decision makers to take effective actions for reducing mortalities in vulnerable populations whose locations require highly-refined measurements. Needed is high-resolution spatial and temporal information for urban heat. In this study, we ask three questions: (1) how do urban heat islands vary throughout the day? (2) what statistical methods best explain the presence of temperatures at sub-meter spatial scales? and (3) what landscape features help to explain variation in urban heat islands? Using vehicle-based temperature measurements at three periods of the day in the Pacific Northwest city of Portland, Oregon (USA), we incorporate LiDAR-derived datasets and evaluate three statistical techniques for modeling and predicting variation in temperatures during a heat wave. Our results indicate that the random forest technique best predicts temperatures, and that the evening model best explains the variation in temperature. The results suggest that ground-based measurements provide high levels of accuracy for describing the distribution of urban heat, its temporal variation, and specific locations where targeted interventions with communities can reduce mortalities from heat events.
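
    Of the three techniques compared, the winning one is straightforward to reproduce in outline. A sketch of a random-forest temperature model on synthetic stand-ins for the LiDAR-derived predictors:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-ins for LiDAR-derived predictors (e.g. canopy
    # fraction, building volume, impervious fraction) and traverse temps.
    rng = np.random.default_rng(7)
    X = rng.random((500, 3))
    y = 30.0 - 4.0 * X[:, 0] + 2.0 * X[:, 2] + rng.normal(0.0, 0.5, 500)  # deg C

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print(f"holdout R^2 = {r2_score(y_te, rf.predict(X_te)):.2f}")
    print("feature importances:", rf.feature_importances_.round(2))
    ```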

  5. Combining Satellite Measurements and Numerical Flood Prediction Models to Save Lives and Property from Flooding

    Science.gov (United States)

    Saleh, F.; Garambois, P. A.; Biancamaria, S.

    2017-12-01

    Floods are considered the major natural threats to human societies across all continents. Consequences of floods in highly populated areas are more dramatic, with losses of human lives and substantial property damage. This risk is projected to increase with the effects of climate change, particularly sea-level rise, increasing storm frequencies and intensities, and increasing population and economic assets in such urban watersheds. Despite the advances in computational resources and modeling techniques, significant gaps exist in predicting complex processes and accurately representing the initial state of the system. Improving flood prediction models and data assimilation chains through satellite data has become an absolute priority to produce accurate flood forecasts with sufficient lead times. The overarching goal of this work is to assess the benefits of Surface Water and Ocean Topography (SWOT) satellite data from a flood prediction perspective. The near-real-time methodology is based on combining satellite data from a simulator that mimics the future SWOT data, numerical models, high-resolution elevation data, and real-time local measurements in the New York/New Jersey area.

  6. Simulation of complex glazing products; from optical data measurements to model based predictive controls

    Energy Technology Data Exchange (ETDEWEB)

    Kohler, Christian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-04-01

    Complex glazing systems such as venetian blinds, fritted glass and woven shades require more detailed optical and thermal input data for their components than specular non light-redirecting glazing systems. Various methods for measuring these data sets are described in this paper. These data sets are used in multiple simulation tools to model the thermal and optical properties of complex glazing systems. The output from these tools can be used to generate simplified rating values or as an input to other simulation tools such as whole building annual energy programs, or lighting analysis tools. I also describe some of the challenges of creating a rating system for these products and which factors affect this rating. A potential future direction of simulation and building operations is model based predictive controls, where detailed computer models are run in real-time, receiving data for an actual building and providing control input to building elements such as shades.

  7. Distribution of Organophosphate Esters between the Gas and Particle Phase-Model Predictions vs Measured Data.

    Science.gov (United States)

    Sühring, Roxana; Wolschke, Hendrik; Diamond, Miriam L; Jantunen, Liisa M; Scheringer, Martin

    2016-07-05

    Gas-particle partitioning is one of the key factors that affect the environmental fate of semivolatile organic chemicals. Many organophosphate esters (OPEs) have been reported to primarily partition to particles in the atmosphere. However, because of the wide range of their physicochemical properties, it is unlikely that OPEs are mainly in the particle phase "as a class". We compared gas-particle partitioning predictions for 32 OPEs made by the commonly used OECD POV and LRTP Screening Tool ("the Tool") with the partitioning models of Junge-Pankow (J-P) and Harner-Bidleman (H-B), as well as recently measured data on OPE gas-particle partitioning. The results indicate that half of the tested OPEs partition into the gas phase. Partitioning into the gas phase seems to be determined by the octanol-air partition coefficient (log KOA) and a subcooled liquid vapour pressure log PL > -5 (PL in Pa), as well as by the total suspended particle concentration (TSP) in the sampling area. The uncertainty of the physicochemical property data of the OPEs did not change this estimate. Furthermore, the predictions by the Tool and the J-P and H-B models agreed with recently measured OPE gas-particle partitioning.
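
    The two comparison models named above have standard closed forms for the particle-bound fraction. A sketch with example parameter values (the aerosol surface area, organic-matter fraction, and OPE properties are illustrative):

    ```python
    import numpy as np

    def junge_pankow(pL, theta=1.1e-5, c=17.2):
        """Particle-bound fraction phi = c*theta / (pL + c*theta).
        pL: subcooled-liquid vapour pressure (Pa); theta: aerosol surface
        area per air volume (cm2/cm3); c: Junge constant (Pa cm)."""
        return c * theta / (pL + c * theta)

    def harner_bidleman(log_koa, tsp, f_om=0.2):
        """phi from log Kp = log KOA + log f_OM - 11.91 (Kp in m3/ug),
        with the total suspended particle concentration TSP in ug/m3."""
        kp = 10 ** (log_koa + np.log10(f_om) - 11.91)
        return kp * tsp / (1.0 + kp * tsp)

    # Example mid-volatility OPE (property values illustrative):
    print(f"Junge-Pankow    phi = {junge_pankow(pL=1e-4):.2f}")
    print(f"Harner-Bidleman phi = {harner_bidleman(log_koa=10.5, tsp=30.0):.2f}")
    ```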

  8. PIV-measured versus CFD-predicted flow dynamics in anatomically realistic cerebral aneurysm models.

    Science.gov (United States)

    Ford, Matthew D; Nikolov, Hristo N; Milner, Jaques S; Lownie, Stephen P; Demont, Edwin M; Kalata, Wojciech; Loth, Francis; Holdsworth, David W; Steinman, David A

    2008-04-01

    Computational fluid dynamics (CFD) modeling of nominally patient-specific cerebral aneurysms is increasingly being used as a research tool to further understand the development, prognosis, and treatment of brain aneurysms. We have previously developed virtual angiography to indirectly validate CFD-predicted gross flow dynamics against the routinely acquired digital subtraction angiograms. Toward a more direct validation, here we compare detailed, CFD-predicted velocity fields against those measured using particle imaging velocimetry (PIV). Two anatomically realistic flow-through phantoms, one a giant internal carotid artery (ICA) aneurysm and the other a basilar artery (BA) tip aneurysm, were constructed of a clear silicone elastomer. The phantoms were placed within a computer-controlled flow loop, programmed with representative flow rate waveforms. PIV images were collected on several anterior-posterior (AP) and lateral (LAT) planes. CFD simulations were then carried out using a well-validated, in-house solver, based on micro-CT reconstructions of the geometries of the flow-through phantoms and inlet/outlet boundary conditions derived from flow rates measured during the PIV experiments. PIV and CFD results from the central AP plane of the ICA aneurysm showed a large stable vortex throughout the cardiac cycle. Complex vortex dynamics, captured by PIV and CFD, persisted throughout the cardiac cycle on the central LAT plane. Velocity vector fields showed good overall agreement. For the BA aneurysm, agreement was more compelling, with both PIV and CFD similarly resolving the dynamics of counter-rotating vortices on both AP and LAT planes. Despite the imposition of periodic flow boundary conditions for the CFD simulations, cycle-to-cycle fluctuations were evident in the BA aneurysm simulations, which agreed well, in terms of both amplitudes and spatial distributions, with cycle-to-cycle fluctuations measured by PIV in the same geometry. The overall good agreement…

  9. Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data

    Science.gov (United States)

    Dahl, Milo D.; Sharpe, Jacob A.

    2014-01-01

    A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.

  10. Metabolomics based predictive biomarker model of ARDS: A systemic measure of clinical hypoxemia.

    Directory of Open Access Journals (Sweden)

    Neeraj Sinha

    Despite advancements in ventilator technologies and lung supportive and rescue therapies, outcome and prognostication in acute respiratory distress syndrome (ARDS) remain incremental and ambiguous. Metabolomics is a potentially insightful complement to the diagnostic approaches practiced in critical disease settings. In our study, patients diagnosed with mild or moderate/severe ARDS, clinically governed by a hypoxemic P/F ratio between 100-300 but with indistinct molecular phenotype, were discriminated employing nuclear magnetic resonance (NMR)-based metabolomics of mini bronchoalveolar lavage fluid (mBALF). The resulting biomarker prototype comprising six metabolites was substantiated, highlighting ARDS susceptibility/recovery. Both groups (mild and moderate/severe ARDS) showed distinct biochemical profiles, based on 83.3% classification by discriminant function analysis and a cross-validated accuracy of 91% using partial least squares discriminant analysis as the major classifier. The predictive performance of the six narrowed-down metabolites was found analogous with chemometrics. The proposed biomarker model, consisting of the metabolites proline, lysine/arginine, taurine, threonine and glutamate, was found characteristic of ARDS sub-stages, with aberrant metabolism observed mainly in arginine and proline metabolism, lysine synthesis, and so forth, correlating to the diseased metabotype. Thus NMR-based metabolomics has provided new insight into ARDS sub-stages, and a precise biomarker model is proposed that reflects the underlying metabolic dysfunction, aiding earlier clinical decision making.

  11. Electrochemical measurements and modeling predictions in boiling water reactors under various operating conditions

    International Nuclear Information System (INIS)

    Indig, M.E.

    1991-01-01

    One important issue for providing life extension to operating boiling water nuclear reactors (BWRs) is the control of stress corrosion cracking in all sections of the primary coolant circuit. This paper links experimental and theoretical methods that provide understanding and measurements of the critical parameter, the electrochemical potential (ECP), and its application to determining crack growth rate among and within the family of BWRs. Measurement of in-core ECP required the development of a new family of radiation-resistant sensors. With these sensors, ECPs were measured in the core and piping of two operating BWRs. Concurrent crack growth measurements were used to benchmark a crack growth prediction algorithm with measured ECPs.

  12. Predicting vehicular emissions in high spatial resolution using pervasively measured transportation data and microscopic emissions model

    Energy Technology Data Exchange (ETDEWEB)

    Nyhan, Marguerite; Sobolevsky, Stanislav; Kang, Chaogui; Robinson, Prudence; Corti, Andrea; Szell, Michael; Streets, David; Lu, Zifeng; Britter, Rex; Barrett, Steven R. H.; Ratti, Carlo

    2016-06-07

    Air pollution related to traffic emissions poses an especially significant problem in cities; this is due to its adverse impact on human health and well-being. Previous studies which have aimed to quantify emissions from the transportation sector have been limited by either simulated or coarsely resolved traffic volume data. Emissions inventories form the basis of urban pollution models, therefore in this study, Global Positioning System (GPS) trajectory data from a taxi fleet of over 15,000 vehicles were analyzed with the aim of predicting air pollution emissions for Singapore. This novel approach enabled the quantification of instantaneous drive cycle parameters in high spatio-temporal resolution, which provided the basis for a microscopic emissions model. Carbon dioxide (CO2), nitrogen oxides (NOx), volatile organic compounds (VOCs) and particulate matter (PM) emissions were thus estimated. Highly localized areas of elevated emissions levels were identified, with a spatio-temporal precision not possible with previously used methods for estimating emissions. Relatively higher emissions areas were mainly concentrated in a few districts that were the Singapore Downtown Core area, to the north of the central urban region and to the east of it. Daily emissions quantified for the total motor vehicle population of Singapore were found to be comparable to another emissions dataset. Results demonstrated that high-resolution spatio-temporal vehicle traces detected using GPS in large taxi fleets could be used to infer highly localized areas of elevated acceleration and air pollution emissions in cities, and may become a complement to traditional emission estimates, especially in emerging cities and countries where reliable fine-grained urban air quality data is not easily available. This is the first study of its kind to investigate measured microscopic vehicle movement in tandem with microscopic emissions modeling for a substantial study domain.

  13. Predicting vehicular emissions in high spatial resolution using pervasively measured transportation data and microscopic emissions model

    Science.gov (United States)

    Nyhan, Marguerite; Sobolevsky, Stanislav; Kang, Chaogui; Robinson, Prudence; Corti, Andrea; Szell, Michael; Streets, David; Lu, Zifeng; Britter, Rex; Barrett, Steven R. H.; Ratti, Carlo

    2016-09-01

    Air pollution related to traffic emissions poses an especially significant problem in cities; this is due to its adverse impact on human health and well-being. Previous studies which have aimed to quantify emissions from the transportation sector have been limited by either simulated or coarsely resolved traffic volume data. Emissions inventories form the basis of urban pollution models, therefore in this study, Global Positioning System (GPS) trajectory data from a taxi fleet of over 15,000 vehicles were analyzed with the aim of predicting air pollution emissions for Singapore. This novel approach enabled the quantification of instantaneous drive cycle parameters in high spatio-temporal resolution, which provided the basis for a microscopic emissions model. Carbon dioxide (CO2), nitrogen oxides (NOx), volatile organic compounds (VOCs) and particulate matter (PM) emissions were thus estimated. Highly localized areas of elevated emissions levels were identified, with a spatio-temporal precision not possible with previously used methods for estimating emissions. Relatively higher emissions areas were mainly concentrated in a few districts that were the Singapore Downtown Core area, to the north of the central urban region and to the east of it. Daily emissions quantified for the total motor vehicle population of Singapore were found to be comparable to another emissions dataset. Results demonstrated that high-resolution spatio-temporal vehicle traces detected using GPS in large taxi fleets could be used to infer highly localized areas of elevated acceleration and air pollution emissions in cities, and may become a complement to traditional emission estimates, especially in emerging cities and countries where reliable fine-grained urban air quality data is not easily available. This is the first study of its kind to investigate measured microscopic vehicle movement in tandem with microscopic emissions modeling for a substantial study domain.
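
    The drive-cycle step described in this record reduces to differentiating GPS fixes and mapping each second of driving onto an emission-factor surface. A sketch with invented fixes and a hypothetical CO2 factor table (not the study's microscopic model):

    ```python
    import numpy as np

    # Derive instantaneous speed and acceleration from consecutive GPS
    # fixes, then look up a speed/acceleration emission-factor table.
    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # s
    x = np.array([0.0, 6.0, 14.0, 24.0, 33.0])   # m along route
    v = np.gradient(x, t)                        # m/s
    a = np.gradient(v, t)                        # m/s^2

    # Hypothetical CO2 factors (g/s) binned by speed (rows) and accel (cols).
    ef = np.array([[1.0, 1.8, 2.9],
                   [1.4, 2.4, 3.8],
                   [1.9, 3.1, 4.9]])
    vi = np.digitize(v, [5.0, 10.0])             # speed bins
    ai = np.digitize(a, [0.0, 0.5])              # acceleration bins
    co2_g = ef[vi, ai].sum()                     # total over the trace
    print(f"CO2 over {t[-1] - t[0]:.0f} s: {co2_g:.1f} g")
    ```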

  14. The influence of profiled ceilings on sports hall acoustics : Ground effect predictions and scale model measurements

    NARCIS (Netherlands)

    Wattez, Y.C.M.; Tenpierik, M.J.; Nijs, L.

    2018-01-01

    Over the last few years, reverberation times and sound pressure levels have been measured in many sports halls. Most of these halls, for instance those made from stony materials, perform as predicted. However, sports halls constructed with profiled perforated steel roof panels have an unexpected…

  15. Comparative Study of foF2 Measurements with IRI-2007 Model Predictions During Extended Solar Minimum

    Science.gov (United States)

    Zakharenkova, I. E.; Krankowski, A.; Bilitza, D.; Cherniak, Iu.V.; Shagimuratov, I.I.; Sieradzki, R.

    2013-01-01

    The unusually deep and extended solar minimum of cycle 23/24 made it very difficult to predict the solar indices 1 or 2 years into the future. Most of the predictions were proven wrong by the actual observed indices. IRI gets its solar, magnetic, and ionospheric indices from an indices file that is updated twice a year. In recent years, due to the unusual solar minimum, predictions had to be corrected downward with every new indices update. In this paper we analyse how much the uncertainties in the predictability of solar activity indices affect the IRI output and how the IRI values calculated with predicted and observed indices compare to the actual measurements. Monthly median values of the F2 layer critical frequency (foF2) derived from the ionosonde measurements at the mid-latitude ionospheric station Juliusruh were compared with the International Reference Ionosphere (IRI-2007) model predictions. The analysis found that IRI provides reliable results that compare well with actual measurements when the definite (observed and adjusted) indices of solar activity are used, while IRI values based on earlier predictions of these indices noticeably overestimated the measurements during the solar minimum. One of the principal objectives of this paper is to direct the attention of IRI users to updating their solar activity indices files regularly. Use of an older index file can lead to serious IRI overestimations of F-region electron density during the recent extended solar minimum.

  16. In reactor measurements, modeling and assessments to predict liquid injection shutdown system nozzle to Calandria tube time to contact

    International Nuclear Information System (INIS)

    Kirstein, K.; Kalenchuk, D.

    2011-01-01

    Over the past few years there has been an expanding effort to assess the potential for Calandria Tubes (CTs) coming into contact with Liquid Injection Shutdown System (LISS) Nozzles, to ensure continued contact-free operation as required by CSA N285.4. LISS Nozzles (LINs), which run perpendicular to and between rows of fuel channels, sag at a slower rate than the fuel channels; as a result, certain LINs may come into contact with the CTs above them. The CT/LIN gaps can be predicted from calculated CT sag, LIN sag, and a number of component and installation tolerances. This method, however, results in very conservative predictions when compared to measurements, as confirmed by the in-reactor measurements initiated in 2000, when gaps were successfully measured for the first time using images obtained from a camera-assisted measurement tool inserted into the calandria. To reduce the conservatism of the CT/LIN gap predictions, statistical CT/LIN gap models are used instead. They are derived from a comparison between calculated gaps based on nominal dimensions and the visual-image-based measured gaps. These reactor-specific (typically 95% confidence level) CT/LIN gap models account for all uncertainties and deviations from nominal values. Prediction error margins reduce as more in-reactor gap measurements become available. Each year more measurements are being made using this standardized visual CT/LIN proximity method. The subsequently prepared reactor-specific models have been used to provide time to contact for every channel above the LINs at these stations. In a number of cases this has been used to demonstrate that the reactor can be operated to its end of life before refurbishment with no predicted contact, or specific at-risk channels have been identified for which appropriate remedial actions could be implemented in a planned manner. (author)

  17. Evaluation of markers and risk prediction models: overview of relationships between NRI and decision-analytic measures.

    Science.gov (United States)

    Van Calster, Ben; Vickers, Andrew J; Pencina, Michael J; Baker, Stuart G; Timmerman, Dirk; Steyerberg, Ewout W

    2013-05-01

    For the evaluation and comparison of markers and risk prediction models, various novel measures have recently been introduced as alternatives to the commonly used difference in the area under the receiver operating characteristic (ROC) curve (ΔAUC). The net reclassification improvement (NRI) is increasingly popular to compare predictions with 1 or more risk thresholds, but decision-analytic approaches have also been proposed. We aimed to identify the mathematical relationships between novel performance measures for the situation that a single risk threshold T is used to classify patients as having the outcome or not. We considered the NRI and 3 utility-based measures that take misclassification costs into account: difference in net benefit (ΔNB), difference in relative utility (ΔRU), and weighted NRI (wNRI). We illustrate the behavior of these measures in 1938 women suspected of having ovarian cancer (prevalence 28%). The 3 utility-based measures appear to be transformations of each other and hence always lead to consistent conclusions. On the other hand, conclusions may differ when using the standard NRI, depending on the adopted risk threshold T, prevalence P, and the obtained differences in sensitivity and specificity of the 2 models that are compared. In the case study, adding the CA-125 tumor marker to a baseline set of covariates yielded a negative NRI yet a positive value for the utility-based measures. The decision-analytic measures are each appropriate to indicate the clinical usefulness of an added marker or compare prediction models, since these measures reflect misclassification costs. This is of practical importance as these measures may thus adjust conclusions based on purely statistical measures. A range of risk thresholds should be considered in applying these measures.
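
    The quantities related in this record have compact formulas. A sketch computing the difference in net benefit at a threshold t and the corresponding two-category NRI; the cohort size matches the case study, but the classification counts and sensitivities/specificities are invented:

    ```python
    # Net benefit at threshold t: NB = TP/n - FP/n * t/(1 - t).
    def net_benefit(tp, fp, n, t):
        return tp / n - fp / n * (t / (1 - t))

    n, t = 1938, 0.20          # cohort size as in the case study; threshold
    nb_base = net_benefit(tp=400, fp=500, n=n, t=t)   # hypothetical counts
    nb_ext  = net_benefit(tp=420, fp=520, n=n, t=t)   # baseline + marker
    print(f"Delta NB = {nb_ext - nb_base:+.4f}")

    # Two-category NRI at the same threshold: delta(sens) + delta(spec),
    # with hypothetical sensitivities/specificities of the two models.
    nri = (0.774 - 0.737) + (0.640 - 0.628)
    print(f"NRI = {nri:+.3f}")
    ```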

  18. Multimodal Imaging Measures Predict Rearrest

    Directory of Open Access Journals (Sweden)

    Vaughn R Steele

    2015-08-01

    Rearrest has been predicted by hemodynamic activity in the anterior cingulate cortex (ACC) during error-processing (Aharoni et al., 2013). Here we evaluate the predictive power after adding an additional imaging modality in a subsample of 45 incarcerated males from Aharoni et al. Event-related potentials (ERPs) and hemodynamic activity were collected during a Go/NoGo response inhibition task. Neural measures of error-processing were obtained from the ACC and two ERP components, the error-related negativity (ERN/Ne) and the error positivity (Pe). Measures from the Pe and ACC differentiated individuals who were and were not subsequently rearrested. Cox regression, logistic regression, and support vector machine (SVM) neuroprediction models were calculated. Each of these models proved successful in predicting rearrest, and SVM provided the strongest results. Multimodal neuroprediction SVM models with out-of-sample cross-validation accurately predicted rearrest (83.33%). Offenders with increased Pe amplitude and decreased ACC activation, suggesting abnormal error-processing, were at greatest risk of rearrest.

  19. (n,p) and (n,alpha) measurements using LENZ instrument to improve reaction model prediction

    Science.gov (United States)

    Lee, Hye Young; Devlin, Matthew; Haight, Robert; Manning, Brett; Mosby, Shea

    2015-10-01

    Understanding neutron-induced charged particle reactions is of interest for nuclear astrophysics and applied nuclear energy. Often, direct measurements of these reactions are not feasible at neutron beam facilities due to the short half-lives of the targets and the reduced cross sections at astrophysically relevant energies given the large Coulomb barriers. Instead, the Hauser-Feshbach formalism is used to study this reaction mechanism for predicting cross sections. We have developed the LENZ (Low Energy n,z) instrument to measure the (n,p) and (n, α) reactions using a time-of-flight method for incident neutron energies from thermal to several tens of MeV at LANSCE. The LENZ has improved capabilities including a large solid angle, a low detection threshold, and good signal-to-background ratios using waveform digitizers. We have performed an in-beam commissioning measurement on 59Co(n, α/p) at En = 0.7 - 12 MeV. In this paper, we will discuss the results of the 59Co(n, α/p) measurements and present the status of the reaction studies on 16O(n, α) for nuclear energy applications and 77Se(n,p) for reaction mechanism studies. This work is funded by the US Department of Energy-Los Alamos National Security, LLC under Contract DE-AC52-06NA25396.

  20. Construction of Models for Nondestructive Prediction of Ingredient Contents in Blueberries by Near-infrared Spectroscopy Based on HPLC Measurements.

    Science.gov (United States)

    Bai, Wenming; Yoshimura, Norio; Takayanagi, Masao; Che, Jingai; Horiuchi, Naomi; Ogiwara, Isao

    2016-06-28

    Nondestructive prediction of the ingredient contents of farm products is useful for shipping and selling the products with guaranteed qualities. Here, near-infrared spectroscopy is used to nondestructively predict the total sugar, total organic acid, and total anthocyanin content of each blueberry. The technique is expected to enable the selection of only delicious blueberries from all harvested ones. The near-infrared absorption spectra of blueberries are measured in diffuse reflectance mode at positions not on the calyx. The ingredient contents of a blueberry determined by high-performance liquid chromatography are used to construct models to predict the ingredient contents from observed spectra. Partial least squares regression is used for the construction of the models. It is necessary to properly select the pretreatments for the observed spectra and the wavelength regions of the spectra used for analyses. Validations are necessary for the constructed models to confirm that the ingredient contents are predicted with practical accuracies. Here we present a protocol to construct and validate the models for nondestructive prediction of ingredient contents in blueberries by near-infrared spectroscopy.
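
    The modelling step of the protocol, partial least squares regression from NIR spectra to HPLC reference values validated by cross-validation, can be sketched with scikit-learn on synthetic placeholder spectra:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Synthetic placeholders: 60 berries x 200 wavelengths, and a sugar
    # content that depends on two bands (the real y comes from HPLC).
    rng = np.random.default_rng(3)
    spectra = rng.random((60, 200))
    sugar = 8.0 * spectra[:, 40] + 4.0 * spectra[:, 120] + rng.normal(0, 0.2, 60)

    pls = PLSRegression(n_components=5)
    pred = cross_val_predict(pls, spectra, sugar, cv=10).ravel()
    rmsecv = np.sqrt(np.mean((pred - sugar) ** 2))
    print(f"RMSECV = {rmsecv:.3f} (units of the reference assay)")
    ```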

  1. Surface tensions of multi-component mixed inorganic/organic aqueous systems of atmospheric significance: measurements, model predictions and importance for cloud activation predictions

    Directory of Open Access Journals (Sweden)

    D. O. Topping

    2007-01-01

    In order to predict the physical properties of aerosol particles, it is necessary to adequately capture the behaviour of the ubiquitous complex organic components. One of the key properties which may affect this behaviour is the contribution of the organic components to the surface tension of aqueous particles in the moist atmosphere. Whilst the qualitative effect of organic compounds on solution surface tensions has been widely reported, our quantitative understanding of mixed organic and mixed inorganic/organic systems is limited. Furthermore, it is unclear whether models that exist in the literature can reproduce the surface tension variability for binary and higher-order multi-component organic and mixed inorganic/organic systems of atmospheric significance. The current study aims to resolve both issues to some extent. Surface tensions of single and multiple solute aqueous solutions were measured and compared with predictions from a number of model treatments. On comparison with binary organic systems, two predictive models found in the literature provided a range of values resulting from sensitivity to calculations of pure component surface tensions. Results indicate that a fitted model can capture the variability of the measured data very well, producing the lowest average percentage deviation for all compounds studied. The performance of the other models varies with compound and choice of model parameters. The behaviour of ternary mixed inorganic/organic systems was unreliably captured by using a predictive scheme, and this was dependent on the composition of the solutes present. For more atmospherically representative higher-order systems, entirely predictive schemes performed poorly. It was found that use of the binary data in a relatively simple mixing rule, or modification of an existing thermodynamic model with parameters derived from binary data, was able to accurately capture the surface tension variation with concentration. Thus…

  2. Use of statistical models based on radiographic measurements to predict oviposition date and clutch size in rock iguanas (Cyclura nubila)

    International Nuclear Information System (INIS)

    Alberts, A.C.

    1995-01-01

    The ability to noninvasively estimate clutch size and predict oviposition date in reptiles can be useful not only to veterinary clinicians but also to managers of captive collections and field researchers. Measurements of egg size and shape, as well as position of the clutch within the coelomic cavity, were taken from diagnostic radiographs of 20 female Cuban rock iguanas, Cyclura nubila, 81 to 18 days prior to laying. Combined with data on maternal body size, these variables were entered into multiple regression models to predict clutch size and timing of egg laying. The model for clutch size was accurate to 0.53 ± 0.08 eggs, while the model for oviposition date was accurate to 6.22 ± 0.81 days. Equations were generated that should be applicable to this and other large Cyclura species. © 1995 Wiley-Liss, Inc

  3. Surrogate gas proxy prediction model for Delta 14C-based measurements of fossil fuel-CO2

    Science.gov (United States)

    Coakley, K. J.; Miller, J. B.; Montzka, S. A.; Sweeney, C.; Miller, B.

    2016-12-01

    The measured 14C:12C isotopic ratio of atmospheric CO2 (and its associated derived Δ14C value) is an ideal tracer for determination of the fossil fuel derived CO2 enhancement contributing to any atmospheric CO2 measurement (Cff). Given enough such measurements, independent top-down estimation of US fossil fuel-CO2 emissions should be possible. However, the number of Δ14C measurements is presently constrained by cost, available sample volume, and availability of mass spectrometer measurement facilities. Δ14C is therefore measured in just a small fraction of samples obtained by flask air sampling networks around the world. Here, we develop a Projection Pursuit Regression model to predict Cff as a function of multiple surrogate gases acquired within the NOAA/ESRL Global Greenhouse Gas Reference Network (GGGRN). The surrogates consist of measured enhancements of various anthropogenic trace gases, including CO, SF6, and halo- and hydrocarbons acquired in vertical airborne sampling profiles near Cape May, NJ and Portsmouth, NH from 2005 through 2010. Model performance is quantified based on predicted values corresponding to test data excluded from the model building process. Chi-square hypothesis test analysis indicates that these predictions and corresponding observations are consistent given our uncertainty budget, which accounts for random effects and one particular systematic effect. To account for the possibility of additional systematic effects, we incorporate another component of uncertainty into our budget. Provided that these estimates are of comparable quality to Δ14C-based estimates, we expect an improved determination of fossil fuel-CO2 emissions.

  4. Predictions of psychophysical measurements for sinusoidal amplitude modulated (SAM) pulse-train stimuli from a stochastic model.

    Science.gov (United States)

    Xu, Yifang; Collins, Leslie M

    2007-08-01

    Two approaches have been proposed to reduce the synchrony of the neural response to electrical stimuli in cochlear implants. One approach involves adding noise to the pulse-train stimulus, and the other is based on using a high-rate pulse-train carrier. Hypotheses regarding the efficacy of the two approaches can be tested using computational models of neural responsiveness prior to time-intensive psychophysical studies. In our previous work, we have used such models to examine the effects of noise on several psychophysical measures important to speech recognition. However, to date there has been no parallel analytic solution investigating the neural response to the high-rate pulse-train stimuli and their effect on psychophysical measures. This work investigates the properties of the neural response to high-rate pulse-train stimuli with amplitude modulated envelopes using a stochastic auditory nerve model. The statistics governing the neural response to each pulse are derived using a recursive method. The agreement between the theoretical predictions and model simulations is demonstrated for sinusoidal amplitude modulated (SAM) high rate pulse-train stimuli. With our approach, predicting the neural response in modern implant devices becomes tractable. Psychophysical measurements are also predicted using the stochastic auditory nerve model for SAM high-rate pulse-train stimuli. Changes in dynamic range (DR) and intensity discrimination are compared with that observed for noise-modulated pulse-train stimuli. Modulation frequency discrimination is also studied as a function of stimulus level and pulse rate. Results suggest that high rate carriers may positively impact such psychophysical measures.

  5. Correlative assessment of two predictive soil hydrology models with measured surface soil geochemistry

    Science.gov (United States)

    Filley, T. R.; Li, M.; Le, P. V.; Kumar, P.; Yan, Q.; Papanicolaou, T.; Hou, T.; Wang, J.

    2017-12-01

    Spatial variability of surface soil organic matter on the hill slope scale is strongly influenced by topographic variation, especially in sloping terrains, where the coupled effects of soil moisture and texture are principal drivers for stabilization and decomposition. Topographic wetness index (TWI) calculations have shown reasonable correlations with soil organic carbon (SOC) content at broad spatial scales. However, due to inherent limitations of the "depression filling" approach, traditional TWI methods are generally ineffectual at capturing how small-scale micro-topographic (~1 m2) variation controls water dynamics and, subsequently, poorly correlate to surface soil biogeochemical measures. For TWI models to capture biogeochemical controls at the scales made possible by LiDAR data they need to incorporate the dynamic connection between soil moisture, local climate, edaphic properties, and micro-topographic variability. We present the results of a study correlating surface soil geochemical data across field sites in the Upper Sangamon River Basin (USRB) in Central Illinois, USA with a range of land use types to SAGA TWI and a newly developed Dynamic Topographic Wetness Index (DTWI). The DTWI for all field sites was obtained from the probability distribution of long-term stochastically modeled soil moisture between wilting point (WP) and field capacity (FC) using the Dhara modeling framework. Whereas the SAGA TWI showed no correlation with soil geochemistry measures across the site-specific data, the DTWI, within a site, was strongly, positively correlated with soil nitrogen, organic carbon, and δ15N at three of the six sites and revealed controls potentially related to connectivity to local drainage paths. Overall, this study indicates that soil moisture derived by DTWI may offer a significant improvement in generating estimates of long-term soil moisture, and subsequently, soil biogeochemistry dynamics at a crucial landscape scale.
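
    For reference, the traditional index that the DTWI is compared against has the compact form TWI = ln(a / tan β), with a the specific catchment area and β the local slope angle. A minimal sketch, assuming the flow-accumulation and slope grids have already been derived from the LiDAR DEM (the depression-filling routing criticized above happens in that upstream step):

    ```python
    # Traditional topographic wetness index on toy grids; real inputs would be
    # LiDAR-derived specific catchment area and slope rasters (e.g., from SAGA).
    import numpy as np

    def twi(specific_catchment_area, slope_rad):
        """TWI = ln(a / tan(beta)); slope clipped to avoid division by zero on flats."""
        return np.log(specific_catchment_area / np.tan(np.clip(slope_rad, 1e-6, None)))

    a = np.array([[1.0, 2.0], [5.0, 40.0]])        # specific catchment area (m), toy values
    beta = np.radians([[8.0, 5.0], [3.0, 0.5]])    # slope angle, toy values
    print(twi(a, beta).round(2))                   # highest TWI at the flat, convergent cell
    ```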

  6. Surrogate gas prediction model as a proxy for Δ14C-based measurements of fossil fuel–CO2

    Science.gov (United States)

    Coakley, Kevin J; Miller, John B; Montzka, Stephen A; Sweeney, Colm; Miller, Ben R

    2016-01-01

    The measured 14C:12C isotopic ratio of atmospheric CO2 (and its associated derived Δ14C value) is an ideal tracer for determination of the fossil fuel derived CO2 enhancement contributing to any atmospheric CO2 measurement (Cff). Given enough such measurements, independent top-down estimation of US fossil fuel-CO2 emissions should be possible. However, the number of Δ14C measurements is presently constrained by cost, available sample volume, and availability of mass spectrometer measurement facilities. Δ14C is therefore measured in just a small fraction of samples obtained by flask air sampling networks around the world. Here, we develop a Projection Pursuit Regression (PPR) model to predict Cff as a function of multiple surrogate gases acquired within the NOAA/ESRL Global Greenhouse Gas Reference Network (GGGRN). The surrogates consist of measured enhancements of various anthropogenic trace gases, including CO, SF6, and halo- and hydrocarbons acquired in vertical airborne sampling profiles near Cape May, NJ and Portsmouth, NH from 2005 through 2010. Model performance for these sites is quantified based on predicted values corresponding to test data excluded from the model building process. Chi-square hypothesis test analysis indicates that these predictions and corresponding observations are consistent given our uncertainty budget which accounts for random effects and one particular systematic effect. However, quantification of the combined uncertainty of the prediction due to all relevant systematic effects is difficult because of the limited range of the observations and their relatively high fractional uncertainties at the sampling sites considered here. To account for the possibility of additional systematic effects, we incorporate another component of uncertainty into our budget. Expanding the number of Δ14C measurements in the NOAA GGGRN and building new PPR models at additional sites would improve our understanding of uncertainties and potentially

  7. Surrogate gas prediction model as a proxy for Δ14C-based measurements of fossil fuel CO2

    Science.gov (United States)

    Coakley, Kevin J.; Miller, John B.; Montzka, Stephen A.; Sweeney, Colm; Miller, Ben R.

    2016-06-01

    The measured 14C:12C isotopic ratio of atmospheric CO2 (and its associated derived Δ14C value) is an ideal tracer for determination of the fossil fuel derived CO2 enhancement contributing to any atmospheric CO2 measurement (Cff). Given enough such measurements, independent top-down estimation of U.S. fossil fuel CO2 emissions should be possible. However, the number of Δ14C measurements is presently constrained by cost, available sample volume, and availability of mass spectrometer measurement facilities. Δ14C is therefore measured in just a small fraction of samples obtained by flask air sampling networks around the world. Here we develop a projection pursuit regression (PPR) model to predict Cff as a function of multiple surrogate gases acquired within the NOAA/Earth System Research Laboratory (ESRL) Global Greenhouse Gas Reference Network (GGGRN). The surrogates consist of measured enhancements of various anthropogenic trace gases, including CO, SF6, and halocarbons and hydrocarbons acquired in vertical airborne sampling profiles near Cape May, NJ and Portsmouth, NH from 2005 to 2010. Model performance for these sites is quantified based on predicted values corresponding to test data excluded from the model building process. Chi-square hypothesis test analysis indicates that these predictions and corresponding observations are consistent given our uncertainty budget which accounts for random effects and one particular systematic effect. However, quantification of the combined uncertainty of the prediction due to all relevant systematic effects is difficult because of the limited range of the observations and their relatively high fractional uncertainties at the sampling sites considered here. To account for the possibility of additional systematic effects, we incorporate another component of uncertainty into our budget. Expanding the number of Δ14C measurements in the NOAA GGGRN and building new PPR models at additional sites would improve our understanding of
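
    Projection pursuit regression, the method used in the three records above, fits Cff ≈ Σ g_m(w_m · x) for learned directions w_m and smooth ridge functions g_m. scikit-learn has no PPR estimator, so the sketch below fits a single ridge term by alternating a spline fit of g with a Gauss-Newton update of w; the smoother, iteration count, and synthetic surrogate-gas data are all assumptions, and the published model would use several terms and its own settings.

    ```python
    # Single-term projection pursuit regression sketch: y ~ g(w . x).
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def fit_ppr_term(X, y, n_iter=10):
        w = np.linalg.lstsq(X, y, rcond=None)[0]        # initialize direction with OLS
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            t = X @ w
            order = np.argsort(t)                        # assumes no duplicate projections
            g = UnivariateSpline(t[order], y[order], k=3)
            # Gauss-Newton update: y - g(t) ~ g'(t) * (X @ dw)
            A = X * g.derivative()(t)[:, None]
            dw = np.linalg.lstsq(A, y - g(t), rcond=None)[0]
            w = w + dw
            w /= np.linalg.norm(w)
        t = X @ w
        order = np.argsort(t)
        return w, UnivariateSpline(t[order], y[order], k=3)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))    # e.g., CO, SF6, halocarbon enhancements (synthetic)
    y = np.tanh(X @ np.array([0.8, 0.5, 0.2, 0.1, 0.05])) + rng.normal(scale=0.05, size=200)
    w, g = fit_ppr_term(X, y)
    print("recovered direction:", np.round(w, 2))
    ```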

  8. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    refining formal, inductive predictive models is the quality of the archaeological and environmental data. To build models efficiently, relevant ... geomorphology, and historic information. Lessons Learned: The original model was focused on the identification of prehistoric resources. This ... system but uses predictive modeling informally. For example, there is no probability for buried archaeological deposits on the Burton Mesa, but there is

  9. A national prediction model for PM2.5 component exposures and measurement error-corrected health effect inference.

    Science.gov (United States)

    Bergen, Silas; Sheppard, Lianne; Sampson, Paul D; Kim, Sun-Young; Richards, Mark; Vedal, Sverre; Kaufman, Joel D; Szpiro, Adam A

    2013-09-01

    Studies estimating health effects of long-term air pollution exposure often use a two-stage approach: building exposure models to assign individual-level exposures, which are then used in regression analyses. This requires accurate exposure modeling and careful treatment of exposure measurement error. To illustrate the importance of accounting for exposure model characteristics in two-stage air pollution studies, we considered a case study based on data from the Multi-Ethnic Study of Atherosclerosis (MESA). We built national spatial exposure models that used partial least squares and universal kriging to estimate annual average concentrations of four PM2.5 components: elemental carbon (EC), organic carbon (OC), silicon (Si), and sulfur (S). We predicted PM2.5 component exposures for the MESA cohort and estimated cross-sectional associations with carotid intima-media thickness (CIMT), adjusting for subject-specific covariates. We corrected for measurement error using recently developed methods that account for the spatial structure of predicted exposures. Our models performed well, with cross-validated R2 values ranging from 0.62 to 0.95. Naïve analyses that did not account for measurement error indicated statistically significant associations between CIMT and exposure to OC, Si, and S. EC and OC exhibited little spatial correlation, and the corrected inference was unchanged from the naïve analysis. The Si and S exposure surfaces displayed notable spatial correlation, resulting in corrected confidence intervals (CIs) that were 50% wider than the naïve CIs, but that were still statistically significant. The impact of correcting for measurement error on health effect inference is concordant with the degree of spatial correlation in the exposure surfaces. Exposure model characteristics must be considered when performing two-stage air pollution epidemiologic analyses because naïve health effect inference may be inappropriate.
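
    The two-stage structure described here, a regression on geographic covariates plus spatial smoothing of what remains, can be sketched compactly. In the sketch below, PLS compresses the covariates and a Gaussian process over coordinates stands in for universal kriging of the residuals; all names, dimensions, and kernel choices are invented for illustration and are not the MESA models.

    ```python
    # Two-stage exposure model sketch: PLS trend + spatial smoothing of residuals.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(2)
    coords = rng.uniform(0, 100, size=(150, 2))      # monitor locations (km), synthetic
    covariates = rng.normal(size=(150, 30))          # land use, traffic, etc., synthetic
    pm_component = covariates[:, :3].sum(axis=1) + 0.02 * coords[:, 0] \
                   + rng.normal(scale=0.3, size=150)

    pls = PLSRegression(n_components=2).fit(covariates, pm_component)
    trend = pls.predict(covariates).ravel()

    # Gaussian process on the residuals as a stand-in for universal kriging.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0) + WhiteKernel(1e-2),
                                  normalize_y=True).fit(coords, pm_component - trend)

    new_covariates, new_coords = rng.normal(size=(1, 30)), np.array([[50.0, 50.0]])
    exposure = pls.predict(new_covariates).ravel() + gp.predict(new_coords)
    print(f"predicted exposure: {exposure[0]:.2f}")
    ```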

  10. Daily river flow prediction based on Two-Phase Constructive Fuzzy Systems Modeling: A case of hydrological - meteorological measurements asymmetry

    Science.gov (United States)

    Bou-Fakhreddine, Bassam; Mougharbel, Imad; Faye, Alain; Abou Chakra, Sara; Pollet, Yann

    2018-03-01

    Accurate daily river flow forecasts are essential in many applications of water resources such as hydropower operation, agricultural planning and flood control. This paper presents a forecasting approach to deal with a newly addressed situation where hydrological data exist for a period longer than that of meteorological data (measurement asymmetry). In fact, one of the potential solutions to the measurement asymmetry issue is data re-sampling. It is a matter of either considering only the hydrological data or the balanced part of the hydro-meteorological data set during the forecasting process. However, the main disadvantage is that we may lose potentially relevant information from the left-out data. In this research, the key output is a Two-Phase Constructive Fuzzy inference hybrid model that is implemented over the non-re-sampled data. The introduced modeling approach must exploit the available data efficiently, with higher prediction efficiency relative to a Constructive Fuzzy model trained over the re-sampled data set. The study was applied to the Litani River in the Bekaa Valley, Lebanon, using 4 years of rainfall and 24 years of daily river flow measurements. A Constructive Fuzzy System Model (C-FSM) and a Two-Phase Constructive Fuzzy System Model (TPC-FSM) are trained. Upon validation, the second model showed competitive performance and accuracy, with the ability to preserve higher day-to-day variability for forecasts 1, 3 and 6 days ahead. In fact, for the longest lead period, the C-FSM and TPC-FSM were able to explain 84.6% and 86.5%, respectively, of the actual river flow variation. Overall, the results indicate that the TPC-FSM model provides a better tool to capture extreme flows in the process of streamflow prediction.

  11. Hydrologic model calibration using remotely sensed soil moisture and discharge measurements: The impact on predictions at gauged and ungauged locations

    Science.gov (United States)

    Li, Yuan; Grimaldi, Stefania; Pauwels, Valentijn R. N.; Walker, Jeffrey P.

    2018-02-01

    The skill of hydrologic models, such as those used in operational flood prediction, is currently restricted by the availability of flow gauges and by the quality of the streamflow data used for calibration. The increased availability of remote sensing products provides the opportunity to further improve the model forecasting skill. A joint calibration scheme using streamflow measurements and remote sensing derived soil moisture values was examined and compared with a streamflow only calibration scheme. The efficacy of the two calibration schemes was tested in three modelling setups: 1) a lumped model; 2) a semi-distributed model with only the outlet gauge available for calibration; and 3) a semi-distributed model with multiple gauges available for calibration. The joint calibration scheme was found to slightly degrade the streamflow prediction at gauged sites during the calibration period compared with streamflow only calibration, but improvement was found at the same gauged sites during the independent validation period. A more consistent and statistically significant improvement was achieved at gauged sites not used in the calibration, due to the spatial information introduced by the remotely sensed soil moisture data. It was also found that the impact of using soil moisture for calibration tended to be stronger at the upstream and tributary sub-catchments than at the downstream sub-catchments.
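
    A joint calibration objective of the kind examined here can be as simple as a weighted combination of a discharge skill score and a soil moisture agreement score. The sketch below is one plausible form, not the study's actual objective: Nash-Sutcliffe efficiency for streamflow, correlation for remotely sensed soil moisture (which sidesteps satellite bias), and an assumed 0.7/0.3 weighting.

    ```python
    # Illustrative joint calibration cost for streamflow + satellite soil moisture.
    import numpy as np

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def joint_cost(sim_q, obs_q, sim_sm, obs_sm, w=0.7):
        corr = np.corrcoef(sim_sm, obs_sm)[0, 1]   # correlation ignores satellite SM bias
        return w * (1.0 - nse(sim_q, obs_q)) + (1.0 - w) * (1.0 - corr)
    ```

    An optimizer would then minimize `joint_cost` over the hydrologic model parameters; with w = 1 the scheme collapses to streamflow-only calibration.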

  12. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  13. Sagittal range of motion of the thoracic spine using inertial tracking device and effect of measurement errors on model predictions.

    Science.gov (United States)

    Hajibozorgi, M; Arjmand, N

    2016-04-11

    Range of motion (ROM) of the thoracic spine has implications in patient discrimination for diagnostic purposes and in biomechanical models for predictions of spinal loads. The few previous studies available have reported quite different thoracic ROMs. Total (T1-T12), lower (T5-T12) and upper (T1-T5) thoracic, lumbar (T12-S1), pelvis, and entire trunk (T1) ROMs were measured using an inertial tracking device as asymptomatic subjects flexed forward from their neutral upright position to full forward flexion. Correlations between body height and the ROMs were evaluated. The effect of measurement errors in trunk flexion (T1) on model-predicted spinal loads was also investigated. Mean peak voluntary total trunk flexion (T1) was 118.4 ± 13.9°, of which 20.5 ± 6.5° was generated by flexion of the T1 to T12 (thoracic ROM), and the remaining by flexion of the T12 to S1 (lumbar ROM) (50.2 ± 7.0°) and pelvis (47.8 ± 6.9°). Lower thoracic ROM was significantly larger than upper thoracic ROM (14.8 ± 5.4° versus 5.8 ± 3.1°). There were non-significant weak correlations between body height and the ROMs. Contribution of the pelvis to generate the total trunk flexion increased from ~20% to 40% and that of the lumbar decreased from ~60% to 42% as subjects flexed forward from upright to maximal flexion, while that of the thoracic spine remained almost constant (~16% to 20%) during the entire movement. Small uncertainties (±5°) in the measurement of trunk flexion angle resulted in considerable errors (~27%) in the model-predicted spinal loads only in activities involving small trunk flexion. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  15. A Comparison Between Measured and Predicted Hydrodynamic Damping for a Jack-Up Rig Model

    DEFF Research Database (Denmark)

    Laursen, Thomas; Rohbock, Lars; Jensen, Jørgen Juncher

    1996-01-01

    ... and irregular waves. In the paper these results will be compared with theoretical predictions based on two different methods. The first is a time simulation procedure using various stretching methods for the wave kinematics above the still water level and applying both the relative and absolute velocity assumption in Morison's equation for the wave load. The second procedure is a non-linear, single degree-of-freedom method making use of the theory of diffusion processes. This procedure is able to account for the spectral content in the random waves and requires much less computational effort than the time simulation ... results. This finding is discussed in the light of other results quoted in the literature and some general guidelines are extracted.

  16. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  17. Comparison of high pressure transient PVT measurements and model predictions. Part I.

    Energy Technology Data Exchange (ETDEWEB)

    Felver, Todd G.; Paradiso, Nicholas Joseph; Evans, Gregory Herbert; Rice, Steven F.; Winters, William Stanley, Jr.

    2010-07-01

    A series of experiments consisting of vessel-to-vessel transfers of pressurized gas using Transient PVT methodology has been conducted to provide a data set for optimizing heat transfer correlations in high pressure flow systems. In rapid expansions such as these, the heat transfer conditions are neither adiabatic nor isothermal. Compressible flow tools exist, such as NETFLOW, that can accurately calculate the pressure and other dynamical mechanical properties of such a system as a function of time. However, to properly evaluate the mass that has transferred as a function of time, these computational tools rely on heat transfer correlations that must be confirmed experimentally. In this work new data sets using helium gas are used to evaluate the accuracy of these correlations for receiver vessel sizes ranging from 0.090 L to 13 L and initial supply pressures ranging from 2 MPa to 40 MPa. The comparisons show that the correlations developed in the 1980s from sparse data sets perform well for the supply vessels but are not accurate for the receivers, particularly at early time during the transfers. This report focuses on the experiments used to obtain high quality data sets that can be used to validate computational models. Part II of this report discusses how these data were used to gain insight into the physics of gas transfer and to improve vessel heat transfer correlations. Network flow modeling and CFD modeling are also discussed.
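
    To see why the heat transfer correlations matter, consider the idealized limit that ignores them entirely. The sketch below integrates a vessel-to-vessel helium transfer assuming isothermal ideal gas behaviour and choked orifice flow; all geometry and flow parameters are invented, and the isothermal assumption is precisely what the measured early-time data show to be inadequate.

    ```python
    # Toy vessel-to-vessel helium transfer: isothermal ideal gas, choked orifice.
    import numpy as np

    R, T, GAMMA = 2077.0, 293.0, 5.0 / 3.0     # He gas constant (J/kg/K), temperature (K)
    CD, AREA = 0.9, 1e-6                        # discharge coefficient, orifice area (m^2)
    V_SUP, V_REC = 1e-3, 0.2e-3                 # supply and receiver volumes (m^3)

    def choked_mdot(p_up):
        """Choked mass flow through an orifice (valid while p_up/p_down > ~2.05 for He)."""
        return (CD * AREA * p_up * np.sqrt(GAMMA / (R * T))
                * (2.0 / (GAMMA + 1.0)) ** ((GAMMA + 1.0) / (2.0 * (GAMMA - 1.0))))

    p_sup, p_rec, dt = 20e6, 0.1e6, 1e-4
    for step in range(200000):
        if p_sup / p_rec < 2.05:                # stop when flow is no longer choked
            break
        dm = choked_mdot(p_sup) * dt
        p_sup -= dm * R * T / V_SUP             # ideal gas at fixed T: dp = dm*R*T/V
        p_rec += dm * R * T / V_REC
    print(f"t = {step * dt:.3f} s, supply = {p_sup/1e6:.2f} MPa, "
          f"receiver = {p_rec/1e6:.2f} MPa")
    ```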

  18. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Danish utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models.

  19. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence the performance of HIRLAM, in particular with respect to wind predictions. To estimate the performance of the model, two spatial resolutions (0.5 Deg. and 0.2 Deg.) and different sets of HIRLAM variables were used to predict wind speed and energy production. The predictions of energy production for the wind farms are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production...

  20. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  1. Predicting responses from Rasch measures.

    Science.gov (United States)

    Linacre, John M

    2010-01-01

    There is a growing family of Rasch models for polytomous observations. Selecting a suitable model for an existing dataset, estimating its parameters and evaluating its fit is now routine. Problems arise when the model parameters are to be estimated from the current data, but used to predict future data. In particular, ambiguities in the nature of the current data, or overfit of the model to the current dataset, may mean that better fit to the current data leads to worse fit to future data. The predictive power of several Rasch and Rasch-related models is discussed in the context of the Netflix Prize. Rasch-related models are proposed based on Singular Value Decomposition (SVD) and Boltzmann Machines.
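
    The SVD-based approach mentioned here can be illustrated in a few lines: factor the partially observed person-by-item matrix at low rank and read predictions for unobserved responses off the reconstruction. Mean-filling the missing cells, as below, is a simplification; Netflix-era models instead fit the factors to observed cells only.

    ```python
    # Rank-k SVD reconstruction as a toy response/rating predictor.
    import numpy as np

    ratings = np.array([[5, 4, 0, 1],     # 0 marks a missing response
                        [4, 5, 1, 0],
                        [1, 0, 5, 4],
                        [0, 1, 4, 5]], dtype=float)
    mask = ratings > 0
    filled = np.where(mask, ratings, ratings.sum() / mask.sum())  # crude mean fill

    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    k = 2
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]                # rank-2 reconstruction
    print("predicted response for person 0, item 2:", round(approx[0, 2], 2))
    ```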

  2. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Full Text Available Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screenings of the population at risk. Identifying individuals at high risk should allow targeted screenings and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) that underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer and Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: use of sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (OR = 3.57; 95% CI 1.427-8.931 for over 100 naevi), the number of dysplastic naevi (OR = 2.672; 95% CI 1.572-4.540 for 1 to 10 dysplastic naevi; OR = 6.487; 95% CI 1.993-21.119 for more than 10), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were
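
    The abstract gives enough to show how such a melanoma risk score (MRS) is typically assembled: logistic-regression coefficients are the natural logarithms of the odds ratios, and a subject's score sums the coefficients of the risk factors present. The sketch below uses the quoted ORs but omits the intercept and the factors truncated from the abstract, so it is illustrative only and not the published MRS.

    ```python
    # Assembling a risk score from reported odds ratios: coefficient = ln(OR).
    import math

    log_odds_ratios = {
        "sometimes_used_sunbeds":   math.log(4.018),
        "severe_solar_damage":      math.log(8.274),
        "light_brown_blond_hair":   math.log(3.222),
        "over_100_common_naevi":    math.log(3.57),
        "1_to_10_dysplastic_naevi": math.log(2.672),
        "over_10_dysplastic_naevi": math.log(6.487),
    }

    def melanoma_risk_score(factors):
        """Sum of ln(OR) over the risk factors a subject presents with."""
        return sum(log_odds_ratios[f] for f in factors)

    print(round(melanoma_risk_score({"severe_solar_damage", "over_100_common_naevi"}), 2))
    ```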

  3. Predictability of investment behavior from brain information measured by functional near-infrared spectroscopy: a Bayesian neural network model.

    Science.gov (United States)

    Shimokawa, T; Suzuki, K; Misawa, T; Miyagawa, K

    2009-06-30

    In line with previous fMRI studies, and as is apparent from experimental results, cerebral blood flow (oxygenated hemoglobin (oxyHb) concentration) in the medial prefrontal cortex (MPFC) and orbital cortex (OFC), as observed with fNIRS (functional near-infrared spectroscopy), is presumed to be closely related to reward prediction and risk prediction in decision-making under risk. Analysis using a predictive model with a three-layer perceptron revealed that changes in the oxyHb concentration in cerebral blood, as indicated by fNIRS observation, contain information that effectively predicts investment behavior. This paper indicates that adding oxyHb concentration at the aforementioned brain sites as a predictive factor allows subjects' investment behavior to be predicted with a considerable degree of precision. This indicates that information provided by fNIRS allows valid analysis of investment behavior, and it also suggests wide-ranging practical applications for this information, such as investment assistance.

  4. State-resolved measurements of (1)CH(2) removal confirm predictions of the gateway model for electronic quenching.

    Science.gov (United States)

    Gannon, K L; Blitz, M A; Kovács, T; Pilling, M J; Seakins, P W

    2010-01-14

    Collisional quenching of electronically excited states by inert gases is a fundamental physical process. For reactive excited species such as singlet methylene, (1)CH(2), the competition between relaxation and reaction has important implications in practical systems such as combustion. The gateway model has previously been applied to the relaxation of (1)CH(2) by inert gases [U. Bley and F. Temps, J. Chem. Phys. 98, 1058 (1993)]. In this model, gateway states with mixed singlet and triplet character allow conversion between the two electronic states. The gateway model makes very specific predictions about the relative relaxation rates of ortho and para quantum states of methylene at low temperatures; relaxation from para gateway states leads to faster deactivation independent of the nature of the collision partner. Experimental data are reported here which for the first time confirm these predictions at low temperatures for helium. However, it was found that in contrast with the model predictions, the magnitude of the effect decreases with increasing size of the collision partner. It is proposed that the attractive potential energy surface for larger colliders allows alternative gateway states to contribute to relaxation removing the dominance of the para gateway states.

  5. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  6. Predictive accuracy of novel risk factors and markers: A simulation study of the sensitivity of different performance measures for the Cox proportional hazards regression model

    NARCIS (Netherlands)

    P.C. Austin (Peter); Pencinca, M.J. (Michael J.); E.W. Steyerberg (Ewout)

    2017-01-01

    Predicting outcomes that occur over time is important in clinical, population health, and health services research. We compared changes in different measures of performance when a novel risk factor or marker was added to an existing Cox proportional hazards regression model. We performed

  7. What do saliency models predict?

    Science.gov (United States)

    Koehler, Kathryn; Guo, Fei; Zhang, Sheng; Eckstein, Miguel P.

    2014-01-01

    Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). Use of a single image set to systematically compare free viewing to other tasks has never been performed. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might be also limited by inherent variation of what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. PMID:24618107

  8. Development and validation of a prediction model for measurement variability of lung nodule volumetry in patients with pulmonary metastases.

    Science.gov (United States)

    Hwang, Eui Jin; Goo, Jin Mo; Kim, Jihye; Park, Sang Joon; Ahn, Soyeon; Park, Chang Min; Shin, Yeong-Gil

    2017-08-01

    To develop a prediction model for the variability range of lung nodule volumetry and validate the model in detecting nodule growth. For model development, 50 patients with metastatic nodules were prospectively included. Two consecutive CT scans were performed to assess volumetry for 1,586 nodules. Nodule volume, surface voxel proportion (SVP), attachment proportion (AP) and absolute percentage error (APE) were calculated for each nodule, and quantile regression analyses were performed to model the 95% percentile of APE. For validation, 41 patients who underwent metastasectomy were included. After volumetry of resected nodules, sensitivity and specificity for diagnosis of metastatic nodules were compared between two different thresholds of nodule growth determination: a uniform 25% volume change threshold and an individualized threshold calculated from the model (estimated 95% percentile APE). SVP and AP were included in the final model: Estimated 95% percentile APE = 37.82 · SVP + 48.60 · AP − 10.87. In the validation session, the individualized threshold showed significantly higher sensitivity for diagnosis of metastatic nodules than the uniform 25% threshold (75.0% vs. 66.0%, P = 0.004). CONCLUSION: Estimated 95% percentile APE as an individualized threshold of nodule growth showed greater sensitivity in diagnosing metastatic nodules than a global 25% threshold. • The 95% percentile APE of a particular nodule can be predicted. • Estimated 95% percentile APE can be utilized as an individualized threshold. • More sensitive diagnosis of metastasis can be made with an individualized threshold. • Tailored nodule management can be provided during nodule growth follow-up.
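
    The reported model is explicit enough to implement directly. A small sketch, assuming SVP and AP are expressed as proportions between 0 and 1 and volume change as a percentage:

    ```python
    # Individualized nodule-growth threshold from the reported quantile model:
    # estimated 95th-percentile APE = 37.82*SVP + 48.60*AP - 10.87.
    def estimated_95th_percentile_ape(svp, ap):
        return 37.82 * svp + 48.60 * ap - 10.87

    def nodule_grew(volume_change_percent, svp, ap):
        """Flag growth when measured volume change exceeds the nodule's
        individualized measurement-variability threshold."""
        return volume_change_percent > estimated_95th_percentile_ape(svp, ap)

    print(estimated_95th_percentile_ape(svp=0.5, ap=0.1))   # ~12.9% threshold
    ```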

  9. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  10. Comparative studies of the ITU-T prediction model for radiofrequency radiation emission and real time measurements at some selected mobile base transceiver stations in Accra, Ghana

    International Nuclear Information System (INIS)

    Obeng, S. O

    2014-07-01

    Recent developments in the electronics industry have led to the widespread use of radiofrequency (RF) devices in various areas including telecommunications. The increasing number of mobile base transceiver stations (BTS) as well as their proximity to residential areas have been accompanied by public health concerns due to the radiation exposure. The main objective of this research was to compare and modify the ITU-T predictive model for radiofrequency radiation emission for BTS using measured data at some selected cell sites in Accra, Ghana. Theoretical and experimental assessments of radiofrequency exposures due to mobile base station antennas were analysed. The maximum and minimum average power densities measured from individual base stations in the town were 1.86 µW/m2 and 0.00961 µW/m2, respectively. The ITU-T predictive model power density ranged between 6.40 mW/m2 and 0.344 W/m2. Results obtained showed a variation between measured power density levels and the ITU-T predictive model. The ITU-T model power density levels decrease with increasing radial distance, while real-time measurements do not, due to fluctuations during measurement. The ITU-T model overestimated the power density levels by a factor of 10^5 compared to real-time measurements, and it was modified to reduce the level of overestimation. The results showed that radiation intensity varies from one base station to another even at the same distance. The occupational exposure quotient ranged between 5.43E-10 and 1.89E-08, whilst the general public exposure quotient ranged between 2.72E-09 and 9.44E-08. The results show that the RF exposure levels in Accra from these mobile phone base station antennas are below the permitted RF exposure limit for the general public recommended by the International Commission on Non-Ionizing Radiation Protection. (au)
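
    The overestimation is unsurprising given that ITU-style predictions start from a free-space, far-field estimate, S = P·G/(4πd²), which neglects real propagation losses, antenna downtilt, and obstructions. A minimal sketch with an invented EIRP, compared against the ICNIRP (1998) general-public reference level at 900 MHz (f/200 = 4.5 W/m2):

    ```python
    # Free-space far-field power density and exposure quotient; EIRP is invented.
    import math

    def power_density(eirp_watts, distance_m):
        """Free-space power density S = EIRP / (4*pi*d^2), in W/m^2."""
        return eirp_watts / (4.0 * math.pi * distance_m ** 2)

    ICNIRP_LIMIT_900MHZ = 4.5   # W/m^2, general-public reference level at 900 MHz

    for d in (10, 50, 100, 300):
        s = power_density(eirp_watts=1000.0, distance_m=d)
        print(f"d = {d:4d} m: S = {s:.6f} W/m^2, "
              f"exposure quotient = {s / ICNIRP_LIMIT_900MHZ:.2e}")
    ```

    Even these idealized values sit far below the ICNIRP limit, while the measured microwatt-per-square-metre levels quoted above are lower still, consistent with the reported factor-of-10^5 overestimation.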

  11. Comparison of Shuttle Imaging Radar-B ocean wave image spectra with linear model predictions based on aircraft measurements

    Science.gov (United States)

    Monaldo, Frank M.; Lyzenga, David R.

    1988-01-01

    During October 1984, coincident Shuttle Imaging Radar-B synthetic aperture radar (SAR) imagery and wave measurements from airborne instrumentation were acquired. The two-dimensional wave spectrum was measured by both a radar ocean-wave spectrometer and a surface-contour radar aboard the aircraft. In this paper, two-dimensional SAR image intensity variance spectra are compared with these independent measures of ocean wave spectra to verify previously proposed models of the relationship between such SAR image spectra and ocean wave spectra. The results illustrate both the functional relationship between SAR image spectra and ocean wave spectra and the limitations imposed on the imaging of short-wavelength, azimuth-traveling waves.

  12. Influence of Temperature, Relative Humidity, and Soil Properties on the Soil-Air Partitioning of Semivolatile Pesticides: Laboratory Measurements and Predictive Models.

    Science.gov (United States)

    Davie-Martin, Cleo L; Hageman, Kimberly J; Chin, Yu-Ping; Rougé, Valentin; Fujita, Yuki

    2015-09-01

    Soil-air partition coefficient (Ksoil-air) values are often employed to investigate the fate of organic contaminants in soils; however, these values have not been measured for many compounds of interest, including semivolatile current-use pesticides. Moreover, predictive equations for estimating Ksoil-air values for pesticides (other than the organochlorine pesticides) have not been robustly developed, due to a lack of measured data. In this work, a solid-phase fugacity meter was used to measure the Ksoil-air values of 22 semivolatile current- and historic-use pesticides and their degradation products. Ksoil-air values were determined for two soils (semiarid and volcanic) under a range of environmentally relevant temperature (10-30 °C) and relative humidity (30-100%) conditions, such that 943 Ksoil-air measurements were made. Measured values were used to derive a predictive equation for pesticide Ksoil-air values based on temperature, relative humidity, soil organic carbon content, and pesticide-specific octanol-air partition coefficients. Pesticide volatilization losses from soil, calculated with the newly derived Ksoil-air predictive equation and a previously described pesticide volatilization model, were compared to previous results and showed that the choice of Ksoil-air predictive equation mainly affected the more-volatile pesticides and that the way in which relative humidity was accounted for was the most critical difference.

  13. Configuration and validation of an analytical model predicting secondary neutron radiation in proton therapy using Monte Carlo simulations and experimental measurements.

    Science.gov (United States)

    Farah, J; Bonfrate, A; De Marzi, L; De Oliveira, A; Delacroix, S; Martinetti, F; Trompier, F; Clairand, I

    2015-05-01

    This study focuses on the configuration and validation of an analytical model predicting leakage neutron doses in proton therapy. Using Monte Carlo (MC) calculations, a facility-specific analytical model was built to reproduce out-of-field neutron doses while separately accounting for the contribution of intra-nuclear cascade, evaporation, epithermal and thermal neutrons. This model was first trained to reproduce in-water neutron absorbed doses and in-air neutron ambient dose equivalents, H*(10), calculated using MCNPX. Its capacity in predicting out-of-field doses at any position not involved in the training phase was also checked. The model was next expanded to enable a full 3D mapping of H*(10) inside the treatment room, tested in a clinically relevant configuration and finally consolidated with experimental measurements. Following the literature approach, the work first proved that it is possible to build a facility-specific analytical model that efficiently reproduces in-water neutron doses and in-air H*(10) values with a maximum difference less than 25%. In addition, the analytical model succeeded in predicting out-of-field neutron doses in the lateral and vertical direction. Testing the analytical model in clinical configurations proved the need to separate the contribution of internal and external neutrons. The impact of modulation width on stray neutrons was found to be easily adjustable while beam collimation remains a challenging issue. Finally, the model performance agreed with experimental measurements with satisfactory results considering measurement and simulation uncertainties. Analytical models represent a promising solution that substitutes for time-consuming MC calculations when assessing doses to healthy organs. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  14. Clinical Prediction Performance of Glaucoma Progression Using a 2-Dimensional Continuous-Time Hidden Markov Model with Structural and Functional Measurements.

    Science.gov (United States)

    Song, Youngseok; Ishikawa, Hiroshi; Wu, Mengfei; Liu, Yu-Ying; Lucy, Katie A; Lavinsky, Fabio; Liu, Mengling; Wollstein, Gadi; Schuman, Joel S

    2018-03-20

    Previously, we introduced a state-based 2-dimensional continuous-time hidden Markov model (2D CT HMM) to model the pattern of detected glaucoma changes using structural and functional information simultaneously. The purpose of this study was to evaluate the detected glaucoma change prediction performance of the model in a real clinical setting using a retrospective longitudinal dataset. Longitudinal, retrospective study. One hundred thirty-four eyes from 134 participants diagnosed with glaucoma or as glaucoma suspects (average follow-up, 4.4±1.2 years; average number of visits, 7.1±1.8). A 2D CT HMM model was trained using OCT (Cirrus HD-OCT; Zeiss, Dublin, CA) average circumpapillary retinal nerve fiber layer (cRNFL) thickness and visual field index (VFI) or mean deviation (MD; Humphrey Field Analyzer; Zeiss). The model was trained using a subset of the data (107 of 134 eyes [80%]) including all visits except for the last visit, which was used to test the prediction performance (training set). Additionally, the remaining 27 eyes were used for secondary performance testing as an independent group (validation set). The 2D CT HMM predicts 1 of 4 possible detected state changes based on 1 input state. Prediction accuracy was assessed as the percentage of correct predictions against the patient's actual recorded state. In addition, deviations of the predicted long-term detected change paths from the actual detected change paths were measured. Baseline mean ± standard deviation age was 61.9±11.4 years, VFI was 90.7±17.4, MD was -3.50±6.04 dB, and cRNFL thickness was 74.9±12.2 μm. The accuracy of detected glaucoma change prediction using the training set was comparable with the validation set (57.0% and 68.0%, respectively). Prediction deviation from the actual detected change path showed stability throughout patient follow-up. The 2D CT HMM demonstrated promising performance in predicting detected glaucoma change in a simulated clinical setting.
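
    The data layout used here, per-visit (cRNFL thickness, VFI) pairs grouped into per-eye sequences, can be illustrated with a discrete-time Gaussian HMM as a stand-in; hmmlearn has no continuous-time variant, and the four states, synthetic data, and all settings below are assumptions rather than the paper's model.

    ```python
    # Discrete-time Gaussian HMM sketch standing in for the paper's 2D CT HMM.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(4)
    # Synthetic per-eye sequences of (cRNFL thickness in um, VFI) visit measurements.
    sequences = [np.column_stack([75 - 0.5 * np.arange(n) + rng.normal(0, 2, n),
                                  90 - 0.3 * np.arange(n) + rng.normal(0, 1, n)])
                 for n in rng.integers(5, 10, size=50)]
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]

    model = GaussianHMM(n_components=4, covariance_type="full", n_iter=100,
                        random_state=0).fit(X, lengths)
    states = model.predict(X)                       # decoded per-visit disease states
    next_state_probs = model.transmat_[states[-1]]  # predicted change from last state
    print(np.round(next_state_probs, 2))
    ```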

  15. Sex prediction from morphometric palatal rugae measures.

    Science.gov (United States)

    Saadeh, Ma; Ghafari, J G; Haddad, R V; Ayoub, F

    2017-07-01

    While abundant research has been conducted on palatal rugae (PR), the literature pertaining to the sex dimorphism of the palatal rugae and their use for sex prediction is inconclusive. Moreover, palatal rugae have been classified into categories based on length, shape, direction and unification, but accurate rugal morphometric linear and angular measurements have not yet been reported. The aims of this study were to (1) assess the dimensions and bilateral symmetry of the first three palatal rugae in an adult population and (2) explore sex dimorphism and the ability to predict sex from palatal rugae measurements. The maxillary dental casts of 252 non-growing subjects (119 males, 130 females, mean age 25.6 ± 7.7 years) were scanned using a laser system (Perceptron ScanWorks® V5). Angular and linear transverse and anteroposterior measures of the first three palatal rugae were recorded. Independent samples t-tests and paired samples t-tests were used to test for side-related differences and sex dimorphism. Multiple logistic regression was employed to model sex using associated palatal rugae measures. Palatal rugae exhibited lateral asymmetry in the majority of bilateral measures. Males presented with larger values for 9 out of 28 parameters. Four linear rugae measurements and one angular measurement together correctly classified 71.4% of the subjects by sex. Morphometric palatal rugae measurements demonstrated promising usefulness in sex prediction. Recording morphometric linear and angular measures is recommended as an adjunct to the commonly used classification based on the shapes of rugae.

  16. Tropospheric scintillation prediction models for a high elevation angle based on measured data from a tropical region

    Science.gov (United States)

    Abdul Rahim, Nadirah Binti; Islam, Md. Rafiqul; J. S., Mandeep; Dao, Hassan; Bashir, Saad Osman

    2013-12-01

    The recent rapid evolution of new satellite services, including VSAT for internet access, LAN interconnection and multimedia applications, has triggered an increasing demand for bandwidth usage by satellite communications. However, these systems are susceptible to propagation effects that become significant as the frequency increases. Scintillation is the rapid signal fluctuation of the amplitude and phase of a radio wave, which is significant in tropical climates. This paper presents the analysis of the tropospheric scintillation data for satellite to Earth links at the Ku-band. Twelve months of data (January-December 2011) were collected and analyzed to evaluate the effect of tropospheric scintillation. Statistics were then further analyzed to inspect the seasonal, worst-month, diurnal and rain-induced scintillation effects. By employing the measured scintillation data, a modification of the Karasawa model for scintillation fades and enhancements is proposed based on data measured in Malaysia.

  17. Firmness prediction in Prunus persica 'Calrico' peaches by visible/short-wave near infrared spectroscopy and acoustic measurements using optimised linear and non-linear chemometric models.

    Science.gov (United States)

    Lafuente, Victoria; Herrera, Luis J; Pérez, María del Mar; Val, Jesús; Negueruela, Ignacio

    2015-08-15

    In this work, near infrared spectroscopy (NIR) and an acoustic measure (AWETA), two non-destructive methods, were applied to Prunus persica fruit 'Calrico' (n = 260) to predict Magness-Taylor (MT) firmness. Separate and combined use of these measures was evaluated and compared using partial least squares (PLS) and least squares support vector machine (LS-SVM) regression methods. Also, a mutual-information-based variable selection method, seeking to find the most significant variables to produce optimal accuracy of the regression models, was applied to a joint set of variables (NIR wavelengths and AWETA measure). The newly proposed combined NIR-AWETA model gave good values of the determination coefficient (R(2)) for the PLS and LS-SVM methods (0.77 and 0.78, respectively), improving the reliability of MT firmness prediction in comparison with separate NIR and AWETA predictions. The three variables selected by the variable selection method (AWETA measure plus NIR wavelengths 675 and 697 nm) achieved R(2) values of 0.76 and 0.77 for PLS and LS-SVM, respectively. These results indicated that the proposed mutual-information-based variable selection algorithm was a powerful tool for the selection of the most relevant variables. © 2014 Society of Chemical Industry.
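
    The fusion-plus-selection workflow can be sketched generically. In the sketch below, mutual information ranks a joint set of NIR and acoustic variables and an RBF support vector regressor stands in for LS-SVM (scikit-learn has no LS-SVM); the synthetic data, the choice of three selected variables, and all hyperparameters are assumptions.

    ```python
    # Mutual-information variable selection over fused NIR + acoustic variables,
    # followed by cross-validated regression of firmness.
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    nir = rng.normal(size=(260, 100))                             # NIR absorbances (synthetic)
    aweta = 0.5 * nir[:, 40] + rng.normal(scale=0.2, size=260)    # acoustic firmness index
    X = np.column_stack([nir, aweta])
    firmness = 3.0 * aweta + nir[:, 40] + rng.normal(scale=0.3, size=260)  # MT firmness

    mi = mutual_info_regression(X, firmness, random_state=0)
    top3 = np.argsort(mi)[-3:]                                    # keep 3 most informative
    r2 = cross_val_score(SVR(kernel="rbf", C=10.0), X[:, top3], firmness,
                         cv=5, scoring="r2")
    print("selected columns:", top3, "mean R^2:", r2.mean().round(2))
    ```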

  18. SUPPORT VECTOR MACHINE METHOD FOR PREDICTING INVESTMENT MEASURES

    Directory of Open Access Journals (Sweden)

    Olga V. Kitova

    2016-01-01

    Full Text Available Possibilities of applying an intelligent machine learning technique based on support vectors to predicting investment measures are considered in the article. The advantages of the support vector method over traditional econometric techniques for improving forecast quality are described. Computer modeling results from tuning support vector machine models, developed with the programming language Python, for predicting some investment measures are shown.

  19. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation, then rival strategies can still be compared based on repeated bootstraps of the same data. Often, however, the overall performance of rival strategies is similar and it is thus difficult to decide on one model. Here, we investigate the variability of the prediction models that results when the same ... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  20. Performance of STICS model to predict rainfed corn evapotranspiration and biomass evaluated for 6 years between 1995 and 2006 using daily aggregated eddy covariance fluxes and ancillary measurements.

    Science.gov (United States)

    Pattey, Elizabeth; Jégo, Guillaume; Bourgeois, Gaétan

    2010-05-01

    Verifying the performance of process-based crop growth models to predict evapotranspiration and crop biomass is a key component of the adaptation of agricultural crop production to climate variations. STICS, developed by INRA, was among the models selected by Agriculture and Agri-Food Canada to be implemented for environmental assessment studies on climate variations, because of its built-in ability to assimilate biophysical descriptors such as LAI derived from satellite imagery, and its open architecture. The model prediction of shoot biomass was calibrated using destructive biomass measurements over one season, by adjusting six cultivar parameters and three generic plant parameters to define two grain corn cultivars adapted to the 1000-km long Mixedwood Plains ecozone. Its performance was then evaluated using a database of 40 years-sites of corn destructive biomass and yield. In this study we evaluate the temporal response of STICS evapotranspiration and biomass accumulation predictions against estimates using daily aggregated eddy covariance fluxes. The flux tower was located at an experimental farm south of Ottawa, and measurements were carried out over corn fields in 1995, 1996, 1998, 2000, 2002 and 2006. Daytime and nighttime fluxes were quality-assured (QC/QA) and gap-filled separately. Soil respiration was partitioned to calculate the corn net daily CO2 uptake, which was converted into dry biomass. Out of the six growing seasons, three (1995, 1998, 2002) had water stress periods during corn grain filling. Year 2000 was cool and wet, while 1996 had heat and rainfall distributed evenly over the season and 2006 had a wet spring. STICS can predict evapotranspiration using either crop coefficients, when wind speed and air humidity are not available, or a resistance approach. The first approach provided higher predictions for all years than the resistance approach and the flux measurements. The temporal dynamics of the STICS evapotranspiration predictions were very good for the growing seasons without

  1. Soil pH Errors Propagation from Measurements to Spatial Predictions - Cost Benefit Analysis and Risk Assessment Implications for Practitioners and Modelers

    Science.gov (United States)

    Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.

    2017-12-01

    The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of the impacts of errors and uncertainty on cost-benefit analysis and risk assessments. Soil pH was selected as one of the most commonly measured soil properties, used for liming recommendations. The objective of this study was to assess the size of errors from different sources and their implications for management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among the different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE, 0.79 pH units, was from spatial aggregation (SSURGO vs Kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming the order of data acquisition follows the transaction distance, i.e. from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 lime, which translates into $111 ha-1 that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on uncertainty predictions (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into a $555-1,111 investment that

  2. Predicting the liveweight of sheep by using linear body measurements

    African Journals Online (AJOL)

    -Awuah, E Ampofo, R Dodoo. Abstract. The relationship between live body weight and some linear body measurements is explored using data on sheep. Predictive models for body weight were then fitted to the data. The optimum model for ...

  3. Micrometeorological measurement of hexachlorobenzene and polychlorinated biphenyl compound air-water gas exchange in Lake Superior and comparison to model predictions

    Directory of Open Access Journals (Sweden)

    M. D. Rowe

    2012-05-01

    Full Text Available Air-water exchange fluxes of persistent, bioaccumulative and toxic (PBT substances are frequently estimated using the Whitman two-film (W2F method, but micrometeorological flux measurements of these compounds over water are rarely attempted. We measured air-water exchange fluxes of hexachlorobenzene (HCB and polychlorinated biphenyls (PCBs on 14 July 2006 in Lake Superior using the modified Bowen ratio (MBR method. Measured fluxes were compared to estimates using the W2F method, and to estimates from an Internal Boundary Layer Transport and Exchange (IBLTE model that implements the NOAA COARE bulk flux algorithm and gas transfer model. We reveal an inaccuracy in the estimate of water vapor transfer velocity that is commonly used with the W2F method for PBT flux estimation, and demonstrate the effect of use of an improved estimation method. Flux measurements were conducted at three stations with increasing fetch in offshore flow (15, 30, and 60 km in southeastern Lake Superior. This sampling strategy enabled comparison of measured and predicted flux, as well as modification in near-surface atmospheric concentration with fetch, using the IBLTE model. Fluxes estimated using the W2F model were compared to fluxes measured by MBR. In five of seven cases in which the MBR flux was significantly greater than zero, concentration increased with fetch at 1-m height, which is qualitatively consistent with the measured volatilization flux. As far as we are aware, these are the first reported ship-based micrometeorological air-water exchange flux measurements of PCBs.
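    For readers unfamiliar with the Whitman two-film (W2F) method referenced above, a minimal sketch of the flux calculation follows; the transfer velocities and concentrations are illustrative assumptions, not values from the Lake Superior campaign:

    ```python
    # Whitman two-film air-water exchange flux: two resistances in series,
    # with the air-side resistance scaled by the dimensionless Henry constant.
    def w2f_flux(c_water, c_air, H_dimensionless, k_w, k_a):
        """Net volatilization flux (positive = water-to-air).

        c_water, c_air : concentrations in water and air (mol m^-3)
        H_dimensionless: Henry's law constant, C_air/C_water at equilibrium
        k_w, k_a       : water- and air-side transfer velocities (m d^-1)
        """
        k_ol = 1.0 / (1.0 / k_w + 1.0 / (k_a * H_dimensionless))  # overall velocity
        return k_ol * (c_water - c_air / H_dimensionless)          # mol m^-2 d^-1

    # Illustrative magnitudes only (assumed, not measured values).
    print(w2f_flux(c_water=1e-9, c_air=5e-12, H_dimensionless=2e-3, k_w=0.5, k_a=500.0))
    ```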

  4. A comparison of forest height prediction from FIA field measurement and LiDAR data via spatial models

    Science.gov (United States)

    Yuzhen Li

    2009-01-01

    Previous studies have shown a high correspondence between tree height measurements acquired from airborne LiDAR and those measured using conventional field techniques. Though these results are very promising, most of the studies were conducted over small experimental areas, and tree height was measured carefully or using expensive instruments in the field, which is...

  5. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
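    As a hedged illustration of the bootstrap ("bagged") prediction idea described above, averaging a plug-in predictive density over bootstrap resamples of the data, consider this small sketch under an assumed Gaussian plug-in model:

    ```python
    # Bagging a plug-in predictive density: resample the data, refit, average.
    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.normal(loc=1.0, scale=2.0, size=50)   # observed sample

    def plug_in_density(x, sample):
        """Plug-in Gaussian predictive density with MLE parameters."""
        mu, sigma = sample.mean(), sample.std()
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    x = 0.0
    boot = [plug_in_density(x, rng.choice(data, size=len(data), replace=True))
            for _ in range(1000)]
    print("plug-in:", plug_in_density(x, data), "bagged:", np.mean(boot))
    ```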

  6. Prediction of harmful water quality parameters combining weather, air quality and ecosystem models with in situ measurement

    Science.gov (United States)

    The ability to predict water quality in lakes is important since lakes are sources of water for agriculture, drinking, and recreational uses. Lakes are also home to a dynamic ecosystem of lacustrine wetlands and deep waters. They are sensitive to pH changes and are dependent on d...

  7. Model Predictions and Measured Skin Damage Thresholds for 1.54 Micrometers Laser Pulses in Porcine Skin

    National Research Council Canada - National Science Library

    Roach, William P; Cain, Clarence; Schuster, Kurt; Stockton, Kevin; Stolarski, David S; Galloway, Robert; Rockwell, Benjamin

    2004-01-01

    .... Expanding on this preliminary source-term model using a Gaussian profile to describe the spatial extent of laser pulse interaction in skin, we report on the coupling of temporal consideration to the model...

  8. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  9. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... Linear MPC: (1) uses a linear model, ẋ = Ax + Bu; (2) quadratic cost function, F = xᵀQx + uᵀRu; (3) linear constraints, Hx + Gu < 0; (4) solved as a quadratic program. Nonlinear MPC: (1) nonlinear model, ẋ = f(x, u); (2) possibly non-quadratic cost function, F = F(x, u); (3) nonlinear constraints, h(x, u) < 0; (4) solved as a nonlinear program.
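    The quadratic program summarized in the record above can be written down directly. A toy sketch follows, assuming the cvxpy modeling package and illustrative system matrices (not taken from the lecture notes):

    ```python
    # Linear MPC as a quadratic program: quadratic cost, linear dynamics and
    # constraints over a finite horizon N.
    import numpy as np
    import cvxpy as cp

    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized version of x_dot = Ax + Bu
    B = np.array([[0.0], [0.1]])
    Q, R, N = np.eye(2), 0.1 * np.eye(1), 20
    x0 = np.array([1.0, 0.0])

    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, constr = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)   # F = x'Qx + u'Ru
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],          # linear model
                   cp.abs(u[:, k]) <= 1.0]                            # linear constraint
    cp.Problem(cp.Minimize(cost), constr).solve()
    print("first control move:", u.value[:, 0])
    ```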

  10. Using a novel flood prediction model and GIS automation to measure the valley and channel morphology of large river networks

    Science.gov (United States)

    Traditional methods for measuring river valley and channel morphology require intensive ground-based surveys which are often expensive, time consuming, and logistically difficult to implement. The number of surveys required to assess the hydrogeomorphic structure of large river n...

  11. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research by academics and practitioners has addressed models for bankruptcy prediction and credit risk management. In spite of numerous studies forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend toward machine learning models (support vector machines, bagging, boosting, and random forest) for predicting bankruptcy one year prior to the event. When the performance of this unconventional approach is compared with results obtained by discriminant analysis, logistic regression, and neural networks, bagging, boosting, and random forest models outperform the other techniques, and prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. We therefore analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models are updated in line with new trends by calculating the influence of eliminating selected variables on their overall prediction ability.
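    As a hedged sketch of the kind of comparison the abstract describes (a classical statistical model against an ensemble learner), assuming scikit-learn and a synthetic stand-in for financial-ratio data rather than the Slovak company dataset:

    ```python
    # Cross-validated comparison of logistic regression vs. random forest on an
    # imbalanced, synthetic "bankruptcy" classification problem.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=10, weights=[0.9, 0.1],
                               random_state=7)
    for name, mdl in [("logistic regression", LogisticRegression(max_iter=1000)),
                      ("random forest", RandomForestClassifier(n_estimators=200))]:
        auc = cross_val_score(mdl, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: mean CV AUC = {auc:.3f}")
    ```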

  12. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  13. Multiple Steps Prediction with Nonlinear ARX Models

    OpenAIRE

    Zhang, Qinghua; Ljung, Lennart

    2007-01-01

    NLARX (NonLinear AutoRegressive with eXogenous inputs) models are frequently used in black-box nonlinear system identification. Though it is easy to make one step ahead prediction with such models, multiple steps prediction is far from trivial. The main difficulty is that in general there is no easy way to compute the mathematical expectation of an output conditioned by past measurements. An optimal solution would require intensive numerical computations related to nonlinear filtering. The pur...
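    The difficulty noted above, that the conditional expectation of a future output has no closed form for a nonlinear model, is commonly approximated by Monte Carlo simulation. A minimal sketch, with a toy one-step model standing in for an identified NLARX model:

    ```python
    # Multi-step prediction by simulation: propagate noise through the nonlinear
    # model instead of naively feeding point predictions back into it.
    import numpy as np

    def f(y_prev, u):                         # toy one-step NLARX model (assumed)
        return 0.8 * np.tanh(y_prev) + 0.3 * u

    rng = np.random.default_rng(3)
    y0, u_future, sigma = 0.5, [1.0, 0.2, -0.5], 0.1

    n_mc = 5000
    y = np.full(n_mc, y0)
    for u in u_future:                        # k-step-ahead by simulation
        y = f(y, u) + rng.normal(scale=sigma, size=n_mc)
    print("MC estimate of E[y_{t+3} | data]:", y.mean())
    ```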

  14. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
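    A degree-day model of the kind described accumulates heat units above a developmental threshold. A minimal sketch follows; the 10 °C base temperature is an illustrative assumption, not the published threshold for the cranberry fruitworm:

    ```python
    # Daily degree-days by the simple averaging method, summed over a season.
    def degree_days(t_min, t_max, t_base=10.0):
        """Heat units accumulated in one day above the base temperature."""
        return max(0.0, (t_min + t_max) / 2.0 - t_base)

    daily = [(8, 18), (12, 24), (10, 22)]     # (min, max) °C for three days
    print("accumulated DD:", sum(degree_days(lo, hi) for lo, hi in daily))
    ```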

  15. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  16. Incorporation of CT-based measurements of trunk anatomy into subject-specific musculoskeletal models of the spine influences vertebral loading predictions.

    Science.gov (United States)

    Bruno, Alexander G; Mokhtarzadeh, Hossein; Allaire, Brett T; Velie, Kelsey R; De Paolis Kaluza, M Clara; Anderson, Dennis E; Bouxsein, Mary L

    2017-10-01

    We created subject-specific musculoskeletal models of the thoracolumbar spine by incorporating spine curvature and muscle morphology measurements from computed tomography (CT) scans to determine the degree to which vertebral compressive and shear loading estimates are sensitive to variations in trunk anatomy. We measured spine curvature and trunk muscle morphology using spine CT scans of 125 men, and then created four different thoracolumbar spine models for each person: (i) height and weight adjusted (Ht/Wt models); (ii) height, weight, and spine curvature adjusted (+C models); (iii) height, weight, and muscle morphology adjusted (+M models); and (iv) height, weight, spine curvature, and muscle morphology adjusted (+CM models). We determined vertebral compressive and shear loading at three regions of the spine (T8, T12, and L3) for four different activities. Vertebral compressive loads predicted by the subject-specific CT-based musculoskeletal models ranged from 54% lower to 45% higher than those estimated using musculoskeletal models adjusted only for subject height and weight. The impact of subject-specific information on vertebral loading estimates varied with the activity and spinal region. Vertebral loading estimates were more sensitive to incorporation of subject-specific spinal curvature than subject-specific muscle morphology. Our results indicate that individual variations in spine curvature and trunk muscle morphology can have a major impact on estimated vertebral compressive and shear loads, and thus should be accounted for when estimating subject-specific vertebral loading. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:2164-2173, 2017.

  17. Predicted serum folate concentrations based on in vitro studies and kinetic modeling are consistent with measured folate concentrations in humans

    NARCIS (Netherlands)

    Verwei, M.; Freidig, A.P.; Havenaar, R.; Groten, J.P.

    2006-01-01

    The nutritional quality of new functional or fortified food products depends on the bioavailability of the nutrient(s) in the human body. Bioavailability is often determined in human intervention studies by measurements of plasma or serum profiles over a certain time period. These studies are time

  18. Predictive Modeling of Polymer Mechanical Behavior Coupled to Chemical Change/ Technique Development for Measuring Polymer Physical Aging.

    Energy Technology Data Exchange (ETDEWEB)

    Kropka, Jamie Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stavig, Mark E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Arechederra, Gabe Kenneth [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); McCoy, John D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    Develop an understanding of the evolution of glassy polymer mechanical response during aging and the mechanisms associated with that evolution. That understanding will be used to develop constitutive models to assess the impact of stress evolution in encapsulants on NW designs.

  19. Comparison of co-located independent ground-based middle atmospheric wind and temperature measurements with numerical weather prediction models

    NARCIS (Netherlands)

    Le Pichon, A.; Assink, J.D.; Heinrich, P.; Blanc, E.; Charlton-Perez, A.; Lee, C.F.; Keckhut, P.; Hauchecorne, A.; Rufenacht, R.; Kampfer, N.; Drob, D.P.; Smets, P.S.M.; Evers, L.G.; Ceranna, L.; Pilger, C.; Ross, O.; Claud, C.

    2015-01-01

    High-resolution, ground-based and independent observations including co-located wind radiometer, lidar stations, and infrasound instruments are used to evaluate the accuracy of general circulation models and data-constrained assimilation systems in the middle atmosphere at northern hemisphere

  20. Comparing predicted estrogen concentrations with measurements in US waters

    International Nuclear Information System (INIS)

    Kostich, Mitch; Flick, Robert; Martinson, John

    2013-01-01

    The range of exposure rates to the steroidal estrogens estrone (E1), beta-estradiol (E2), estriol (E3), and ethinyl estradiol (EE2) in the aquatic environment was investigated by modeling estrogen introduction via municipal wastewater from sewage plants across the US. Model predictions were compared to published measured concentrations. Predictions were congruent with most of the measurements, but a few measurements of E2 and EE2 exceed those that would be expected from the model, despite very conservative model assumptions of no degradation or in-stream dilution. Although some extreme measurements for EE2 may reflect analytical artifacts, remaining data suggest concentrations of E2 and EE2 may reach twice the 99th percentile predicted from the model. The model and bulk of the measurement data both suggest that cumulative exposure rates to humans are consistently low relative to effect levels, but also suggest that fish exposures to E1, E2, and EE2 sometimes substantially exceed chronic no-effect levels. -- Highlights: •Conservatively modeled steroidal estrogen concentrations in ambient water. •Found reasonable agreement between model and published measurements. •Model and measurements agree that risks to humans are remote. •Model and measurements agree significant questions remain about risk to fish. •Need better understanding of temporal variations and their impact on fish. -- Our model and published measurements for estrogens suggest aquatic exposure rates for humans are below potential effect levels, but fish exposure sometimes exceeds published no-effect levels

  1. Prediction of mirror performance from laboratory measurements

    International Nuclear Information System (INIS)

    Church, E.L.; Takacs, P.Z.

    1989-01-01

    This paper describes and illustrates a simple method of predicting the imaging performance of synchrotron mirrors from laboratory measurements of their profiles. It discusses the important role of the transverse coherence length of the incident radiation, the fractal-like form of the mirror roughness, mirror characterization, and the use of closed-form expressions for the predicted image intensities

  2. Predictions models with neural nets

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2008-01-01

    Full Text Available The contribution addresses basic problems in trend prediction for economic indicators using neural networks. These include the choice of a suitable model and hence the configuration of the neural nets, the choice of the neurons' computational functions, and the way prediction learning is carried out. The contribution presents two basic models that use multilayer neural net structures, together with a way of determining their configuration. A simple rule is postulated for the training period of a neural net in order to obtain the most credible prediction. Experiments were carried out with real data on the evolution of the Kč/Euro exchange rate. The main reason for choosing this time series is its availability over a sufficiently long period. The experiments verify both basic kinds of prediction models with the most frequently used neuron functions. The prediction results achieved are presented in both numerical and graphical form.

  3. Predicting live weight using body measurements in Afar goats in ...

    African Journals Online (AJOL)

    Bheema

    breeds. According to Yakubu et al. (2011), since LW and body measurement parameters vary with breed and environment, breed specific models need to be developed. Therefore, this study was aimed at developing models for predicting LW of Afar goats using body measurements based on data collected in Gulina woreda ...

  4. Comparison of secondhand smoke exposure measures during pregnancy in the development of a clinical prediction model for small-for-gestational-age among non-smoking Chinese pregnant women.

    Science.gov (United States)

    Xie, Chuanbo; Wen, Xiaozhong; Niu, Zhongzheng; Ding, Peng; Liu, Tao; He, Yanhui; Lin, Jianmiao; Yuan, Shixin; Guo, Xiaoling; Jia, Deqin; Chen, Weiqing

    2015-10-01

    To compare predictive values of small-for-gestational-age (SGA) by different measures for secondhand smoke (SHS) exposure during pregnancy and to develop and validate a prediction model for SGA using SHS exposure along with sociodemographic and pregnancy factors. We compared the predictability of different measures of SHS exposure during pregnancy for SGA among 545 Chinese pregnant women, and then used the optimal SHS measure along with other clinically available factors to develop and validate a prediction model for SGA. We fit logistic regression models to predict SGA by single measures of SHS exposure (self-report, serum cotinine and CYP2A6*4) and different combinations (self-report+cotinine, cotinine+CYP2A6*4, self-report+CYP2A6*4 and self-report+cotinine+CYP2A6*4). We found that self-reported SHS exposure alone predicted SGA (area under the receiver operating characteristic curve (AUROC), 0.578) better than the other two single measures (cotinine, 0.547; CYP2A6*4, 0.529) and as accurately as the combined SHS measures (0.545-0.584). The final prediction model, which contained self-reported SHS exposure, prepregnancy body mass index, gestational weight gain velocity during the second and third trimesters, gestational diabetes, gestational hypertension and the third-trimester biparietal diameter Z-score, predicted SGA fairly accurately (AUROC, 0.698). Self-reported SHS exposure at peribirth performs better in predicting SGA than a single measure of serum cotinine at the same time, although repeated biochemical cotinine assessments throughout pregnancy may be optimal. Our simple prediction model is fairly accurate and can potentially be used in routine prenatal care.

  5. A comparison of the predictive performance of three pharmacokinetic models for propofol using measured values obtained during target-controlled infusion

    NARCIS (Netherlands)

    Glen, J. B.; White, M.

    2014-01-01

    We compared the predictive performance of the existing Diprifusor and Schnider models, used for target-controlled infusion of propofol, with a new modification of the Diprifusor model (White) incorporating age and sex covariates. The bias and inaccuracy (precision) of each model were determined

  6. Coupling hydrodynamic modeling and empirical measures of bed mobility to predict the risk of scour and fill of salmon redds in a large regulated river

    Science.gov (United States)

    May, Christine L.; Pryor, Bonnie; Lisle, Thomas E.; Lang, Margaret

    2009-05-01

    In order to assess the risk of scour and fill of spawning redds during floods, an understanding of the relations among river discharge, bed mobility, and scour and fill depths in areas of the streambed heavily utilized by spawning salmon is needed. Our approach coupled numerical flow modeling and empirical data from the Trinity River, California, to quantify spatially explicit zones of differential bed mobility and to identify specific areas where scour and fill is deep enough to impact redd viability. Spatial patterns of bed mobility, based on model-predicted Shields stress, indicate that a zone of full mobility was limited to a central core that expanded with increasing flow strength. The likelihood and maximum depth of measured scour increased with increasing modeled Shields stress. Because redds were preferentially located in coarse substrate in shallow areas with close proximity to the stream banks, they were less likely to become mobilized or to risk deep scour during high-flow events but were more susceptible to sediment deposition.

  7. Model complexity control for hydrologic prediction

    Science.gov (United States)

    Schoups, G.; van de Giesen, N. C.; Savenije, H. H. G.

    2008-12-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed. We compare three model complexity control methods for hydrologic prediction, namely, cross validation (CV), Akaike's information criterion (AIC), and structural risk minimization (SRM). Results show that simulation of water flow using non-physically-based models (polynomials in this case) leads to increasingly better calibration fits as the model complexity (polynomial order) increases. However, prediction uncertainty worsens for complex non-physically-based models because of overfitting of noisy data. Incorporation of physically based constraints into the model (e.g., storage-discharge relationship) effectively bounds prediction uncertainty, even as the number of parameters increases. The conclusion is that overparameterization and equifinality do not lead to a continued increase in prediction uncertainty, as long as models are constrained by such physical principles. Complexity control of hydrologic models reduces parameter equifinality and identifies the simplest model that adequately explains the data, thereby providing a means of hydrologic generalization and classification. SRM is a promising technique for this purpose, as it (1) provides analytic upper bounds on prediction uncertainty, hence avoiding the computational burden of CV, and (2) extends the applicability of classic methods such as AIC to finite data. The main hurdle in applying SRM is the need for an a priori estimate of the complexity of the hydrologic model, as measured by its Vapnik-Chervonenkis (VC) dimension. Further research is needed in this area.
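    A toy version of the complexity-control experiment described above: polynomial fits improve monotonically with order, while an information criterion such as AIC penalizes the added parameters (synthetic data, not the study's hydrologic models):

    ```python
    # Fit polynomials of increasing order and score them with the Gaussian-error
    # AIC; the calibration fit (RSS) always improves, the AIC eventually worsens.
    import numpy as np

    rng = np.random.default_rng(5)
    x = np.linspace(0, 1, 40)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

    for order in range(1, 9):
        coef = np.polyfit(x, y, order)
        rss = np.sum((y - np.polyval(coef, x)) ** 2)
        k = order + 1                                   # number of parameters
        aic = x.size * np.log(rss / x.size) + 2 * k     # AIC for Gaussian errors
        print(f"order {order}: RSS={rss:.3f}  AIC={aic:.1f}")
    ```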

  8. Elastic anisotropy of layered rocks: Ultrasonic measurements of plagioclase-biotite-muscovite (sillimanite) gneiss versus texture-based theoretical predictions (effective media modeling)

    Science.gov (United States)

    Ivankina, T. I.; Zel, I. Yu.; Lokajicek, T.; Kern, H.; Lobanov, K. V.; Zharikov, A. V.

    2017-08-01

    In this paper we present experimental and theoretical studies on a highly anisotropic layered rock sample characterized by alternating layers of biotite and muscovite (retrogressed from sillimanite) and plagioclase and quartz, respectively. We applied two different experimental methods to determine seismic anisotropy at pressures up to 400 MPa: (1) measurement of P- and S-wave phase velocities on a cube in three foliation-related orthogonal directions and (2) measurement of P-wave group velocities on a sphere in 132 directions. The combination of the spatial distribution of P-wave velocities on the sphere (converted to phase velocities) with S-wave velocities in three orthogonal structural directions on the cube made it possible to calculate the bulk elastic moduli of the anisotropic rock sample. On the basis of the crystallographic preferred orientations (CPOs) of major minerals obtained by time-of-flight neutron diffraction, effective media modeling was performed using different inclusion methods and averaging procedures. A nonlinear approximation of the P-wave velocity-pressure relation was applied to estimate the mineral matrix properties and the orientation distribution of microcracks. Comparison of theoretical calculations of the elastic properties of the mineral matrix with those derived from the nonlinear approximation showed discrepancies in elastic moduli and P-wave velocities of about 10%. The observed discrepancies between the effective media modeling and the ultrasonic velocity data are a consequence of the inhomogeneous structure of the sample and the inability to perform a long-wave approximation. Furthermore, small differences were observed between the elastic moduli predicted by the different theoretical models, including specific fabric characteristics such as crystallographic texture, grain shape and layering. It is shown that the bulk elastic anisotropy of the sample is basically controlled by the CPO of biotite and muscovite and their volume

  9. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts, using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior in a wide range of deposition angles.

  10. Feature Selection, Flaring Size and Time-to-Flare Prediction Using Support Vector Regression, and Automated Prediction of Flaring Behavior Based on Spatio-Temporal Measures Using Hidden Markov Models

    Science.gov (United States)

    Al-Ghraibah, Amani

    Solar flares release stored magnetic energy in the form of radiation and can have significant detrimental effects on earth including damage to technological infrastructure. Recent work has considered methods to predict future flare activity on the basis of quantitative measures of the solar magnetic field. Accurate advanced warning of solar flare occurrence is an area of increasing concern and much research is ongoing in this area. Our previous work [111] utilized standard pattern recognition and classification techniques to determine (classify) whether a region is expected to flare within a predictive time window, using a Relevance Vector Machine (RVM) classification method. We extracted 38 features describing the complexity of the photospheric magnetic field; the resulting classification metrics provide the baseline against which we compare our new work. We find a true positive rate (TPR) of 0.8, a true negative rate (TNR) of 0.7, and a true skill score (TSS) of 0.49. This dissertation proposes three basic topics. The first topic is an extension of our previous work [111], where we consider a feature selection method to determine an appropriate feature subset, with cross-validation classification based on a histogram analysis of selected features. Classification using the top five features resulting from this analysis yields better classification accuracies across a large unbalanced dataset. In particular, the feature subsets provide better discrimination of the many regions that flare: we find a TPR of 0.85, a TNR of 0.65 (slightly lower than our previous work), and a TSS of 0.5, an improvement over our previous work. In the second topic, we study the prediction of solar flare size and time-to-flare using support vector regression (SVR). When we consider flaring regions only, we find an average error in estimating flare size of approximately half a GOES class. When we additionally consider non-flaring regions, we find an increased average

  11. Posterior predictive checking of multiple imputation models.

    Science.gov (United States)

    Nguyen, Cattram D; Lee, Katherine J; Carlin, John B

    2015-07-01

    Multiple imputation is gaining popularity as a strategy for handling missing data, but there is a scarcity of tools for checking imputation models, a critical step in model fitting. Posterior predictive checking (PPC) has been recommended as an imputation diagnostic. PPC involves simulating "replicated" data from the posterior predictive distribution of the model under scrutiny. Model fit is assessed by examining whether the analysis from the observed data appears typical of results obtained from the replicates produced by the model. A proposed diagnostic measure is the posterior predictive "p-value", an extreme value of which (i.e., a value close to 0 or 1) suggests a misfit between the model and the data. The aim of this study was to evaluate the performance of the posterior predictive p-value as an imputation diagnostic. Using simulation methods, we deliberately misspecified imputation models to determine whether posterior predictive p-values were effective in identifying these problems. When estimating the regression parameter of interest, we found that more extreme p-values were associated with poorer imputation model performance, although the results highlighted that traditional thresholds for classical p-values do not apply in this context. A shortcoming of the PPC method was its reduced ability to detect misspecified models with increasing amounts of missing data. Despite the limitations of posterior predictive p-values, they appear to have a valuable place in the imputer's toolkit. In addition to automated checking using p-values, we recommend imputers perform graphical checks and examine other summaries of the test quantity distribution. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
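    A stripped-down posterior predictive check in the spirit described above, simulating replicated data from a deliberately misspecified model and computing a posterior predictive p-value; this is a generic PPC sketch, not the imputation-specific procedure of the paper:

    ```python
    # Posterior predictive check: replicate data from the fitted model and compare
    # a discrepancy measure T with its observed value. A p-value near 0 or 1
    # flags misfit between model and data.
    import numpy as np

    rng = np.random.default_rng(8)
    obs = rng.standard_t(df=3, size=100)          # heavier tails than the model assumes

    # Fitted (misspecified) model: Normal, with an approximate flat-prior
    # posterior for the mean and sigma fixed at the sample estimate.
    n, xbar, s = obs.size, obs.mean(), obs.std(ddof=1)

    def T(x):                                     # tail-sensitive discrepancy
        return np.max(np.abs(x))

    reps = []
    for _ in range(2000):
        mu = rng.normal(xbar, s / np.sqrt(n))     # draw from posterior of the mean
        reps.append(T(rng.normal(mu, s, size=n))) # replicated data set
    ppp = np.mean(np.array(reps) >= T(obs))
    print("posterior predictive p-value:", ppp)
    ```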

  12. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  13. Integration of the predictions of two models with dose measurements in a case study of children exposed to the emissions of a lead smelter

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, R.; McKone, T.E.

    2009-03-01

    The predictions of two source-to-dose models are systematically evaluated against observed data collected in a village polluted by a currently operating secondary lead smelter. Both models were built up from several sub-models linked together and run using Monte-Carlo simulation to calculate the distribution of children's blood lead levels attributable to the emissions from the facility. The first model system is composed of the CalTOX model linked to a recoded version of the IEUBK model. This system provides the distribution of the media-specific lead concentrations (air, soil, fruit, vegetables and blood) over the whole area investigated. The second model consists of a statistical model to estimate the lead deposition on the ground, a modified version of the HHRAP model and the same recoded version of the IEUBK model. This system provides an estimate of the exposure concentrations for specific individuals living in the study area. The predictions of the first model system were improved in terms of accuracy and precision by performing a sensitivity analysis and using field data to correct the default value provided for the leaf wet density. However, in this case study, the first model system tends to overestimate the exposure due to exposed vegetables. The second model was tested for nine children with contrasting exposure conditions. It managed to capture the blood lead levels for eight of them. In the last case, exposure of the child through pathways not considered in the model may explain the failure of the model. The interest of this integrated model is that it provides outputs with lower variance than the first model system, but at the moment further tests are necessary to draw conclusions about its accuracy.

  14. Prediction models and development of an easy to use open-access tool for measuring lung function of individuals with motor complete spinal cord injury

    NARCIS (Netherlands)

    Mueller, Gabi; de Groot, Sonja; van der Woude, Lucas H.; Perret, Claudio; Michel, Franz; Hopman, Maria T. E.

    Objective: To develop statistical models to predict lung function and respiratory muscle strength from personal and lesion characteristics of individuals with motor complete spinal cord injury. Design: Cross-sectional, multi-centre cohort study. Subjects: A total of 440 individuals with traumatic,

  15. Predictive accuracy of risk factors and markers: a simulation study of the effect of novel markers on different performance measures for logistic regression models.

    Science.gov (United States)

    Austin, Peter C; Steyerberg, Ewout W

    2013-02-20

    The change in c-statistic is frequently used to summarize the change in predictive accuracy when a novel risk factor is added to an existing logistic regression model. We explored the relationship between the absolute change in the c-statistic, Brier score, generalized R2, and the discrimination slope when a risk factor was added to an existing model in an extensive set of Monte Carlo simulations. The increase in model accuracy due to the inclusion of a novel marker was proportional to both the prevalence of the marker and to the odds ratio relating the marker to the outcome but inversely proportional to the accuracy of the logistic regression model with the marker omitted. We observed greater improvements in model accuracy when the novel risk factor or marker was uncorrelated with the existing predictor variable compared with when the risk factor had a positive correlation with the existing predictor variable. We illustrated these findings by using a study on mortality prediction in patients hospitalized with heart failure. In conclusion, the increase in predictive accuracy by adding a marker should be considered in the context of the accuracy of the initial model. Copyright © 2012 John Wiley & Sons, Ltd.
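    To make the quantity under study concrete, the following sketch computes the change in c-statistic when a novel marker is added to a logistic model, assuming scikit-learn and simulated data (all effect sizes are arbitrary):

    ```python
    # Delta c-statistic (AUC) from adding a marker to an existing logistic model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(11)
    n = 2000
    x_old = rng.normal(size=n)                     # existing predictor
    marker = rng.normal(size=n)                    # novel, uncorrelated marker
    logit = 0.8 * x_old + 0.6 * marker
    y = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated binary outcome

    base = LogisticRegression().fit(x_old[:, None], y)
    full = LogisticRegression().fit(np.column_stack([x_old, marker]), y)
    auc0 = roc_auc_score(y, base.predict_proba(x_old[:, None])[:, 1])
    auc1 = roc_auc_score(y, full.predict_proba(np.column_stack([x_old, marker]))[:, 1])
    print(f"c-statistic: {auc0:.3f} -> {auc1:.3f} (delta = {auc1 - auc0:.3f})")
    ```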

  16. Predicting and measuring fluid responsiveness with echocardiography

    Directory of Open Access Journals (Sweden)

    Ashley Miller

    2016-06-01

    Full Text Available Echocardiography is ideally suited to guide fluid resuscitation in critically ill patients. It can be used to assess fluid responsiveness by looking at the left ventricle, aortic outflow, inferior vena cava and right ventricle. Static measurements and dynamic variables based on heart–lung interactions all combine to predict and measure fluid responsiveness and assess the response to intravenous fluid resuscitation. Thorough knowledge of these variables, the physiology behind them and the pitfalls in their use allows the echocardiographer to confidently assess these patients and, in combination with clinical judgement, manage them appropriately.

  17. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...... values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as tool for modeling the FDM dimensional behavior in a wide range of deposition angles....

  18. Temperature Measurement and Numerical Prediction in Machining Inconel 718.

    Science.gov (United States)

    Díaz-Álvarez, José; Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar

    2017-06-30

    Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials results in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process, avoiding workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and the obtainment of difficult to measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution is recorded in a lathe at different feed rates and cutting speeds. Measurements were used to calibrate a simplified numerical model for prediction of temperature fields during turning.

  19. Temperature Measurement and Numerical Prediction in Machining Inconel 718

    Science.gov (United States)

    Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar

    2017-01-01

    Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials results in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process, avoiding workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and the obtainment of difficult to measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution is recorded in a lathe at different feed rates and cutting speeds. Measurements were used to calibrate a simplified numerical model for prediction of temperature fields during turning. PMID:28665312

  20. Prediction with measurement errors in finite populations.

    Science.gov (United States)

    Singer, Julio M; Stanek, Edward J; Lencina, Viviana B; González, Luz Mery; Li, Wenjun; Martino, Silvina San

    2012-02-01

    We address the problem of selecting the best linear unbiased predictor (BLUP) of the latent value (e.g., serum glucose fasting level) of sample subjects with heteroskedastic measurement errors. Using a simple example, we compare the usual mixed model BLUP to a similar predictor based on a mixed model framed in a finite population (FPMM) setup with two sources of variability, the first of which corresponds to simple random sampling and the second, to heteroskedastic measurement errors. Under this last approach, we show that when measurement errors are subject-specific, the BLUP shrinkage constants are based on a pooled measurement error variance as opposed to the individual ones generally considered for the usual mixed model BLUP. In contrast, when the heteroskedastic measurement errors are measurement condition-specific, the FPMM BLUP involves different shrinkage constants. We also show that in this setup, when measurement errors are subject-specific, the usual mixed model predictor is biased but has a smaller mean squared error than the FPMM BLUP, which points to some difficulties in the interpretation of such predictors.

  1. Measurement and prediction of enteric methane emission

    Science.gov (United States)

    Sejian, Veerasamy; Lal, Rattan; Lakritz, Jeffrey; Ezeji, Thaddeus

    2011-01-01

    The greenhouse gas (GHG) emissions from the agricultural sector account for about 25.5% of total global anthropogenic emissions. While CO2 receives the most attention as a factor relative to global warming, CH4, N2O and chlorofluorocarbons (CFCs) also cause significant radiative forcing. With a relative global warming potential of 25 compared with CO2, CH4 is one of the most important GHGs. This article reviews the prediction models, estimation methodology and strategies for reducing enteric CH4 emissions. Emission of CH4 in ruminants differs between developed and developing countries, depending on factors like animal species, breed, pH of rumen fluid, ratio of acetate:propionate, methanogen population, composition of diet and amount of concentrate fed. Among the ruminant animals, cattle contribute the most to the greenhouse effect through methane emission, followed by sheep, goats and buffaloes, respectively. The estimated CH4 emission rates per head for cattle, buffaloes, sheep and goats in developed countries are 150.7, 137, 21.9 and 13.7 g/animal/day, respectively. However, the estimated rates in developing countries are significantly lower at 95.9 and 13.7 g/animal/day for cattle and sheep, respectively. There exists a strong interest in developing new CH4 prediction models and improving existing ones to identify mitigation strategies for reducing overall CH4 emissions. A synthesis of the available literature suggests that mechanistic models are superior to empirical models in accurately predicting CH4 emission from dairy farms. The latest development in prediction models is the integrated farm system model, a process-based whole-farm simulation technique. Several techniques are used to quantify enteric CH4 emissions, ranging from whole animal chambers to sulfur hexafluoride (SF6) tracer techniques. The latest technology developed to estimate CH4 more accurately is the micrometeorological mass difference technique. Because the conditions under which

  2. Limited predictive value of the IDF definition of metabolic syndrome for the diagnosis of insulin resistance measured with the oral minimal model.

    Science.gov (United States)

    Ghanassia, E; Raynaud de Mauverger, E; Brun, J-F; Fedou, C; Mercier, J

    2009-01-01

    To assess the agreement of the NCEP ATP-III and the IDF definitions of metabolic syndrome and to determine their predictive values for the diagnosis of insulin resistance. For this purpose, we recruited 150 subjects (94 women and 56 men) and determined the presence of metabolic syndrome using the NCEP-ATP III and IDF definitions. We evaluated their insulin sensitivity S(I) using Caumo's oral minimal model after a standardized hyperglucidic breakfast test. Subjects whose S(I) was in the lowest quartile were considered as insulin resistant. We then calculated sensitivity, specificity, positive and negative predictive values of both definitions for the diagnosis of insulin resistance. The prevalence of metabolic syndrome was 37.4% (NCEP-ATP III) and 40% (IDF). Agreement between the two definitions was 96%. Using NCEP-ATP III and IDF criteria for the identification of insulin resistant subjects, sensitivity was 55.3% and 63%, specificity was 68.8% and 67.8%, positive predictive value was 37.5% and 40%, negative predictive value was 81.9% and 84.5%, respectively. Positive predictive value increased with the number of criteria for both definitions. Whatever the definition, the scoring of metabolic syndrome is not a reliable tool for the individual diagnosis of insulin resistance, and is more useful for excluding this diagnosis.
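    The predictive values quoted above follow from sensitivity, specificity and prevalence by Bayes' rule, which makes the abstract's numbers easy to verify; prevalence is 0.25 because insulin resistance was defined as the lowest S(I) quartile:

    ```python
    # PPV and NPV from sensitivity, specificity and prevalence (Bayes' rule),
    # checked against the NCEP-ATP III figures quoted in the abstract.
    def predictive_values(sens, spec, prev):
        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
        return ppv, npv

    ppv, npv = predictive_values(sens=0.553, spec=0.688, prev=0.25)
    print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")   # ~0.37 and ~0.82, close to the
                                                 # reported 37.5% and 81.9%
    ```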

  3. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Genetic models of homosexuality: generating testable predictions

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344

  5. Novel Modeling of Task vs. Rest Brain State Predictability Using a Dynamic Time Warping Spectrum: Comparisons and Contrasts with Other Standard Measures of Brain Dynamics.

    Science.gov (United States)

    Dinov, Martin; Lorenz, Romy; Scott, Gregory; Sharp, David J; Fagerholm, Erik D; Leech, Robert

    2016-01-01

    Dynamic time warping, or DTW, is a powerful and domain-general sequence alignment method for computing a similarity measure. Such dynamic programming-based techniques like DTW are now the backbone and driver of most bioinformatics methods and discoveries. In neuroscience it has had far less use, though this has begun to change. We wanted to explore new ways of applying DTW, not simply as a measure with which to cluster or compare similarity between features but in a conceptually different way. We have used DTW to provide a more interpretable spectral description of the data, compared to standard approaches such as the Fourier and related transforms. The DTW approach and standard discrete Fourier transform (DFT) are assessed against benchmark measures of neural dynamics. These include EEG microstates, EEG avalanches, and the sum squared error (SSE) from a multilayer perceptron (MLP) prediction of the EEG time series, and simultaneously acquired FMRI BOLD signal. We explored the relationships between these variables of interest in an EEG-FMRI dataset acquired during a standard cognitive task, which allowed us to explore how DTW differentially performs in different task settings. We found that despite strong correlations between DTW and DFT-spectra, DTW was a better predictor for almost every measure of brain dynamics. Using these DTW measures, we show that predictability is almost always higher in task than in rest states, which is consistent to other theoretical and empirical findings, providing additional evidence for the utility of the DTW approach.

  6. Novel modeling of task versus rest brain state predictability using a dynamic time warping spectrum: comparisons and contrasts with other standard measures of brain dynamics

    Directory of Open Access Journals (Sweden)

    Martin eDinov

    2016-05-01

    Full Text Available Dynamic time warping, or DTW, is a powerful and domain-general sequence alignment method for computing a similarity measure. Such dynamic programming-based techniques like DTW are now the backbone and driver of most bioinformatics methods and discoveries. In neuroscience it has had far less use, though this has begun to change. We wanted to explore new ways of applying DTW, not simply as a measure with which to cluster or compare similarity between features but in a conceptually different way. We have used DTW to provide a more interpretable spectral description of the data, compared to standard approaches such as the Fourier and related transforms. The DTW approach and standard discrete Fourier transform (DFT are assessed against benchmark measures of neural dynamics. These include EEG microstates, EEG avalanches and the sum squared error (SSE from a multilayer perceptron (MLP prediction of the EEG timeseries, and simultaneously acquired FMRI BOLD signal. We explored the relationships between these variables of interest in an EEG-FMRI dataset acquired during a standard cognitive task, which allowed us to explore how DTW differentially performs in different task settings. We found that despite strong correlations between DTW and DFT-spectra, DTW was a better predictor for almost every measure of brain dynamics. Using these DTW measures, we show that predictability is almost always higher in task than in rest states, which is consistent to other theoretical and empirical findings, providing additional evidence for the utility of the DTW approach.
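    Both records above build on the classic dynamic-programming DTW distance. A compact reference implementation follows (the basic alignment cost only, not the spectral DTW descriptor the authors derive from it):

    ```python
    # Classic O(n*m) dynamic time warping distance between two 1-D sequences.
    import numpy as np

    def dtw_distance(a, b):
        """Minimum cumulative alignment cost between sequences a and b."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j],      # insertion
                                     D[i, j - 1],      # deletion
                                     D[i - 1, j - 1])  # match
        return D[n, m]

    t = np.linspace(0, 2 * np.pi, 60)
    print(dtw_distance(np.sin(t), np.sin(t + 0.5)))   # small despite the time shift
    ```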

  7. The prediction of BRDFs from surface profile measurements

    International Nuclear Information System (INIS)

    Church, E.L.; Takacs, P.Z.; Leonard, T.A.

    1989-01-01

    This paper discusses methods of predicting the BRDF of smooth surfaces from profile measurements of their surface finish. The conversion of optical profile data to the BRDF at the same wavelength is essentially independent of scattering models, while the conversion of mechanical measurements, and wavelength scaling in general, are model dependent. Procedures are illustrated for several surfaces, including two from the recent HeNe BRDF round robin, and results are compared with measured data. Reasonable agreement is found except for surfaces which involve significant scattering from isolated surface defects which are poorly sampled in the profile data

  8. Precise models deserve precise measures

    Directory of Open Access Journals (Sweden)

    Benjamin E. Hilbig

    2010-07-01

    Full Text Available The recognition heuristic (RH), which predicts non-compensatory reliance on recognition in comparative judgments, has attracted much research and some disagreement, at times. Most studies have dealt with whether or under which conditions the RH is truly used in paired comparisons. However, even though the RH is a precise descriptive model, there has been less attention concerning the precision of the methods applied to measure RH-use. In the current work, I provide an overview of different measures of RH-use tailored to the paradigm of natural recognition which has emerged as a preferred way of studying the RH. The measures are compared with respect to different criteria, with particular emphasis on how well they uncover true use of the RH. To this end, both simulations and a re-analysis of empirical data are presented. The results indicate that the adherence rate, which has been pervasively applied to measure RH-use, is a severely biased measure. As an alternative, a recently developed formal measurement model emerges as the recommended candidate for assessment of RH-use.

  9. Development of quantitative structure-activity relationship (QSAR) models to predict the carcinogenic potency of chemicals. II. Using oral slope factor as a measure of carcinogenic potency.

    Science.gov (United States)

    Wang, Nina Ching Yi; Venkatapathy, Raghuraman; Bruce, Robert Mark; Moudgal, Chandrika

    2011-03-01

    The overall risk associated with exposure to a chemical is determined by combining quantitative estimates of exposure to the chemical with its known health effects. For chemicals that cause carcinogenicity, oral slope factors (OSFs) and inhalation unit risks are used to quantitatively estimate the carcinogenic potency, or the risk associated with exposure to the chemical by the oral or inhalation route, respectively. Frequently, there is a lack of animal or human studies in the literature from which to determine OSFs. This study aims to circumvent this problem by developing quantitative structure-activity relationship (QSAR) models to predict the OSFs of chemicals. The OSFs of 70 chemicals based on male/female human, rat, and mouse bioassay data were obtained from the United States Environmental Protection Agency's Integrated Risk Information System (IRIS) database. A global QSAR model that considered all 70 chemicals, as well as species- and/or sex-specific QSARs, were developed in this study. Study results indicate that the species- and sex-specific QSARs (r2 > 0.8, q2 > 0.7) had better predictive ability than the global QSAR developed using data from all species and sexes (r2 = 0.77, q2 = 0.73). The QSARs developed in this study were externally validated and demonstrated reasonable predictive abilities. Copyright © 2010 Elsevier Inc. All rights reserved.

  10. Measuring and modelling concurrency

    Science.gov (United States)

    Sawers, Larry

    2013-01-01

    This article explores three critical topics discussed in the recent debate over concurrency (overlapping sexual partnerships): measurement of the prevalence of concurrency, mathematical modelling of concurrency and HIV epidemic dynamics, and measuring the correlation between HIV and concurrency. The focus of the article is the concurrency hypothesis – the proposition that presumed high prevalence of concurrency explains sub-Saharan Africa's exceptionally high HIV prevalence. Recent surveys using improved questionnaire design show reported concurrency ranging from 0.8% to 7.6% in the region. Even after adjusting for plausible levels of reporting errors, appropriately parameterized sexual network models of HIV epidemics do not generate sustainable epidemic trajectories (avoid epidemic extinction) at levels of concurrency found in recent surveys in sub-Saharan Africa. Efforts to support the concurrency hypothesis with a statistical correlation between HIV incidence and concurrency prevalence are not yet successful. Two decades of efforts to find evidence in support of the concurrency hypothesis have failed to build a convincing case. PMID:23406964

  11. Prediction of preterm birth in multiple pregnancies: development of a multivariable model including cervical length measurement at 16 to 21 weeks' gestation.

    Science.gov (United States)

    van de Mheen, Lidewij; Schuit, Ewoud; Lim, Arianne C; Porath, Martina M; Papatsonis, Dimitri; Erwich, Jan J; van Eyck, Jim; van Oirschot, Charlotte M; Hummel, Piet; Duvekot, Johannes J; Hasaart, Tom H M; Groenwold, Rolf H H; Moons, Karl G M; de Groot, Christianne J M; Bruinse, Hein W; van Pampus, Maria G; Mol, Ben W J

    2014-04-01

    To develop a multivariable prognostic model for the risk of preterm delivery in women with multiple pregnancy that includes cervical length measurement at 16 to 21 weeks' gestation and other variables. We used data from a previous randomized trial. We assessed the association between maternal and pregnancy characteristics, including cervical length measurement at 16 to 21 weeks' gestation, and time to delivery using multivariable Cox regression modelling. Performance of the final model was assessed for the outcomes of preterm and very preterm delivery using calibration and discrimination measures. We studied 507 women, of whom 270 (53%) delivered preterm. The models for preterm and very preterm delivery had a c-index of 0.68 (95% CI 0.63 to 0.72) and 0.68 (95% CI 0.62 to 0.75), respectively, and showed good calibration. In women with a multiple pregnancy, the risk of preterm delivery can be assessed with a multivariable model incorporating cervical length and other predictors.
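    As an illustration of the modelling step described above, the following hedged sketch fits a Cox proportional hazards model with the lifelines library and reads off its concordance index; the column names and toy data are hypothetical, not the trial data.

    ```python
    # Illustrative Cox regression for time to delivery with cervical length as a
    # predictor, evaluated by the c-index (discrimination). Data are invented.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.DataFrame({
        "cervical_length_mm": [38, 42, 29, 35, 40, 31, 44, 27],
        "maternal_age":       [29, 33, 31, 27, 35, 30, 28, 32],
        "weeks_to_delivery":  [37, 38, 33, 36, 38, 32, 39, 30],
        "delivered":          [1, 1, 1, 1, 1, 1, 1, 1],   # event indicator
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="weeks_to_delivery", event_col="delivered")
    print(cph.concordance_index_)   # analogous to the reported c-index
    ```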

  12. Testing and analysis of internal hardwood log defect prediction models

    Science.gov (United States)

    R. Edward. Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  13. Models for predicting fuel consumption in sagebrush-dominated ecosystems

    Science.gov (United States)

    Clinton S. Wright

    2013-01-01

    Fuel consumption predictions are necessary to accurately estimate or model fire effects, including pollutant emissions during wildland fires. Fuel and environmental measurements on a series of operational prescribed fires were used to develop empirical models for predicting fuel consumption in big sagebrush (Artemisia tridentata Nutt.) ecosystems....

  14. Measurement and prediction of solubilities of active pharmaceutical ingredients.

    Science.gov (United States)

    Hahnenkamp, Inga; Graubner, Gitte; Gmehling, Jürgen

    2010-03-30

    Solubilities of 2-acetoxy benzoic acid (aspirin), N-acetyl-p-aminophenol (paracetamol) and 2-(p-isobutylphenyl)propionic acid (ibuprofen) have been measured in various solvents and compared with published and predicted data. For the predictions, the two group contribution models UNIFAC and modified UNIFAC (Dortmund) were used alongside the quantum chemical approach COSMO-RS (Ol). Additionally, the melting temperatures and heats of fusion of 2-acetoxy benzoic acid, N-acetyl-p-aminophenol and 2-(p-isobutylphenyl)propionic acid, required for the calculations, were determined by differential scanning calorimetry. Copyright (c) 2009 Elsevier B.V. All rights reserved.
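    The DSC quantities mentioned above enter solubility prediction through the ideal-solubility (Schröder-van Laar) relation; the sketch below shows that relation alone, with an activity coefficient of one. A real prediction would divide by an activity coefficient from UNIFAC or COSMO-RS. The paracetamol numbers are approximate literature values, used only for illustration.

    ```python
    # Ideal-solubility sketch: ln x = -(dH_fus / R) * (1/T - 1/T_m),
    # i.e., the solubility a solute would have if the solution were ideal.
    import math

    R = 8.314  # gas constant, J/(mol K)

    def ideal_mole_fraction_solubility(dH_fus, T_m, T):
        """dH_fus: heat of fusion (J/mol), T_m: melting temperature (K),
        T: solution temperature (K); activity coefficient assumed 1."""
        return math.exp(-(dH_fus / R) * (1.0 / T - 1.0 / T_m))

    # Approximate values for paracetamol, for illustration only:
    print(ideal_mole_fraction_solubility(dH_fus=27_000, T_m=443.0, T=298.15))
    ```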

  15. Heat exchanger fouling: Prediction, measurement, and mitigation

    Science.gov (United States)

    The US Department of Energy (DOE), Office of Industrial Programs (OIP) sponsors the development of innovative heat exchange systems. Fouling is a major and persistent cost associated with most industrial heat exchangers, wasting an estimated 2.9 Quads per year nationally. To predict and control fouling, three OIP projects are currently exploring heat exchanger fouling in specific industrial applications. A fouling probe has been developed to determine empirically the fouling potential of an industrial gas stream and to derive the fouling thermal resistance. The probe is a hollow metal cylinder capable of measuring the average heat flux along the length of the tube. The local heat flux is also measured by a heat flux meter embedded in the probe wall. The fouling probe has been successfully tested in the laboratory at flue gas temperatures up to 2200 F and local heat fluxes up to 41,000 BTU/hr sq ft. The probe has been field tested at a coal-fired boiler plant, and future tests at a municipal waste incinerator are planned. Two other projects study enhanced heat exchanger tubes, specifically the effect of enhanced surface geometries on tube bundle performance; both examine fouling in liquid heat transfer fluids. Identifying and quantifying the factors affecting fouling in these enhanced heat transfer tubes will lead to techniques to mitigate fouling.
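    As a rough illustration of how a fouling thermal resistance can be derived from probe measurements, the following sketch applies the standard relation R_f(t) = 1/U(t) - 1/U_clean with U = q/dT; the numbers are invented, not from the OIP projects.

    ```python
    # Fouling thermal resistance from measured heat flux and temperature
    # difference: R_f grows as deposits reduce the heat transfer coefficient U.
    def fouling_resistance(q, dT, U_clean):
        """q: measured heat flux (W/m^2), dT: gas-to-surface temperature
        difference (K), U_clean: clean-surface coefficient (W/m^2 K)."""
        U = q / dT
        return 1.0 / U - 1.0 / U_clean

    U_clean = 120.0  # taken from initial clean-probe readings (placeholder)
    for hour, (q, dT) in enumerate([(6000, 50), (5400, 50), (4900, 50)]):
        print(hour, fouling_resistance(q, dT, U_clean))  # m^2 K / W, rising
    ```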

  16. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  17. Standard Model measurements with the ATLAS detector

    Directory of Open Access Journals (Sweden)

    Hassani Samira

    2015-01-01

    Various Standard Model measurements have been performed in proton-proton collisions at a centre-of-mass energy of √s = 7 and 8 TeV using the ATLAS detector at the Large Hadron Collider. A review of a selection of the latest results of electroweak measurements, W/Z production in association with jets, jet physics and soft QCD is given. Measurements are in general found to be well described by the Standard Model predictions.

  18. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed.

  19. A Model of Trusted Measurement Model

    OpenAIRE

    Ma Zhili; Wang Zhihao; Dai Liang; Zhu Xiaoqin

    2017-01-01

    A model of Trusted Measurement supporting behavior measurement based on the trusted connection architecture (TCA) with three entities and three levels is proposed, and a framework to illustrate the model is given. The model synthesizes three trusted measurement dimensions, trusted identity, trusted status and trusted behavior, satisfies the essential requirements of trusted measurement, and unifies the TCA with three entities and three levels.

  20. Mathematical models for indoor radon prediction

    International Nuclear Information System (INIS)

    Malanca, A.; Pessina, V.; Dallara, G.

    1995-01-01

    It is known that the indoor radon (Rn) concentration can be predicted by means of mathematical models. The simplest model relies on two variables only: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model, several environmental parameters are combined into a complex equation; in addition, a correlation between the ventilation rate and the Rn entry rate from the soil is admitted. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentrations were made inside two rooms on the second floor of a building block. One of the rooms had a single-glazed window whereas the other had a double-pane window. During three different experimental protocols, the mean Rn concentration was always higher in the room with the double-glazed window. That behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model.
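    The "simplest model" referred to above can be written as a one-line steady-state balance between the radon source strength and removal by ventilation and decay; the following sketch uses illustrative numbers only and shows why a tighter (double-glazed) room is expected to read higher.

    ```python
    # Steady-state indoor radon: C = S / (V * (ACH + lambda_Rn)), in Bq/m^3.
    RADON_DECAY = 7.56e-3  # 1/h, decay constant of Rn-222

    def indoor_radon(source_Bq_per_h, volume_m3, ach_per_h):
        """Source strength divided by removal via air exchange plus decay."""
        return source_Bq_per_h / (volume_m3 * (ach_per_h + RADON_DECAY))

    # Lower air exchange (tighter, double-glazed room) gives a higher level:
    print(indoor_radon(source_Bq_per_h=2000, volume_m3=50, ach_per_h=0.3))
    print(indoor_radon(source_Bq_per_h=2000, volume_m3=50, ach_per_h=1.0))
    ```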

  1. Predictive Models for Normal Fetal Cardiac Structures.

    Science.gov (United States)

    Krishnan, Anita; Pike, Jodi I; McCarter, Robert; Fulgium, Amanda L; Wilson, Emmanuel; Donofrio, Mary T; Sable, Craig A

    2016-12-01

    Clinicians rely on age- and size-specific measures of cardiac structures to diagnose cardiac disease. No universally accepted normative data exist for fetal cardiac structures, and most fetal cardiac centers do not use the same standards. The aim of this study was to derive predictive models for Z scores for 13 commonly evaluated fetal cardiac structures using a large heterogeneous population of fetuses without structural cardiac defects. The study used archived normal fetal echocardiograms in representative fetuses aged 12 to 39 weeks. Thirteen cardiac dimensions were remeasured by a blinded echocardiographer from digitally stored clips. Studies with inadequate imaging views were excluded. Regression models were developed to relate each dimension to estimated gestational age (EGA) by dates, biparietal diameter, femur length, and estimated fetal weight by the Hadlock formula. Dimension outcomes were transformed (e.g., using the logarithm or square root) as necessary to meet the normality assumption. Higher order terms, quadratic or cubic, were added as needed to improve model fit. Information criteria and adjusted R² values were used to guide final model selection. Each Z-score equation is based on measurements derived from 296 to 414 unique fetuses. EGA yielded the best predictive model for the majority of dimensions; adjusted R² values ranged from 0.72 to 0.893. However, each of the other highly correlated (r > 0.94) biometric parameters was an acceptable surrogate for EGA. In most cases, the best fitting model included squared and cubic terms to introduce curvilinearity. For each dimension, models based on EGA provided the best fit for determining normal measurements of fetal cardiac structures. Nevertheless, other biometric parameters, including femur length, biparietal diameter, and estimated fetal weight, provided results that were nearly as good. Comprehensive Z-score results are available on the basis of highly predictive models derived from gestational age.
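    A minimal sketch of the Z-score construction described above: regress a dimension on EGA with a cubic polynomial and standardize new measurements by the residual standard deviation. The data and the use of a constant residual SD are simplifying assumptions, not the study's fitted equations.

    ```python
    # Cubic regression of a cardiac dimension on gestational age, then Z scores.
    import numpy as np

    ega = np.array([16, 20, 24, 28, 32, 36, 38])           # weeks (placeholder)
    dim = np.array([3.1, 4.6, 6.2, 7.9, 9.3, 10.8, 11.5])  # mm (placeholder)

    coeffs = np.polyfit(ega, dim, deg=3)      # cubic model, as in the paper
    pred = np.polyval(coeffs, ega)
    resid_sd = np.std(dim - pred, ddof=4)     # ddof = number of fitted parameters

    def z_score(measured_mm, ega_weeks):
        """Standardized deviation of a new measurement from the fitted curve."""
        return (measured_mm - np.polyval(coeffs, ega_weeks)) / resid_sd

    print(z_score(8.5, 28.0))
    ```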

  2. Measuring and Predicting Sleep and Performance During Military Operations

    Science.gov (United States)

    2012-08-23

    [Fragmented abstract; only excerpts are recoverable.] The recovered text notes that even relatively brief sleep periods (e.g., a 4-hour daily nap following 90 hours of continuous wakefulness) can restore performance depending on the amount of slow-wave sleep (SWS) obtained, that normal performance levels are restored by recovery sleep periods that include much less sleep time than the amount lost, and describes two-step models relating work periods, sleep periods, and fatigue level.

  3. A new measure-correlate-predict approach for resource assessment

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Landberg, L. [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark); Madsen, H. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    In order to find reasonable candidate sites for wind farms, it is of great importance to be able to calculate the wind resource at potential sites. One way to solve this problem is to measure wind speed and direction at the site and use these measurements to predict the resource. If the measurements at the potential site cover less than, e.g., one year, which will most likely be the case, it is not possible to get a reliable estimate of the long-term resource using this approach. If long-term measurements from, e.g., a nearby meteorological station are available, however, then statistical methods can be used to find a relation between the measurements at the site and at the meteorological station. This relation can then be used to transform the long-term measurements to the potential site, and the resource can be calculated using the transformed measurements. Here, a varying-coefficient model, estimated using local regression, is applied in order to establish a relation between the measurements. The approach is evaluated using measurements from two sites located approximately two kilometres apart, and the results show that the resource in this case can be predicted accurately, although this approach has serious shortcomings. (au)
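    The following is a deliberately simplified measure-correlate-predict sketch using a single global linear relation; the paper's varying-coefficient model generalizes this by letting the coefficients vary, e.g., with wind direction. All wind speeds are placeholders.

    ```python
    # Measure-correlate-predict: fit site ~ reference on the concurrent period,
    # then transform the long-term reference record to the candidate site.
    import numpy as np

    ref_short = np.array([5.1, 6.3, 7.8, 4.2, 9.0, 6.7])    # reference, m/s
    site_short = np.array([5.8, 7.1, 8.9, 4.9, 10.2, 7.5])  # candidate site, m/s

    a, b = np.polyfit(ref_short, site_short, deg=1)  # site ~ a*ref + b

    ref_long = np.array([6.0, 5.5, 8.1, 7.2, 3.9, 9.4])  # long-term reference
    site_long_est = a * ref_long + b
    print(site_long_est.mean())  # basis for a long-term resource estimate
    ```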

  4. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies downward of 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs.

  5. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging procedure.

  6. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  7. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models which are used to predict future incidents based on the history of the incidents in an existing context. Generative models can be useful in planning for persistent Information Surveillance and Reconnaissance (ISR) since they allow an estimation of regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  8. A Prediction Model of the Capillary Pressure J-Function.

    Directory of Open Access Journals (Sweden)

    W S Xu

    The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the water saturation Sw is not well understood. A prediction model is presented based on a capillary pressure model, and the J-function prediction model is a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative.
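    For concreteness, here is a sketch of the (Leverett) J-function computation and a power-law fit of the form J(Sw) = a * Sw**b, which is the functional form the paper argues for; rock and fluid properties are placeholders.

    ```python
    # Leverett J-function, J(Sw) = Pc * sqrt(k/phi) / (sigma * cos(theta)),
    # followed by a power-law fit via log-log least squares.
    import numpy as np

    sigma, theta = 0.072, 0.0   # interfacial tension (N/m), contact angle (rad)
    k, phi = 1e-13, 0.20        # permeability (m^2), porosity

    Sw = np.array([0.2, 0.3, 0.4, 0.6, 0.8])            # water saturation
    Pc = np.array([9e3, 5e3, 3.2e3, 1.8e3, 1.1e3])      # capillary pressure (Pa)

    J = Pc * np.sqrt(k / phi) / (sigma * np.cos(theta))

    b, log_a = np.polyfit(np.log(Sw), np.log(J), deg=1)  # slope, intercept
    print(f"J(Sw) ~ {np.exp(log_a):.3f} * Sw^{b:.3f}")
    ```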

  9. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  10. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating them using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  11. Predictive Model Assessment for Count Data

    National Research Council Canada - National Science Library

    Czado, Claudia; Gneiting, Tilmann; Held, Leonhard

    2007-01-01

    .... In case studies, we critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. Key words: Calibration...

  12. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    The conventional setup uses ordinary differential equations (ODEs) together with a distribution that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and hence assumed deterministic, able to predict the future perfectly. A more realistic approach is to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs) for modeling and forecasting. It is argued that this gives models and predictions which better reflect reality. The SDE approach also offers a more adequate framework for modeling and a number of efficient tools for model building. A software package (CTSM-R) for SDE-based modeling is briefly described.
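    A minimal illustration of the SDE viewpoint: an Ornstein-Uhlenbeck process simulated with the Euler-Maruyama scheme, showing the system noise that the deterministic ODE setup omits. This is not CTSM-R code; that package handles estimation and prediction for such models.

    ```python
    # Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma dW.
    import numpy as np

    theta, mu, sigma = 1.5, 2.0, 0.3   # mean reversion, level, diffusion
    dt, n = 0.01, 1000
    rng = np.random.default_rng(1)

    x = np.empty(n)
    x[0] = 0.0
    for k in range(n - 1):
        dW = rng.normal(scale=np.sqrt(dt))          # Brownian increment
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dW

    print(x[-1])  # one noisy realization; an ODE would give a single fixed path
    ```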

  13. Demonstrating the improvement of predictive maturity of a computational model

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M [Los Alamos National Laboratory; Unal, Cetin [Los Alamos National Laboratory; Atamturktur, Huriye S [CLEMSON UNIV.

    2010-01-01

    We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model to predict multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as a basis to define a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to Beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.

  14. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    This paper presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize predictive railway tamping activities for ballasted track over a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time). Five technical and economic aspects are taken into account to schedule tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality recovery on the track quality after the tamping operation; and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is used as a case study over a time period of two to four years in the proposed maintenance model. The total cost can be reduced by up to 50%.
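    A toy model in the spirit of the MIP described above, written with the PuLP library: binary tamping decisions keep a linearly degrading quality measure under a threshold at minimum tamping effort. The data, the linear degradation, and the permanent recovery effect are drastic simplifications of the paper's five aspects.

    ```python
    # Minimal tamping-scheduling MIP: minimize the number of tamping operations
    # while keeping each section's quality measure below a threshold.
    import pulp

    sections, periods = range(3), range(8)
    q0, rate, recovery, threshold = 1.0, 0.25, 1.2, 2.0  # placeholder parameters

    prob = pulp.LpProblem("tamping", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("tamp", (sections, periods), cat="Binary")

    prob += pulp.lpSum(x[s][t] for s in sections for t in periods)  # cost proxy

    for s in sections:
        for t in periods:
            # quality(s, t) = q0 + rate*t - recovery * (tampings up to t)
            prob += (q0 + rate * t
                     - recovery * pulp.lpSum(x[s][tau] for tau in periods if tau <= t)
                     <= threshold)

    prob.solve()
    print([[int(x[s][t].value()) for t in periods] for s in sections])
    ```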

  15. Predictive ability of boiler production models | Ogundu | Animal ...

    African Journals Online (AJOL)

    The weekly body weight measurements of a growing strain of Ross broilers were used to compare the ability of three mathematical models (multiple linear, quadratic, and exponential) to predict 8-week body weight from early body measurements at weeks I, II, III, IV, V, VI and VII. The results suggest that the three models ...

  16. Predictive models for arteriovenous fistula maturation.

    Science.gov (United States)

    Al Shakarchi, Julien; McGrogan, Damian; Van der Veer, Sabine; Sperrin, Matthew; Inston, Nicholas

    2016-05-07

    Haemodialysis (HD) is a lifeline therapy for patients with end-stage renal disease (ESRD). A critical factor in the survival of renal dialysis patients is the surgical creation of vascular access, and international guidelines recommend arteriovenous fistulas (AVF) as the gold standard of vascular access for haemodialysis. Despite this, AVFs have been associated with high failure rates. Although risk factors for AVF failure have been identified, their utility for predicting AVF failure through predictive models remains unclear. The objectives of this review are to systematically and critically assess the methodology and reporting of studies developing prognostic predictive models for AVF outcomes and assess them for suitability in clinical practice. Electronic databases were searched for studies reporting prognostic predictive models for AVF outcomes. Dual review was conducted to identify studies that reported on the development or validation of a model constructed to predict AVF outcome following creation. Data were extracted on study characteristics, risk predictors, statistical methodology, model type, as well as validation process. We included four different studies reporting five different predictive models. Parameters identified as common to all scoring systems were age and cardiovascular disease. This review has found a small number of predictive models in vascular access. The disparity between each study limits the development of a unified predictive model.

  17. Model Predictive Control Fundamentals | Orukpe | Nigerian Journal ...

    African Journals Online (AJOL)

    Model Predictive Control (MPC) has developed considerably over the last two decades, both within the research control community and in industries. MPC strategy involves the optimization of a performance index with respect to some future control sequence, using predictions of the output signal based on a process model, ...

  18. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optim...

  19. Measured Copper Toxicity to Cnesterodon decemmaculatus (Pisces: Poeciliidae) and Predicted by Biotic Ligand Model in Pilcomayo River Water: A Step for a Cross-Fish-Species Extrapolation

    Directory of Open Access Journals (Sweden)

    María Victoria Casares

    2012-01-01

    In order to determine copper toxicity (LC50) to a local species (Cnesterodon decemmaculatus) in the South American Pilcomayo River water and evaluate a cross-fish-species extrapolation of the Biotic Ligand Model, a 96 h acute copper toxicity test was performed. The dissolved copper concentrations tested were 0.05, 0.19, 0.39, 0.61, 0.73, 1.01, and 1.42 mg Cu L⁻¹. The 96 h Cu LC50 calculated was 0.655 mg L⁻¹ (0.823-0.488). The 96 h Cu LC50 predicted by BLM for Pimephales promelas was 0.722 mg L⁻¹. Analysis of the inter-seasonal variation of the main water quality parameters indicates that a higher protective effect of calcium, magnesium, sodium, sulphate, and chloride is expected during the dry season. The very high load of total suspended solids in this river might be a key factor in determining copper distribution between solid and solution phases. A cross-fish-species extrapolation of copper BLM is valid within the water quality parameters and experimental conditions of this toxicity test.

  20. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  1. Hybrid approaches to physiologic modeling and prediction

    Science.gov (United States)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
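    One hedged way to realize the hybrid idea above is to correct a first-principles prediction with an autoregressive model of its recent residuals; the sketch below uses a stand-in "physical model" and synthetic data, not the paper's thermoregulatory model.

    ```python
    # Hybrid prediction: first-principles model plus an AR(1) residual correction.
    import numpy as np

    def physical_model(t_min):
        """Placeholder first-principles core-temperature model (deg C)."""
        return 37.0 + 0.4 * (1 - np.exp(-t_min / 30.0))

    t = np.arange(0, 120, 5.0)                              # minutes
    measured = physical_model(t) + 0.1 * np.sin(t / 15.0)   # stand-in observations

    resid = measured - physical_model(t)
    phi = np.polyfit(resid[:-1], resid[1:], deg=1)[0]       # AR(1) coefficient

    # 20-minute-ahead (4 steps of 5 min) hybrid prediction from the last sample:
    steps = 4
    hybrid = physical_model(t[-1] + 5 * steps) + (phi ** steps) * resid[-1]
    print(hybrid)
    ```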

  2. Can foot anthropometric measurements predict dynamic plantar surface contact area?

    Directory of Open Access Journals (Sweden)

    Collins Natalie

    2009-10-01

    Background: Previous studies have suggested that increased plantar surface area, associated with pes planus, is a risk factor for the development of lower extremity overuse injuries. The intent of this study was to determine if a single or combination of foot anthropometric measures could be used to predict plantar surface area. Methods: Six foot measurements were collected on 155 subjects (97 females, 58 males, mean age 24.5 ± 3.5 years). The measurements as well as one ratio were entered into a stepwise regression analysis to determine the optimal set of measurements associated with total plantar contact area either including or excluding the toe region. The predicted values were used to calculate plantar surface area and were compared to the actual values obtained dynamically using a pressure sensor platform. Results: A three-variable model was found to describe the relationship between the foot measures/ratio and total plantar contact area (R² = 0.77, p < 0.001); a similar model was obtained when the toe region was excluded (R² = 0.76, p < 0.001). Conclusion: The results of this study indicate that the clinician can use a combination of simple, reliable, and time efficient foot anthropometric measurements to explain over 75% of the plantar surface contact area, either including or excluding the toe region.

  3. Environmental Measurements and Modeling

    Science.gov (United States)

    Environmental measurement is any data collection activity involving the assessment of chemical, physical, or biological factors in the environment which affect human health. Learn more about these programs and tools that aid in environmental decisions.

  4. A prediction model for assessing residential radon concentration in Switzerland

    International Nuclear Information System (INIS)

    Hauri, Dimitri D.; Huss, Anke; Zimmermann, Frank; Kuehni, Claudia E.; Röösli, Martin

    2012-01-01

    Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the nationwide Swiss radon database collected between 1994 and 2004. Of these, 80% randomly selected measurements were used for model development and the remaining 20% for an independent model validation. A multivariable log-linear regression model was fitted and relevant predictors selected according to evidence from the literature, the adjusted R², the Akaike's information criterion (AIC), and the Bayesian information criterion (BIC). The prediction model was evaluated by calculating Spearman rank correlation between measured and predicted values. Additionally, the predicted values were categorised into three categories (50th, 50th-90th and 90th percentile) and compared with measured categories using a weighted Kappa statistic. The most relevant predictors for indoor radon levels were tectonic units and year of construction of the building, followed by soil texture, degree of urbanisation, floor of the building where the measurement was taken and housing type (P-values <0.001 for all). Mean predicted radon values (geometric mean) were 66 Bq/m³ (interquartile range 40-111 Bq/m³) in the lowest exposure category, 126 Bq/m³ (69-215 Bq/m³) in the medium category, and 219 Bq/m³ (108-427 Bq/m³) in the highest category. Spearman correlation between predictions and measurements was 0.45 (95%-CI: 0.44; 0.46) for the development dataset and 0.44 (95%-CI: 0.42; 0.46) for the validation dataset. Kappa coefficients were 0.31 for the development and 0.30 for the validation dataset, respectively. The model explained 20% overall variability (adjusted R²). In conclusion, this residential radon prediction model, based on a large number of measurements, was demonstrated to be valid.

  5. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  6. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered, and the state of the art in computationally tractable methods based on uncertainty tubes is presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  7. Glycated Hemoglobin Measurement and Prediction of Cardiovascular Disease

    Science.gov (United States)

    Angelantonio, Emanuele Di; Gao, Pei; Khan, Hassan; Butterworth, Adam S.; Wormser, David; Kaptoge, Stephen; Kondapally Seshasai, Sreenivasa Rao; Thompson, Alex; Sarwar, Nadeem; Willeit, Peter; Ridker, Paul M; Barr, Elizabeth L.M.; Khaw, Kay-Tee; Psaty, Bruce M.; Brenner, Hermann; Balkau, Beverley; Dekker, Jacqueline M.; Lawlor, Debbie A.; Daimon, Makoto; Willeit, Johann; Njølstad, Inger; Nissinen, Aulikki; Brunner, Eric J.; Kuller, Lewis H.; Price, Jackie F.; Sundström, Johan; Knuiman, Matthew W.; Feskens, Edith J. M.; Verschuren, W. M. M.; Wald, Nicholas; Bakker, Stephan J. L.; Whincup, Peter H.; Ford, Ian; Goldbourt, Uri; Gómez-de-la-Cámara, Agustín; Gallacher, John; Simons, Leon A.; Rosengren, Annika; Sutherland, Susan E.; Björkelund, Cecilia; Blazer, Dan G.; Wassertheil-Smoller, Sylvia; Onat, Altan; Marín Ibañez, Alejandro; Casiglia, Edoardo; Jukema, J. Wouter; Simpson, Lara M.; Giampaoli, Simona; Nordestgaard, Børge G.; Selmer, Randi; Wennberg, Patrik; Kauhanen, Jussi; Salonen, Jukka T.; Dankner, Rachel; Barrett-Connor, Elizabeth; Kavousi, Maryam; Gudnason, Vilmundur; Evans, Denis; Wallace, Robert B.; Cushman, Mary; D’Agostino, Ralph B.; Umans, Jason G.; Kiyohara, Yutaka; Nakagawa, Hidaeki; Sato, Shinichi; Gillum, Richard F.; Folsom, Aaron R.; van der Schouw, Yvonne T.; Moons, Karel G.; Griffin, Simon J.; Sattar, Naveed; Wareham, Nicholas J.; Selvin, Elizabeth; Thompson, Simon G.; Danesh, John

    2015-01-01

    IMPORTANCE: The value of measuring levels of glycated hemoglobin (HbA1c) for the prediction of first cardiovascular events is uncertain. OBJECTIVE: To determine whether adding information on HbA1c values to conventional cardiovascular risk factors is associated with improvement in prediction of cardiovascular disease (CVD) risk. DESIGN, SETTING, AND PARTICIPANTS: Analysis of individual-participant data available from 73 prospective studies involving 294 998 participants without a known history of diabetes mellitus or CVD at the baseline assessment. MAIN OUTCOMES AND MEASURES: Measures of risk discrimination for CVD outcomes (eg, C-index) and reclassification (eg, net reclassification improvement) of participants across predicted 10-year risk categories of low (<5%), intermediate (5% to <7.5%), and high (≥7.5%) risk. RESULTS: During a median follow-up of 9.9 (interquartile range, 7.6-13.2) years, 20 840 incident fatal and nonfatal CVD outcomes (13 237 coronary heart disease and 7603 stroke outcomes) were recorded. In analyses adjusted for several conventional cardiovascular risk factors, there was an approximately J-shaped association between HbA1c values and CVD risk. The association between HbA1c values and CVD risk changed only slightly after adjustment for total cholesterol and triglyceride concentrations or estimated glomerular filtration rate, but this association attenuated somewhat after adjustment for concentrations of high-density lipoprotein cholesterol and C-reactive protein. The C-index for a CVD risk prediction model containing conventional cardiovascular risk factors alone was 0.7434 (95% CI, 0.7350 to 0.7517). The addition of information on HbA1c was associated with a C-index change of 0.0018 (0.0003 to 0.0033) and a net reclassification improvement of 0.42 (−0.63 to 1.48) for the categories of predicted 10-year CVD risk. The improvement provided by HbA1c assessment in prediction of CVD risk was equal to or better than estimated improvements for

  8. Validation of the measure automobile emissions model : a statistical analysis

    Science.gov (United States)

    2000-09-01

    The Mobile Emissions Assessment System for Urban and Regional Evaluation (MEASURE) model provides an external validation capability for the hot stabilized option; the model is one of several new modal emissions models designed to predict hot stabilized e...

  9. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.

  10. A Global Model for Bankruptcy Prediction.

    Science.gov (United States)

    Alaminos, David; Del Castillo, Agustín; Fernández, Manuel Ángel

    2016-01-01

    The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy.

  11. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.

  12. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  13. On the Predictiveness of Single-Field Inflationary Models

    CERN Document Server

    Burgess, C.P.; Trott, Michael

    2014-01-01

    We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for $A_s$, $r$ and $n_s$ are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in prin...

  14. Model plant Key Measurement Points

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    For IAEA safeguards a Key Measurement Point is defined as the location where nuclear material appears in such a form that it may be measured to determine material flow or inventory. This presentation describes in an introductory manner the key measurement points and associated measurements for the model plant used in this training course

  15. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  16. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Predicting and Modeling RNA Architecture

    Science.gov (United States)

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  8. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are better or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.

  9. Lipid measures and cardiovascular disease prediction

    NARCIS (Netherlands)

    van Wijk, D.F.; Stroes, E.S.G.; Kastelein, J.J.P.

    2009-01-01

    Traditional lipid measures are the cornerstone of risk assessment and treatment goals in cardiovascular prevention. Whereas the association between total, LDL-, HDL-cholesterol and cardiovascular disease risk has been generally acknowledged, the rather poor capacity to distinguish between patients

  10. Heuristic Modeling for TRMM Lifetime Predictions

    Science.gov (United States)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

    Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude constrained, Earth orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial-off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use by a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth orbiting spacecraft with tight altitude constraints. It will be particularly useful to such missions as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
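    A sketch of the heuristic method as described: interpolate the maneuver frequency from a small look-up table in ballistic coefficient and solar flux index, then convert to fuel use with a simple engine model. The table values and engine parameters are invented placeholders.

    ```python
    # Look-up-table heuristic: interpolate maneuvers/month, then fuel use.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    ballistic = np.array([50.0, 100.0, 150.0])    # kg/m^2 (placeholder grid)
    f107 = np.array([70.0, 150.0, 230.0])         # solar flux index (placeholder)
    maneuvers_per_month = np.array([[8, 14, 22],  # rows: ballistic coefficient
                                    [5,  9, 15],
                                    [4,  7, 11]], dtype=float)

    freq = RegularGridInterpolator((ballistic, f107), maneuvers_per_month)

    def fuel_per_month(bc, flux, dv_per_maneuver=0.05, kg_per_mps=0.4):
        """Simple engine model: fuel = maneuvers * delta-v * (kg per m/s)."""
        return float(freq((bc, flux))) * dv_per_maneuver * kg_per_mps

    print(fuel_per_month(110.0, 180.0))
    ```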

  11. Quantifying predictive accuracy in survival models.

    Science.gov (United States)

    Lirette, Seth T; Aban, Inmaculada

    2017-12-01

    For time-to-event outcomes in medical research, survival models are the most appropriate to use. Unlike logistic regression models, quantifying the predictive accuracy of these models is not a trivial task. We present the classes of concordance (C) statistics and R² statistics often used to assess the predictive ability of these models. The discussion focuses on Harrell's C, Kent and O'Quigley's R², and Royston and Sauerbrei's R². We present similarities and differences between the statistics, discuss the software options from the most widely used statistical analysis packages, and give a practical example using the Worcester Heart Attack Study dataset.
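
    As one concrete software option (Python's lifelines package, which is not among those discussed above), Harrell's C can be computed in a few lines; a minimal sketch on lifelines' bundled example dataset:

    from lifelines import CoxPHFitter
    from lifelines.datasets import load_rossi
    from lifelines.utils import concordance_index

    df = load_rossi()  # example recidivism time-to-event data
    cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")
    # Harrell's C: concordance between predicted risk ordering and observed times;
    # negate the hazard so that higher scores mean longer predicted survival.
    c = concordance_index(df["week"], -cph.predict_partial_hazard(df), df["arrest"])
    print(round(c, 3))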

  12. Predictive power of nuclear-mass models

    Directory of Open Access Journals (Sweden)

    Yu. A. Litvinov

    2013-12-01

    Full Text Available Ten different theoretical models are tested for their predictive power in the description of nuclear masses. Two sets of experimental masses are used for the test: the older set of 2003 and the newer one of 2011. The predictive power is studied in two regions of nuclei: the global region (Z, N ≥ 8) and the heavy-nuclei region (Z ≥ 82, N ≥ 126). No clear correlation is found between the predictive power of a model and the accuracy of its description of the masses.

  13. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor allocates a much lower share of wealth to stocks compared to a standard investor.

  14. A model for predicting lung cancer response to therapy

    International Nuclear Information System (INIS)

    Seibert, Rebecca M.; Ramsey, Chester R.; Hines, J. Wesley; Kupelian, Patrick A.; Langen, Katja M.; Meeks, Sanford L.; Scaperoth, Daniel D.

    2007-01-01

    Purpose: Volumetric computed tomography (CT) images acquired by image-guided radiation therapy (IGRT) systems can be used to measure tumor response over the course of treatment. Predictive adaptive therapy is a novel treatment technique that uses volumetric IGRT data to actively predict the future tumor response to therapy during the first few weeks of IGRT treatment. The goal of this study was to develop and test a model for predicting lung tumor response during IGRT treatment using serial megavoltage CT (MVCT). Methods and Materials: Tumor responses were measured for 20 lung cancer lesions in 17 patients that were imaged and treated with helical tomotherapy with doses ranging from 2.0 to 2.5 Gy per fraction. Five patients were treated with concurrent chemotherapy, and 1 patient was treated with neoadjuvant chemotherapy. Tumor response to treatment was retrospectively measured by contouring 480 serial MVCT images acquired before treatment. A nonparametric, memory-based locally weighted regression (LWR) model was developed for predicting tumor response using the retrospective tumor response data. This model predicts future tumor volumes and the associated confidence intervals based on limited observations during the first 2 weeks of treatment. The predictive accuracy of the model was tested using a leave-one-out cross-validation technique with the measured tumor responses. Results: The predictive algorithm was used to compare predicted versus measured tumor volume response for all 20 lesions. The average error for the predictions of the final tumor volume was 12%, with the true volumes always bounded by the 95% confidence interval. The greatest model uncertainty occurred near the middle of the course of treatment, in which the tumor response relationships were more complex, the model had less information, and the predictors were more varied. The optimal days for measuring the tumor response on the MVCT images were on elapsed Days 1, 2, 5, 9, 11, 12, 17, and 18 during
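
    The core of a memory-based locally weighted regression predictor is a per-query weighted linear fit over stored observations. The toy Python sketch below shows the mechanics only; the data, bandwidth, and the long extrapolation are invented for illustration, and the study's actual model also produced confidence intervals from its library of measured responses.

    import numpy as np

    def lwr_predict(x_query, x_train, y_train, tau=5.0):
        """Locally weighted linear regression with a Gaussian kernel."""
        w = np.exp(-0.5 * ((x_train - x_query) / tau) ** 2)
        X = np.column_stack([np.ones_like(x_train), x_train])
        # weighted normal equations: (X^T W X) beta = X^T W y
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y_train))
        return beta[0] + beta[1] * x_query

    days = np.array([1.0, 2, 5, 9, 11, 12])                # observation days
    vols = np.array([1.00, 0.98, 0.93, 0.85, 0.82, 0.80])  # relative tumor volume
    print(lwr_predict(35.0, days, vols))                   # end-of-course estimate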

  15. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant, and the traditional statistical prediction methods suffer from low precision and poor interpretability: they can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, combining the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that generate large cargo volumes and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles, extending logistics requirements prediction from purely statistical principles to the domain of spatial and regional economics.

  16. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and the expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models which consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9% respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones.
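
    For the pairwise-comparison (AHP) weighting step, parameter weights are conventionally taken from the principal eigenvector of the comparison matrix. A minimal numpy sketch with an invented 3-parameter judgment matrix (the authors' actual judgments are not given here):

    import numpy as np

    # Hypothetical Saaty-scale comparisons: slope vs land use vs lithology.
    A = np.array([[1.0, 3.0, 5.0],
                  [1 / 3.0, 1.0, 2.0],
                  [1 / 5.0, 1 / 2.0, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                     # normalized AHP weights
    ci = (eigvals.real[k] - len(A)) / (len(A) - 1)   # consistency index
    print(w, ci)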

  17. Predicting fiber refractive index from a measured preform index profile

    Science.gov (United States)

    Kiiveri, P.; Koponen, J.; Harra, J.; Novotny, S.; Husu, H.; Ihalainen, H.; Kokki, T.; Aallos, V.; Kimmelma, O.; Paul, J.

    2018-02-01

    When producing fiber lasers and amplifiers, silica glass compositions consisting of three to six different materials are needed. Due to the varying needs of different applications, a substantial number of different glass compositions are used in active fiber structures. Often it is not possible to find material parameters for theoretical models to estimate the thermal and mechanical properties of those glass compositions. This makes it challenging to predict fiber core refractive index values accurately, even if the preform index profile is measured. Usually the desired fiber refractive index value is achieved experimentally, which is expensive. To overcome this problem, we analyzed statistically the changes between the measured preform and fiber index values. We searched for correlations that would help to predict the Δn-value change from preform to fiber in a situation where the values of the glass material parameters that define the change are unknown. Our index change models were built using the data collected from preforms and fibers made by the Direct Nanoparticle Deposition (DND) technology.

  18. Review of Model Predictions for Extensive Air Showers

    Science.gov (United States)

    Pierog, Tanguy

    In detailed air shower simulations, the uncertainty in the prediction of shower observables for different primary particles and energies is currently dominated by differences between hadronic interaction models. With the results of the first run of the LHC, the difference between post-LHC model predictions has been reduced to the same level as the experimental uncertainties of cosmic ray experiments. At the same time, new types of air shower observables, like the muon production depth, have been measured, adding new constraints on hadronic models. Currently no model is able to reproduce consistently all mass composition measurements possible with the Pierre Auger Observatory, for instance. We review the current model predictions for various particle production observables and their link with air shower observables, and discuss possible future improvements.

  19. Mathematical Model for Prediction of Flexural Strength of Mound ...

    African Journals Online (AJOL)

    The mound soil-cement blended proportions were mathematically optimized by using Scheffé's approach, and the optimization model was developed. A computer program predicting the mix proportion for the model was written. The optimal proportion given by the program was used to prepare beam samples measuring 150mm x 150mm ...

  20. Modelling and prediction of non-stationary optical turbulence behaviour

    NARCIS (Netherlands)

    Doelman, N.J.; Osborn, J.

    2016-01-01

    There is a strong need to model the temporal fluctuations in turbulence parameters, for instance for scheduling, simulation and prediction purposes. This paper aims at modelling the dynamic behaviour of the turbulence coherence length r0, utilising measurement data from the Stereo-SCIDAR instrument

  1. New Approaches for Channel Prediction Based on Sinusoidal Modeling

    Directory of Open Access Journals (Sweden)

    Ekman Torbjörn

    2007-01-01

    Full Text Available Long-range channel prediction is considered to be one of the most important enabling technologies for future wireless communication systems. The prediction of Rayleigh fading channels is studied in the frame of sinusoidal modeling in this paper. A stochastic sinusoidal model to represent a Rayleigh fading channel is proposed. Three different predictors based on the statistical sinusoidal model are proposed. These methods outperform the standard linear predictor (LP) in Monte Carlo simulations, but underperform with real measurement data, probably due to nonstationary model parameters. To mitigate these modeling errors, a joint moving average and sinusoidal (JMAS) prediction model and the associated joint least-squares (LS) predictor are proposed. It combines the sinusoidal model with an LP to handle unmodeled dynamics in the signal. The joint LS predictor outperforms all the other sinusoidal LMMSE predictors in suburban environments, but still performs slightly worse than the standard LP in urban environments.

  2. Electrical resistivity measurement to predict uniaxial compressive ...

    Indian Academy of Sciences (India)

    Electrical resistivity values of 12 different igneous rocks were measured on core samples using a resistivity meter in the laboratory. The resistivity tests were conducted on the samples fully saturated with brine (NaCl solution) and the uniaxial compressive strength (UCS), Brazilian tensile strength, density and.

  3. Predictability of cardiovascular risks by psychological measures

    Czech Academy of Sciences Publication Activity Database

    Šolcová, Iva; Kebza, V.

    2008-01-01

    Vol. 23, No. 1 (2008), pp. 241-241. ISSN 0887-0446 R&D Projects: GA ČR GA406/06/0747 Institutional research plan: CEZ:AV0Z70250504 Keywords: CVD risks * psychological measures * physiological risks Subject RIV: AN - Psychology

  4. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  5. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  6. Validation of measured poleward TEC gradient using multi-station GPS with Artificial Neural Network based TEC model in low latitude region for developing predictive capability of ionospheric scintillation

    Science.gov (United States)

    Sur, D.; Paul, A.

    2017-12-01

    The equatorial ionosphere shows sharp diurnal and latitudinal Total Electron Content (TEC) variations over a major part of the day. The equatorial ionosphere also exhibits intense post-sunset ionospheric irregularities. Accurate prediction of TEC at these low latitudes is not possible with standard ionospheric models. An Artificial Neural Network (ANN) based Vertical TEC (VTEC) model has been designed using TEC data in the low-latitude Indian longitude sector for accurate prediction of VTEC. GPS TEC data from the stations Calcutta (22.58°N, 88.38°E geographic, magnetic dip 32°), Baharampore (24.09°N, 88.25°E geographic, magnetic dip 35°) and Siliguri (26.72°N, 88.39°E geographic, magnetic dip 40°) are used as the training dataset for the period January 2007-September 2011. Poleward VTEC gradients from the northern EIA crest to the region beyond the crest have been calculated from measured VTEC and compared with those obtained from the ANN-based VTEC model. TEC data from Calcutta and Siliguri are used to compute VTEC gradients during April 2013 and August-September 2013. The poleward VTEC gradient computed from the ANN-based TEC model shows good correlation with measured values during the vernal and autumnal equinoxes of the high solar activity period of 2013. A possible correlation between measured poleward TEC gradients and post-sunset scintillations (S4 ≥ 0.4) at the northern crest of the EIA has been observed in this paper. From the observations, a suitable threshold poleward VTEC gradient has been proposed for the possible occurrence of post-sunset scintillations at the northern crest of the EIA along 88°E longitude. Poleward VTEC gradients obtained from the ANN-based VTEC model are used to forecast possible ionospheric scintillation in the post-sunset period using the threshold value. These predicted VTEC gradients can forecast post-sunset L-band scintillation with an accuracy of 67% to 82% in this dynamic low latitude

  7. Electrical resistivity measurement to predict uniaxial compressive ...

    Indian Academy of Sciences (India)

    … and multiple regression analysis. It was seen that the … The correlation coefficients are generally higher for the multiple regression models than that … for each regression. A strong linear relation between UCS and resistivity values was found (figure 2). UCS values increase with increasing resistivity values. The equation of …

  8. Observational attachment theory-based parenting measures predict children's attachment narratives independently from social learning theory-based measures.

    Science.gov (United States)

    Matias, Carla; O'Connor, Thomas G; Futh, Annabel; Scott, Stephen

    2014-01-01

    Conceptually and methodologically distinct models exist for assessing quality of parent-child relationships, but few studies contrast competing models or assess their overlap in predicting developmental outcomes. Using observational methodology, the current study examined the distinctiveness of attachment theory-based and social learning theory-based measures of parenting in predicting two key measures of child adjustment: security of attachment narratives and social acceptance in peer nominations. A total of 113 5-6-year-old children from ethnically diverse families participated. Parent-child relationships were rated using standard paradigms. Measures derived from attachment theory included sensitive responding and mutuality; measures derived from social learning theory included positive attending, directives, and criticism. Child outcomes were independently-rated attachment narrative representations and peer nominations. Results indicated that attachment theory-based and social learning theory-based measures were modestly correlated; nonetheless, parent-child mutuality predicted secure child attachment narratives independently of social learning theory-based measures; in contrast, criticism predicted peer-nominated fighting independently of attachment theory-based measures. In young children, there is some evidence that attachment theory-based measures may be particularly predictive of attachment narratives; however, no single model of measuring parent-child relationships is likely to best predict multiple developmental outcomes. Assessment in research and applied settings may benefit from integration of different theoretical and methodological paradigms.

  9. Model plant key measurement points

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    The key measurement points for the model low enriched fuel fabrication plant are described as well as the measurement methods. These are the measurement points and methods that are used to complete the plant's formal material balance. The purpose of the session is to enable participants to: (1) understand the basis for each key measurement; and (2) understand the importance of each measurement to the overall plant material balance. The feed to the model low enriched uranium fuel fabrication plant is UF6 and the product is finished light water reactor fuel assemblies. The waste discards are solid and liquid wastes. The plant inventory consists of unopened UF6 cylinders, UF6 heels, fuel assemblies, fuel rods, fuel pellets, UO2 powder, U3O8 powder, and various scrap materials. At the key measurement points the total plant material balance (flow and inventory) is measured. The two types of key measurement points, flow and inventory, are described.

  10. Comparison of the corrosion of fasteners embedded in wood measured in outdoor exposure with the predictions from a combined hygrothermal-corrosion model

    Science.gov (United States)

    Samuel L. Zelinka; Samuel V. Glass; Charles R. Boardman; Dominique Derome

    2016-01-01

    This paper examines the accuracy of a recently developed hygrothermal-corrosion model which predicts the corrosion of fasteners embedded in wood, by comparing the results of the model to a one-year field test. Steel and galvanized steel fasteners were embedded into untreated and preservative-treated wood and exposed outdoors while weather data were collected. Qualitatively...

  11. On a model for the prediction of the friction coefficient in mixed lubrication based on a load-sharing concept with measured surface roughness

    NARCIS (Netherlands)

    Akchurin, Aydar; Bosman, Rob; Lugt, Pieter Martin; van Drogen, Mark

    2015-01-01

    A new model was developed for the simulation of the friction coefficient in lubricated sliding line contacts. A half-space-based contact algorithm was linked with a numerical elasto-hydrodynamic lubrication solver using the load-sharing concept. The model was compared with an existing asperity-based

  12. Predictive models for moving contact line flows

    Science.gov (United States)

    Rame, Enrique; Garoff, Stephen

    2003-01-01

    Modeling flows with moving contact lines poses the formidable challenge that the usual assumptions of Newtonian fluid and no-slip condition give rise to a well-known singularity. This singularity prevents one from satisfying the contact angle condition to compute the shape of the fluid-fluid interface, a crucial calculation without which design parameters such as the pressure drop needed to move an immiscible 2-fluid system through a solid matrix cannot be evaluated. Some progress has been made for low Capillary number spreading flows. Combining experimental measurements of fluid-fluid interfaces very near the moving contact line with an analytical expression for the interface shape, we can determine a parameter that forms a boundary condition for the macroscopic interface shape when Ca ≪ 1. This parameter, which plays the role of an "apparent" or macroscopic dynamic contact angle, is shown by the theory to depend on the system geometry through the macroscopic length scale. This theoretically established dependence on geometry allows this parameter to be "transferable" from the geometry of the measurement to any other geometry involving the same material system. Unfortunately this prediction of the theory cannot be tested on Earth.

  13. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance.

  14. Transition Models with Measurement Errors

    OpenAIRE

    Magnac, Thierry; Visser, Michael

    1999-01-01

    In this paper, we estimate a transition model that allows for measurement errors in the data. The measurement errors arise because the survey design is partly retrospective, so that individuals sometimes forget or misclassify their past labor market transitions. The observed data are adjusted for errors via a measurement-error mechanism. The parameters of the distribution of the true data, and those of the measurement-error mechanism are estimated by a two-stage method. The results, based on ...

  15. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models.

  16. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action to anticipate the bankruptcy. In order to detect the potential for bankruptcy, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), it can be shown that the fuzzy k-NN method achieves the best performance with accuracy 77.5%.
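
    A comparison of this kind is easy to prototype. The sketch below benchmarks a few of the listed classifiers with scikit-learn on synthetic stand-in data; fuzzy k-NN and the MLP + regression hybrid are not available in scikit-learn and are omitted, and the accuracies it prints are of course not the paper's.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    models = [("k-NN", KNeighborsClassifier(n_neighbors=5)),
              ("SVM", SVC()),
              ("MLP", MLPClassifier(max_iter=2000, random_state=0))]
    for name, clf in models:
        acc = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold CV accuracy
        print(f"{name}: {acc:.3f}")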

  17. Seismoelectric fluid/porous-medium interface response model and measurements

    NARCIS (Netherlands)

    Schakel, M.D.; Smeulders, D.M.J.; Slob, E.C.; Heller, H.K.J.

    2011-01-01

    Coupled seismic and electromagnetic (EM) wave effects in fluid-saturated porous media have been measured for decades. However, direct comparisons between theoretical seismoelectric wavefields and measurements are scarce. A seismoelectric full-waveform numerical model is developed, which predicts both

  18. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends, all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data indicating that small amounts of
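
    The simple averaging models that the study finds surprisingly strong can be written in a few lines. Below is a hypothetical 15-min baseline that predicts each upcoming slot as the mean of the same time-of-day slot on previous days; the function name and 96-slot daily layout are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def averaging_forecast(history, horizon=4, period=96):
        """Forecast the next `horizon` 15-min slots from the average daily profile.
        Assumes `history` starts at midnight and holds at least one full day."""
        history = np.asarray(history, dtype=float)
        full_days = history[: len(history) // period * period].reshape(-1, period)
        profile = full_days.mean(axis=0)   # mean consumption per time-of-day slot
        next_slot = len(history) % period
        return profile[(next_slot + np.arange(horizon)) % period]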

  19. Are animal models predictive for humans?

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2009-01-01

    Full Text Available It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools, they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics.

  20. Model predictive controller design of hydrocracker reactors

    OpenAIRE

    GÖKÇE, Dila

    2014-01-01

    This study summarizes the design of a Model Predictive Controller (MPC) for the Tüpraş İzmit Refinery Hydrocracker Unit reactors. The hydrocracking process, in which heavy vacuum gasoil is converted into lighter and more valuable products at high temperature and pressure, is described briefly. The controller design description, identification and modeling studies are examined and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulated...

  1. Data Quality Enhanced Prediction Model for Massive Plant Data

    Energy Technology Data Exchange (ETDEWEB)

    Park, Moon-Ghu [Nuclear Engr. Sejong Univ., Seoul (Korea, Republic of); Kang, Seong-Ki [Monitoring and Diagnosis, Suwon (Korea, Republic of); Shin, Hajin [Saint Paul Preparatory Seoul, Seoul (Korea, Republic of)

    2016-10-15

    This paper introduces integrated signal preconditioning and model prediction based mainly on kernel functions. The performance and benefits of the methods are demonstrated by a case study with measurement data from a power plant and transient data from its components. The developed methods will be applied as part of a platform for monitoring massive (big) data, where human experts cannot detect fault behaviors because the measurement sets are too large. Recent extensive efforts toward on-line monitoring implementation indicate that a big surprise in modeling for predicting process variables was the extent of data quality problems in measurement data, especially for data-driven modeling. Bad data used for training will be learned as normal and can significantly degrade prediction performance. For this reason, the quantity and quality of measurement data in the modeling phase need special care. Bad-quality data must be removed from training sets lest the bad data be treated as normal system behavior. This paper presents an integrated structure of a supervisory system for monitoring the performance of plants or sensors. The quality of the data-driven model is improved with a bilateral kernel filter for preprocessing of the noisy data. The prediction module is also based on kernel regression, having the same basis as the noise filter. The model structure is optimized by a grouping process with a nonlinear Hoeffding correlation function.
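
    The two kernel components described (a bilateral filter for preconditioning and kernel regression for prediction) can be sketched generically as below; the Gaussian kernels and parameter names are assumptions for illustration, not the paper's exact formulation.

    import numpy as np

    def bilateral_filter_1d(y, sigma_t=5.0, sigma_y=1.0):
        """Edge-preserving smoother: weights decay with distance in time AND in
        value, so genuine transients are kept while noise is averaged out."""
        t = np.arange(len(y))
        out = np.empty(len(y))
        for i in range(len(y)):
            w = np.exp(-0.5 * ((t - i) / sigma_t) ** 2
                       - 0.5 * ((y - y[i]) / sigma_y) ** 2)
            out[i] = np.sum(w * y) / np.sum(w)
        return out

    def kernel_regression(x_query, x_train, y_train, h=1.0):
        """Nadaraya-Watson prediction with the same Gaussian basis as the filter."""
        w = np.exp(-0.5 * ((x_train - x_query) / h) ** 2)
        return np.sum(w * y_train) / np.sum(w)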

  2. Data Quality Enhanced Prediction Model for Massive Plant Data

    International Nuclear Information System (INIS)

    Park, Moon-Ghu; Kang, Seong-Ki; Shin, Hajin

    2016-01-01

    This paper introduces integrated signal preconditioning and model prediction based mainly on kernel functions. The performance and benefits of the methods are demonstrated by a case study with measurement data from a power plant and transient data from its components. The developed methods will be applied as part of a platform for monitoring massive (big) data, where human experts cannot detect fault behaviors because the measurement sets are too large. Recent extensive efforts toward on-line monitoring implementation indicate that a big surprise in modeling for predicting process variables was the extent of data quality problems in measurement data, especially for data-driven modeling. Bad data used for training will be learned as normal and can significantly degrade prediction performance. For this reason, the quantity and quality of measurement data in the modeling phase need special care. Bad-quality data must be removed from training sets lest the bad data be treated as normal system behavior. This paper presents an integrated structure of a supervisory system for monitoring the performance of plants or sensors. The quality of the data-driven model is improved with a bilateral kernel filter for preprocessing of the noisy data. The prediction module is also based on kernel regression, having the same basis as the noise filter. The model structure is optimized by a grouping process with a nonlinear Hoeffding correlation function.

  3. Elastic anisotropy of layered rocks: Ultrasonic measurements of plagioclase-biotite-muscovite (sillimanite) gneiss versus texture-based theoretical predictions (effective media modeling)

    Czech Academy of Sciences Publication Activity Database

    Ivankina, T. I.; Zel, I. Yu.; Lokajíček, Tomáš; Kern, H.; Lobanov, K. V.; Zharikov, A. V.

    Vol. 712/713, 21 August (2017), pp. 82-94. ISSN 0040-1951 R&D Projects: GA MŠk LH13102 Institutional support: RVO:67985831 Keywords: compositional layering * crystallographic texture * effective elastic properties calculation * neutron diffraction * plagioclase-biotite-muscovite (sillimanite) gneiss * velocity measurements Subject RIV: DB - Geology; Mineralogy OBOR OECD: Geology Impact factor: 2.693, year: 2016

  4. Aerodynamic Temperature Derived from Flux-Profile Measurements and Two-Source Model Predictions over a Cotton Row Crop in an Advective Environment

    Science.gov (United States)

    The surface aerodynamic temperature (SAT) is related to the atmospheric forcing conditions (radiation, wind speed and air temperature) and surface conditions. SAT is required in the bulk surface resistance equation to calculate the rate of sensible heat flux exchange. SAT cannot be measured directly...

  5. Muon polarization in the MEG experiment: predictions and measurements

    International Nuclear Information System (INIS)

    Baldini, A.M.; Dussoni, S.; Galli, L.; Grassi, M.; Sergiampietri, F.; Signorelli, G.; Bao, Y.; Hildebrandt, M.; Kettle, P.R.; Mtchedlishvili, A.; Papa, A.; Ritt, S.; Baracchini, E.; Bemporad, C.; Cei, F.; D'Onofrio, A.; Nicolo, D.; Tenchini, F.; Berg, F.; Hodge, Z.; Rutar, G.; Biasotti, M.; Gatti, F.; Pizzigoni, G.; Boca, G.; De Bari, A.; Cattaneo, P.W.; Rossella, M.; Cavoto, G.; Piredda, G.; Renga, F.; Voena, C.; Chiarello, G.; Panareo, M.; Pepino, A.; Chiri, C.; Grancagnolo, F.; Tassielli, G.F.; De Gerone, M.; Fujii, Y.; Iwamoto, T.; Kaneko, D.; Mori, Toshinori; Nakaura, S.; Nishimura, M.; Ogawa, S.; Ootani, W.; Sawada, R.; Uchiyama, Y.; Yoshida, K.; Graziosi, A.; Ripiccini, E.; Grigoriev, D.N.; Haruyama, T.; Mihara, S.; Nishiguchi, H.; Yamamoto, A.; Ieki, K.; Ignatov, F.; Khazin, B.I.; Popov, A.; Yudin, Yu.V.; Kang, T.I.; Lim, G.M.A.; Molzon, W.; You, Z.; Khomutov, N.; Korenchenko, A.; Kravchuk, N.; Venturini, M.

    2016-01-01

    The MEG experiment makes use of one of the world's most intense low energy muon beams, in order to search for the lepton flavour violating process μ+ → e+γ. We determined the residual beam polarization at the thin stopping target, by measuring the asymmetry of the angular distribution of Michel decay positrons as a function of energy. The initial muon beam polarization at the production is predicted to be P_μ = −1 by the Standard Model (SM) with massless neutrinos. We estimated our residual muon polarization to be P_μ = −0.86 ± 0.02 (stat) +0.05/−0.06 (syst) at the stopping target, which is consistent with the SM predictions when the depolarizing effects occurring during the muon production, propagation and moderation in the target are taken into account. The knowledge of beam polarization is of fundamental importance in order to model the background of our μ+ → e+γ search induced by the muon radiative decay: μ+ → e+ anti-ν_μ ν_e γ. (orig.)

  6. Muon polarization in the MEG experiment: predictions and measurements

    Energy Technology Data Exchange (ETDEWEB)

    Baldini, A.M.; Dussoni, S.; Galli, L.; Grassi, M.; Sergiampietri, F.; Signorelli, G. [Pisa Univ. (Italy); INFN Sezione di Pisa, Pisa (Italy); Bao, Y.; Hildebrandt, M.; Kettle, P.R.; Mtchedlishvili, A.; Papa, A.; Ritt, S. [Paul Scherrer Institut PSI, Villigen (Switzerland); Baracchini, E. [University of Tokyo, ICEPP, Tokyo (Japan); INFN, Laboratori Nazionali di Frascati, Rome (Italy); Bemporad, C.; Cei, F.; D' Onofrio, A.; Nicolo, D.; Tenchini, F. [INFN Sezione di Pisa, Pisa (Italy); Pisa Univ., Dipartimento di Fisica, Pisa (Italy); Berg, F.; Hodge, Z.; Rutar, G. [Paul Scherrer Institut PSI, Villigen (Switzerland); Swiss Federal Institute of Technology ETH, Zurich (Switzerland); Biasotti, M.; Gatti, F.; Pizzigoni, G. [INFN Sezione di Genova, Genova (Italy); Genova Univ., Dipartimento di Fisica, Genova (Italy); Boca, G.; De Bari, A. [INFN Sezione di Pavia, Pavia (Italy); Pavia Univ., Dipartimento di Fisica, Pavia (Italy); Cattaneo, P.W.; Rossella, M. [Pavia Univ. (Italy); INFN Sezione di Pavia, Pavia (Italy); Cavoto, G.; Piredda, G.; Renga, F.; Voena, C. [Univ. ' ' Sapienza' ' , Rome (Italy); INFN Sezione di Roma, Rome (Italy); Chiarello, G.; Panareo, M.; Pepino, A. [INFN Sezione di Lecce, Lecce (Italy); Univ. del Salento, Dipartimento di Matematica e Fisica, Lecce (Italy); Chiri, C.; Grancagnolo, F.; Tassielli, G.F. [Univ. del Salento (Italy); INFN Sezione di Lecce, Lecce (Italy); De Gerone, M. [Genova Univ. (Italy); INFN Sezione di Genova, Genova (Italy); Fujii, Y.; Iwamoto, T.; Kaneko, D.; Mori, Toshinori; Nakaura, S.; Nishimura, M.; Ogawa, S.; Ootani, W.; Sawada, R.; Uchiyama, Y.; Yoshida, K. [University of Tokyo, ICEPP, Tokyo (Japan); Graziosi, A.; Ripiccini, E. [INFN Sezione di Roma, Rome (Italy); Univ. ' ' Sapienza' ' , Dipartimento di Fisica, Rome (Italy); Grigoriev, D.N. [Budker Institute of Nuclear Physics of Siberian Branch of Russian Academy of Sciences, Novosibirsk (Russian Federation); Novosibirsk State Technical University, Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Haruyama, T.; Mihara, S.; Nishiguchi, H.; Yamamoto, A. [KEK, High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan); Ieki, K. [Paul Scherrer Institut PSI, Villigen (Switzerland); University of Tokyo, ICEPP, Tokyo (Japan); Ignatov, F.; Khazin, B.I.; Popov, A.; Yudin, Yu.V. [Budker Institute of Nuclear Physics of Siberian Branch of Russian Academy of Sciences, Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Kang, T.I.; Lim, G.M.A.; Molzon, W.; You, Z. [University of California, Irvine, CA (United States); Khomutov, N.; Korenchenko, A.; Kravchuk, N. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Venturini, M. [Pisa Univ. (Italy); INFN Sezione di Pisa, Pisa (Italy); Scuola Normale Superiore, Pisa (Italy); Collaboration: The MEG Collaboration

    2016-04-15

    The MEG experiment makes use of one of the world's most intense low energy muon beams, in order to search for the lepton flavour violating process μ+ → e+γ. We determined the residual beam polarization at the thin stopping target, by measuring the asymmetry of the angular distribution of Michel decay positrons as a function of energy. The initial muon beam polarization at the production is predicted to be P_μ = −1 by the Standard Model (SM) with massless neutrinos. We estimated our residual muon polarization to be P_μ = −0.86 ± 0.02 (stat) +0.05/−0.06 (syst) at the stopping target, which is consistent with the SM predictions when the depolarizing effects occurring during the muon production, propagation and moderation in the target are taken into account. The knowledge of beam polarization is of fundamental importance in order to model the background of our μ+ → e+γ search induced by the muon radiative decay: μ+ → e+ anti-ν_μ ν_e γ. (orig.)

  7. Prediction and measurement of thermally induced cambial tissue necrosis in tree stems

    Science.gov (United States)

    Joshua L. Jones; Brent W. Webb; Bret W. Butler; Matthew B. Dickinson; Daniel Jimenez; James Reardon; Anthony S. Bova

    2006-01-01

    A model for fire-induced heating in tree stems is linked to a recently reported model for tissue necrosis. The combined model produces cambial tissue necrosis predictions in a tree stem as a function of heating rate, heating time, tree species, and stem diameter. Model accuracy is evaluated by comparison with experimental measurements in two hardwood and two softwood...

  8. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  9. Thermodynamic modeling of activity coefficient and prediction of solubility: Part 1. Predictive models.

    Science.gov (United States)

    Mirmehrabi, Mahmoud; Rohani, Sohrab; Perry, Luisa

    2006-04-01

    A new activity coefficient model was developed from the excess Gibbs free energy in the form G^ex = c·A^a·x_1^b···x_n^b. The constants of the proposed model were considered to be functions of the solute and solvent dielectric constants, Hildebrand solubility parameters, and specific volumes of the solute and solvent molecules. The proposed model obeys the Gibbs-Duhem condition for activity coefficient models. To generalize the model and make it a purely predictive model without any adjustable parameters, its constants were found using the experimental activity coefficients and physical properties of 20 vapor-liquid systems. The predictive capability of the proposed model was tested by calculating the activity coefficients of 41 binary vapor-liquid equilibrium systems, and it showed good agreement with the experimental data in comparison with two other predictive models, the UNIFAC and Hildebrand models. The only data used for the prediction of activity coefficients were the dielectric constants, Hildebrand solubility parameters, and specific volumes of the solute and solvent molecules. Furthermore, the proposed model was used to predict the activity coefficient of an organic compound, stearic acid, whose physical properties were available, in methanol and 2-butanone. The predicted activity coefficient, along with the thermal properties of stearic acid, was used to calculate the solubility of stearic acid in these two solvents and resulted in better agreement with the experimental data compared to the UNIFAC and Hildebrand predictive models.

  10. Use of Repeated Blood Pressure and Cholesterol Measurements to Improve Cardiovascular Disease Risk Prediction

    DEFF Research Database (Denmark)

    Paige, Ellie; Barrett, Jessica; Pennells, Lisa

    2017-01-01

    The added value of incorporating information from repeated blood pressure and cholesterol measurements to predict cardiovascular disease (CVD) risk has not been rigorously assessed. We used data on 191,445 adults from the Emerging Risk Factors Collaboration (38 cohorts from 17 countries with data encompassing 1962-2014) with more than 1 million measurements of systolic blood pressure, total cholesterol, and high-density lipoprotein cholesterol. Over a median 12 years of follow-up, 21,170 CVD events occurred. Risk prediction models using cumulative mean values of repeated measurements and summary … improvements were 0.0369 (95% CI: 0.0303, 0.0436) for the cumulative-means model and 0.0177 (95% CI: 0.0110, 0.0243) for the longitudinal model. In conclusion, incorporating repeated measurements of blood pressure and cholesterol into CVD risk prediction models slightly improves risk prediction.

  11. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have great homogeneity in making predictions, whether for a stock market of a developed country or for a stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
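
    For instance, one member of the tested family, a GARCH(1,1) with a Student-t distribution, can be fitted and used for prediction with Python's arch package; a minimal sketch on simulated returns (the study's actual data are Bovespa and Dow Jones log-returns):

    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(0)
    returns = rng.standard_normal(1000)   # stand-in for percent log-returns

    am = arch_model(returns, vol="GARCH", p=1, q=1, dist="t")
    res = am.fit(disp="off")
    print(res.forecast(horizon=5).variance.iloc[-1])  # 5-step variance forecasts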

  12. A revised prediction model for natural conception.

    Science.gov (United States)

    Bensdorp, Alexandra J; van der Steeg, Jan Willem; Steures, Pieternel; Habbema, J Dik F; Hompes, Peter G A; Bossuyt, Patrick M M; van der Veen, Fulco; Mol, Ben W J; Eijkemans, Marinus J C

    2017-06-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis was to assess whether additional predictors can refine the Hunault model and extend its applicability. Consecutive subfertile couples with unexplained and mild male subfertility presenting in fertility clinics were asked to participate in a prospective cohort study. We constructed a multivariable prediction model with the predictors from the Hunault model and new potential predictors. The primary outcome, natural conception leading to an ongoing pregnancy, was observed in 1053 women of the 5184 included couples (20%). All predictors of the Hunault model were selected into the revised model, plus an additional seven (woman's body mass index, cycle length, basal FSH levels, tubal status, history of previous pregnancies in the current relationship (ongoing pregnancies after natural conception, fertility treatment or miscarriages), semen volume, and semen morphology). Predictions from the revised model seem to concur better with observed pregnancy rates compared with the Hunault model, with a c-statistic of 0.71 (95% CI 0.69 to 0.73) compared with 0.59 (95% CI 0.57 to 0.61). Copyright © 2017. Published by Elsevier Ltd.

  13. Nonlinear Growth Models as Measurement Models: A Second-Order Growth Curve Model for Measuring Potential.

    Science.gov (United States)

    McNeish, Daniel; Dumas, Denis

    2017-01-01

    Recent methodological work has highlighted the promise of nonlinear growth models for addressing substantive questions in the behavioral sciences. In this article, we outline a second-order nonlinear growth model in order to measure a critical notion in development and education: potential. Here, potential is conceptualized as having three components: ability, capacity, and availability. Ability is the amount of skill a student is estimated to have at a given timepoint, capacity is the maximum amount of ability a student is predicted to be able to develop asymptotically, and availability is the difference between capacity and ability at any particular timepoint. We argue that single timepoint measures are typically insufficient for discerning information about potential, and we therefore describe a general framework that incorporates a growth model into the measurement model to capture these three components. Then, we provide an illustrative example using the public-use Early Childhood Longitudinal Study-Kindergarten data set using a Michaelis-Menten growth function (reparameterized from its common application in biochemistry) to demonstrate our proposed model as applied to measuring potential within an educational context. The advantage of this approach compared to currently utilized methods is discussed, as are future directions and limitations.
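
    As an illustration of the growth function itself (only the curve, not the full second-order latent growth model), a Michaelis-Menten trajectory can be fitted to a single student's scores with scipy; the data and starting values below are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(t, capacity, k):
        """Ability at time t: rises toward `capacity`, half-reached at t = k."""
        return capacity * t / (k + t)

    t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # assessment timepoints
    score = np.array([12.0, 21, 33, 41, 46, 50, 52])   # observed ability scores

    (capacity, k), _ = curve_fit(michaelis_menten, t, score, p0=[60.0, 2.0])
    availability = capacity - michaelis_menten(t[-1], capacity, k)
    print(capacity, availability)   # asymptotic capacity and remaining headroom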

  14. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and also macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality in the study was that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit tests and receiver operating characteristics during the examination of the robustness of the predictive power of these factors.

  15. Modelling language evolution: Examples and predictions

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  16. Predicting individual variation in language from infant speech perception measures

    NARCIS (Netherlands)

    Christia, A.; Seidl, A.; Junge, C.; Soderstrom, M.; Hagoort, P.

    2014-01-01

    There are increasing reports that individual variation in behavioral and neurophysiological measures of infant speech processing predicts later language outcomes, and specifically concurrent or subsequent vocabulary size. If such findings are held up under scrutiny, they could both illuminate

  17. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world's population and the change in the climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for the efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.
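
    To show the control principle (not the Barcelona benchmark itself), one MPC step for a toy single-basin sewer model can be posed as a small quadratic program, here with Python's cvxpy; the model, horizon, and limits are invented for illustration.

    import cvxpy as cp
    import numpy as np

    N, x_max, u_max = 12, 100.0, 8.0
    d = 5.0 + 3.0 * np.sin(np.linspace(0, np.pi, N))  # forecast inflow (rain event)

    x = cp.Variable(N + 1)   # stored volume in the basin
    u = cp.Variable(N)       # controlled outflow to treatment
    cons = [x[0] == 20.0]
    for k in range(N):
        cons += [x[k + 1] == x[k] + d[k] - u[k],       # mass balance
                 x[k + 1] >= 0, x[k + 1] <= x_max,     # basin capacity constraint
                 u[k] >= 0, u[k] <= u_max]             # pump/treatment limit
    cost = cp.sum_squares(x[1:]) + 0.1 * cp.sum_squares(cp.diff(u))
    cp.Problem(cp.Minimize(cost), cons).solve()
    print(u.value[0])  # apply the first move, then re-solve at the next step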

  18. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    One of the major challenges with the increase in wind power generation is the uncertain nature of wind speed. So far the uncertainty about wind speed has been presented through probability distributions. Also, the existing models that consider the uncertainty of the wind speed primarily rely on a wind speed distribution whose parameters are known or estimated; here, instead, the parameters are considered as random, with variations that follow probability distributions. The proposed Bayesian predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines' locations in a wind farm. A Bayesian predictive model for a Rayleigh distribution, which has only a single scale parameter, has been proposed. Also closed-form posterior...
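
    A minimal sketch of one conjugate Bayesian treatment of the Rayleigh scale, assuming an inverse-gamma prior on sigma^2; the paper's exact prior choice and data are not reproduced here.

      import numpy as np

      v = np.random.default_rng(1).rayleigh(scale=8.0, size=500)  # wind speeds (m/s)

      a0, b0 = 2.0, 10.0                    # assumed inverse-gamma prior on theta = sigma^2
      a_n = a0 + len(v)                     # posterior shape
      b_n = b0 + 0.5 * np.sum(v**2)         # posterior rate

      def predictive_pdf(x, a, b):
          # Closed-form posterior predictive for a new wind speed x:
          # integral of Rayleigh(x | theta) * InvGamma(theta | a, b) d(theta)
          return x * a * b**a / (b + 0.5 * x**2) ** (a + 1)

      print("posterior mean of sigma^2:", b_n / (a_n - 1))
      print("predictive density at 8 m/s:", predictive_pdf(8.0, a_n, b_n))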

  19. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    Many considerations may favour one staging system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models.
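
    The concordance index at the heart of such an algorithm can be computed directly; the sketch below is a bare-bones version on hypothetical stage assignments (tied pairs are simply skipped), not the authors' full procedure.

      import itertools

      def concordance_index(stage, outcome):
          """Fraction of usable pairs whose ordering by stage matches the outcome."""
          concordant = usable = 0
          for (s1, o1), (s2, o2) in itertools.combinations(zip(stage, outcome), 2):
              if o1 == o2 or s1 == s2:
                  continue                      # skip ties
              usable += 1
              concordant += (s1 < s2) == (o1 < o2)
          return concordant / usable

      old_system = [1, 2, 2, 3, 3, 4]           # hypothetical stage assignments
      new_system = [1, 1, 2, 2, 3, 4]
      outcome    = [0.9, 2.1, 1.8, 3.2, 4.0, 5.5]   # e.g. time-to-event surrogate

      for name, s in [("old", old_system), ("new", new_system)]:
          print(name, "c-index:", round(concordance_index(s, outcome), 3))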

  20. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Giebel, G.; Landberg, L. [Risoe National Lab., Roskilde (Denmark); Madsen, H.; Nielsen, H.A. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better ability to schedule fossil fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data are available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: extended Kalman filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
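
    A minimal sketch of the adaptive MOS idea using plain recursive least squares with a forgetting factor on synthetic forecast/observation pairs; the paper's modified RLS variant and its Kalman formulation are not reproduced.

      import numpy as np

      def rls_update(theta, P, x, y, lam=0.99):
          """One RLS step with forgetting factor lam."""
          x = x.reshape(-1, 1)
          k = P @ x / (lam + x.T @ P @ x)       # gain
          err = y - (x.T @ theta).item()        # error of the corrected NWP output
          theta = theta + k * err
          P = (P - k @ x.T @ P) / lam
          return theta, P

      theta = np.zeros((2, 1))                  # [bias, slope] of the MOS correction
      P = np.eye(2) * 1000.0
      rng = np.random.default_rng(2)
      for t in range(500):
          nwp_wind = rng.uniform(2, 15)         # forecast from the NWP model
          obs_wind = 0.8 * nwp_wind + 1.2 + rng.normal(0, 0.5)   # local measurement
          theta, P = rls_update(theta, P, np.array([1.0, nwp_wind]), obs_wind)
      print("estimated MOS correction:", theta.ravel())   # approx [1.2, 0.8]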

  1. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  2. Predictive modeling in homogeneous catalysis: a tutorial

    NARCIS (Netherlands)

    Maldonado, A.G.; Rothenberg, G.

    2010-01-01

    Predictive modeling has become a practical research tool in homogeneous catalysis. It can help to pinpoint ‘good regions’ in the catalyst space, narrowing the search for the optimal catalyst for a given reaction. Just like any other new idea, in silico catalyst optimization is accepted by some

  3. Model predictive control of smart microgrids

    DEFF Research Database (Denmark)

    Hu, Jiefeng; Zhu, Jianguo; Guerrero, Josep M.

    2014-01-01

    required to realise high performance of distributed generation, and will realise innovative control techniques utilising model predictive control (MPC) to assist in coordinating the plethora of generation and load combinations, thus enabling the effective exploitation of clean renewable energy sources...

  4. Feedback model predictive control by randomized algorithms

    NARCIS (Netherlands)

    Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep

    2001-01-01

    In this paper we present a further development of an algorithm for stochastic disturbance rejection in model predictive control with input constraints based on randomized algorithms. The algorithm presented in our work can solve the problem of stochastic disturbance rejection approximately but with

  5. A Robustly Stabilizing Model Predictive Control Algorithm

    Science.gov (United States)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  6. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three level hierarchical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...

  7. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations ...
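
    A minimal sketch of a regularized l2 FIR-based predictive controller posed as a quadratic program (using cvxpy), assuming a known impulse response, zero initial conditions and illustrative input and input-rate limits; this is not the authors' exact formulation.

      import numpy as np
      import cvxpy as cp

      h = 0.2 * 0.7 ** np.arange(30)        # assumed FIR (impulse response) coefficients
      N = 10                                # prediction/control horizon

      # Prediction matrix: y_k depends on current and earlier moves within the horizon.
      G = np.zeros((N, N))
      for i in range(N):
          for j in range(i + 1):
              G[i, j] = h[i - j]

      r = np.ones(N)                        # setpoint trajectory
      u = cp.Variable(N)
      du = cp.diff(u)                       # input-rate moves
      cost = cp.sum_squares(G @ u - r) + 0.1 * cp.sum_squares(du)   # l2 regularization
      constraints = [u >= 0, u <= 2, cp.abs(du) <= 0.3]             # input & rate limits
      cp.Problem(cp.Minimize(cost), constraints).solve()
      print("first move to apply (receding horizon):", u.value[0])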

  8. Age prediction on the basis of brain anatomical measures.

    Science.gov (United States)

    Valizadeh, S A; Hänggi, J; Mérillat, S; Jäncke, L

    2017-02-01

    In this study, we examined whether age can be predicted on the basis of different anatomical features obtained from a large sample of healthy subjects (n = 3,144). From this sample we obtained different anatomical feature sets: (1) 11 larger brain regions (including cortical volume, thickness, area, subcortical volume, cerebellar volume, etc.), (2) 148 cortical compartmental thickness measures, (3) 148 cortical compartmental area measures, (4) 148 cortical compartmental volume measures, and (5) a combination of the above-mentioned measures. With these anatomical feature sets, we predicted age using 6 statistical techniques (multiple linear regression, ridge regression, neural network, k-nearest neighbours, support vector machine, and random forest). We obtained very good age prediction accuracies, with the highest accuracy being R² = 0.84 (prediction on the basis of the neural network and support vector machine approaches for the entire data set) and the lowest being R² = 0.40 (prediction on the basis of k-nearest neighbours for cortical surface measures). Interestingly, the easy-to-calculate multiple linear regression approach with the 11 large brain compartments resulted in a very good prediction accuracy (R² = 0.73), whereas the application of the neural network approach for this data set revealed very good age prediction accuracy (R² = 0.83). Taken together, these results demonstrate that age can be predicted well on the basis of anatomical measures. The neural network approach turned out to be the approach with the best results. In addition, it was evident that good prediction accuracies can be achieved using a small but nevertheless age-representative dataset of brain features. Hum Brain Mapp 38:997-1008, 2017. © 2016 Wiley Periodicals, Inc.
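
    The comparison can be reproduced in outline with scikit-learn; the sketch below runs the study's six techniques on synthetic stand-ins for the 11 large-compartment features, so the data and scores are illustrative only.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.linear_model import LinearRegression, Ridge
      from sklearn.neighbors import KNeighborsRegressor
      from sklearn.svm import SVR
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(3)
      X = rng.normal(size=(500, 11))            # stand-in for 11 large-compartment features
      age = 50 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 5, 500)

      models = {
          "linear": LinearRegression(),
          "ridge": Ridge(alpha=1.0),
          "knn": KNeighborsRegressor(n_neighbors=5),
          "svr": SVR(C=10.0),
          "forest": RandomForestRegressor(n_estimators=200, random_state=0),
          "mlp": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
      }
      for name, m in models.items():
          r2 = cross_val_score(m, X, age, cv=5, scoring="r2").mean()
          print(f"{name}: cross-validated R^2 = {r2:.2f}")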

  9. EPOS1 - a multiparameter measuring system to earthquake prediction research

    International Nuclear Information System (INIS)

    Streil, T.; Oeser, V.; Heinicke, J.; Koch, U.; Wiegand, J.

    1998-01-01

    The approach to earthquake prediction by geophysical, geochemical and hydrological measurements is a long and winding road. Nevertheless, the results show progress in that field (e.g. Kobe). This progress is also a result of a new generation of measuring equipment. SARAD has developed a versatile measuring system (EPOS1) based on experience and recent results from different research groups. It is able to record selected parameters suitable for earthquake prediction research. A micro-computer system handles data exchange, data management and control. It is connected to a modular sensor system. Sensor modules can be selected according to the actual needs at the measuring site. (author)

  10. Prediction of type A behaviour: A structural equation model

    Directory of Open Access Journals (Sweden)

    René van Wyk

    2009-05-01

    Full Text Available The predictability of Type A behaviour was measured in a sample of 375 professionals with a shortened version of the Jenkins Activity Survey (JAS). Two structural equation models were constructed with the Type A behaviour achievement sub-scale and global (total) Type A as the predictor variables. The indices showed a reasonable-to-promising fit with the data. Type A achievement was reasonably predicted by service-career orientation, internal locus of control, power self-concept and economic innovation. Type A global was also predicted by internal locus of control, power self-concept and the entrepreneurial attitude of achievement and personal control.

  11. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone some verification or validation method, or no verification or validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  12. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Full Text Available Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees was entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] provided for examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). The Cariogram was the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavourable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases – the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.

  13. Preprocedural Prediction Model for Contrast-Induced Nephropathy Patients.

    Science.gov (United States)

    Yin, Wen-Jun; Yi, Yi-Hu; Guan, Xiao-Feng; Zhou, Ling-Yun; Wang, Jiang-Lin; Li, Dai-Yang; Zuo, Xiao-Cong

    2017-02-03

    Several models have been developed for prediction of contrast-induced nephropathy (CIN); however, they only contain patients receiving intra-arterial contrast media for coronary angiographic procedures, which represent a small proportion of all contrast procedures. In addition, most of them evaluate radiological interventional procedure-related variables. It is therefore necessary to develop a model for prediction of CIN before radiological procedures among patients administered contrast media. A total of 8800 patients undergoing contrast administration were randomly assigned in a 4:1 ratio to development and validation data sets. CIN was defined as an increase of 25% and/or 0.5 mg/dL in serum creatinine within 72 hours above the baseline value. Preprocedural clinical variables were used to develop the prediction model from the training data set by the machine learning method of random forest, and 5-fold cross-validation was used to evaluate the prediction accuracy of the model. Finally, we tested this model in the validation data set. The incidence of CIN was 13.38%. We built a prediction model with 13 preprocedural variables selected from 83 variables. The model obtained an area under the receiver-operating characteristic (ROC) curve (AUC) of 0.907 and gave a prediction accuracy of 80.8%, sensitivity of 82.7%, specificity of 78.8%, and Matthews correlation coefficient of 61.5%. For the first time, 3 new factors are included in the model: decreased sodium concentration, the INR value, and the preprocedural glucose level. The newly established model shows excellent predictive ability for CIN development and thereby provides preventative measures for CIN. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
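
    A minimal sketch of the pipeline described above (4:1 split, random forest on preprocedural variables, 5-fold cross-validated AUC) on synthetic data; the 13 features and the outcome construction here are assumptions.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split, cross_val_score

      rng = np.random.default_rng(4)
      X = rng.normal(size=(8800, 13))           # stand-ins for e.g. sodium, INR, glucose
      risk = X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2]
      y = (risk + rng.normal(0, 1, 8800)) > np.quantile(risk, 0.866)  # ~13% CIN rate

      # 4:1 development/validation split, mirroring the study design.
      X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
      clf = RandomForestClassifier(n_estimators=300, random_state=0)
      print("5-fold CV AUC:",
            cross_val_score(clf, X_dev, y_dev, cv=5, scoring="roc_auc").mean())
      clf.fit(X_dev, y_dev)
      print("validation accuracy:", clf.score(X_val, y_val))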

  14. Link Prediction via Sparse Gaussian Graphical Model

    Directory of Open Access Journals (Sweden)

    Liangliang Zhang

    2016-01-01

    Full Text Available Link prediction is an important task in complex network analysis. Traditional link prediction methods are limited by network topology and a lack of node property information, which makes predicting links challenging. In this study, we address link prediction using a sparse Gaussian graphical model and demonstrate its theoretical and practical effectiveness. In theory, link prediction is executed by estimating the inverse covariance matrix of samples to overcome information limits. The proposed method was evaluated with four small and four large real-world datasets. The experimental results show that the area under the curve (AUC) value obtained by the proposed method improved by an average of 3% on the small datasets and 12.5% on the large datasets compared to 13 mainstream similarity methods. This method outperforms the baseline method, and the prediction accuracy is superior to mainstream methods when using only 80% of the training set. The method also provides significantly higher AUC values when using only 60% of the training set in the Dolphin and Taro datasets. Furthermore, the error rate of the proposed method demonstrates superior performance with all datasets compared to mainstream methods.
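
    A minimal sketch of the core step: estimate a sparse precision (inverse covariance) matrix with the graphical lasso and read predicted links off its off-diagonal support. The node "samples", the penalty and the threshold are illustrative assumptions.

      import numpy as np
      from sklearn.covariance import GraphicalLasso

      rng = np.random.default_rng(5)
      true_prec = np.eye(5)
      true_prec[0, 1] = true_prec[1, 0] = 0.4   # one true link between nodes 0 and 1
      cov = np.linalg.inv(true_prec)
      X = rng.multivariate_normal(np.zeros(5), cov, size=400)   # samples per node

      model = GraphicalLasso(alpha=0.05).fit(X)
      prec = model.precision_
      links = [(i, j) for i in range(5) for j in range(i + 1, 5)
               if abs(prec[i, j]) > 1e-3]       # nonzero entries = predicted links
      print("predicted links:", links)          # expected to recover (0, 1)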

  15. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  16. Models of Credit Risk Measurement

    OpenAIRE

    Hagiu Alina

    2011-01-01

    Credit risk is defined as the risk of financial loss caused by the failure of a counterparty. According to statistics, for financial institutions credit risk is much more important than market risk: reduced diversification of credit risk is the main cause of bank failures. Only recently did the banking industry begin to measure credit risk in the context of a portfolio, along with the development of risk management that started with value-at-risk (VaR) models. Once measured, credit risk can be diversif...

  17. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
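
    A minimal sketch of the AR-based feature described above: fit a 5th-order autoregressive model to an SEMG window and take the mean magnitude of the AR poles. The signal below is white noise standing in for real surface-EMG data; predicting Rmax from the feature is not reproduced.

      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg

      rng = np.random.default_rng(6)
      semg = rng.normal(size=1024)              # stand-in for one SEMG window

      fit = AutoReg(semg, lags=5).fit()         # 5th-order AR model, as in the study
      ar = np.asarray(fit.params)[1:]           # AR coefficients (intercept excluded)
      poles = np.roots(np.concatenate(([1.0], -ar)))
      print("mean magnitude of AR poles:", np.abs(poles).mean())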

  18. Electroencephalographic connectivity measures predict learning of a motor sequencing task.

    Science.gov (United States)

    Wu, Jennifer; Knapp, Franziska; Cramer, Steven C; Srinivasan, Ramesh

    2018-02-01

    Individuals vary significantly with respect to rate and degree of improvement with motor practice. While the regions that underlie motor learning have been well described, neurophysiological factors underlying differences in response to motor practice are less well understood. The present study examined both resting-state and event-related EEG coherence measures of connectivity as predictors of response to motor practice on a motor sequencing task using the dominant hand. Thirty-two healthy young right-handed participants underwent resting EEG before motor practice. Response to practice was evaluated both across the single session of motor practice and 24 h later at a retention test of short-term motor learning. Behaviorally, the group demonstrated statistically significant gains both in single-session "motor improvement" and across-session "motor learning." A resting-state measure of whole brain coherence with primary motor cortex (M1) at baseline robustly predicted subsequent motor improvement (validated R² = 0.55) and motor learning (validated R² = 0.68) in separate partial least-squares regression models. Specifically, greater M1 coherence with left frontal-premotor cortex (PMC) at baseline was characteristic of individuals likely to demonstrate greater gains in both motor improvement and motor learning. Analysis of event-related coherence with respect to movement found the largest changes occurring in areas implicated in planning and preparation of movement, including PMC and frontal cortices. While event-related coherence provided a stronger prediction of practice-induced motor improvement (validated R² = 0.73), it did not predict the degree of motor learning (validated R² = 0.16). These results indicate that connectivity in the resting state is a better predictor of consolidated learning of motor skills. NEW & NOTEWORTHY Differences in response to motor training have significant societal implications across a lifetime of motor skill practice. By

  19. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to

  20. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  1. Petrophysical properties of greensand as predicted from NMR measurements

    DEFF Research Database (Denmark)

    Hossain, Zakir; Grattoni, Carlos A.; Solymar, Mikael

    2011-01-01

    Nuclear magnetic resonance (NMR) is a useful tool in reservoir evaluation. The objective of this study is to predict petrophysical properties from NMR T2 distributions. A series of laboratory experiments including core analysis, capillary pressure measurements, NMR T2 measurements and i...

  2. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

    Full Text Available We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) We find that model color–magnitude diagrams cannot be reliably used to infer masses as they do not accurately reproduce the colors of ultracool dwarfs of known mass. (2) Effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra by 100–300 K. (3) For the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from Lbol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology) for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy are needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.

  3. Three-model ensemble wind prediction in southern Italy

    Directory of Open Access Journals (Sweden)

    R. C. Torcasio

    2016-03-01

    Full Text Available Quality of wind prediction is of great importance since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at National Research Council – Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30 %, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecasts) for the surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.

  4. Three-model ensemble wind prediction in southern Italy

    Science.gov (United States)

    Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo

    2016-03-01

    Quality of wind prediction is of great importance since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30 %, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecast) for the surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
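
    A minimal sketch of the TME idea: bias-correct each model on a training period, then average. The forecasts below are synthetic stand-ins for RAMS, BOLAM and MOLOCH output, with independent errors so the ensemble mean comes out best, as in the paper.

      import numpy as np

      rng = np.random.default_rng(7)
      obs = rng.uniform(0, 12, 1000)                        # observed wind speed
      fc = {"RAMS":   obs + rng.normal(0.5, 1.8, 1000),     # synthetic forecasts
            "BOLAM":  obs + rng.normal(-0.3, 2.0, 1000),
            "MOLOCH": obs + rng.normal(0.2, 1.9, 1000)}

      rmse = lambda f: np.sqrt(np.mean((f - obs) ** 2))
      for name, f in fc.items():
          print(name, "RMSE:", round(rmse(f), 2))

      # Remove each model's mean bias, then take the ensemble mean (TME).
      unbiased = {k: f - (f - obs).mean() for k, f in fc.items()}
      tme = np.mean(list(unbiased.values()), axis=0)
      print("TME RMSE:", round(rmse(tme), 2))               # lowest of the four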

  5. Effect of length of measurement period on accuracy of predicted annual heating energy consumption of buildings

    International Nuclear Information System (INIS)

    Cho, Sung-Hwan; Kim, Won-Tae; Tae, Choon-Soeb; Zaheeruddin, M.

    2004-01-01

    This study examined temperature-dependent regression models of energy consumption as a function of the length of the measurement period. The methodology applied was to construct linear regression models of daily energy consumption from data sets spanning 1 day to 3 months, and to compare the annual heating energy consumption predicted by these models with the actual annual heating energy consumption. A commercial building in Daejon was selected, and the energy consumption was measured over a heating season. The results from the investigation show that predictions based on a regression model built from a single day of measurements could be in error by 100% or more. The prediction error decreased to 30% when 1 week of data was used to build the regression model. Likewise, the regression model based on 3 months of measured data predicted the annual energy consumption within 6% of the measured energy consumption. These analyses show that the length of the measurement period has a significant impact on the accuracy of the predicted annual energy consumption of buildings
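
    A minimal sketch of the study design: fit a daily temperature-dependent regression on the first k days, then extrapolate the annual total. The weather profile, base temperature and resulting error magnitudes are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(8)
      t_out = 10 + 8 * np.sin(2 * np.pi * (np.arange(365) - 100) / 365)   # daily temp
      true_e = np.clip(18.0 - t_out, 0, None) * 50 + rng.normal(0, 40, 365)

      def annual_prediction(days):
          """Regress daily energy on outdoor temperature using `days` of data."""
          X = np.column_stack([np.ones(days), t_out[:days]])
          a, b = np.linalg.lstsq(X, true_e[:days], rcond=None)[0]
          return np.clip(a + b * t_out, 0, None).sum()   # extrapolate to the year

      actual = true_e.sum()
      for days in (1, 7, 30, 90):
          err = 100 * abs(annual_prediction(days) - actual) / actual
          print(f"{days:3d} days of data -> annual prediction error {err:5.1f} %")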

  6. A stochastic model for quantum measurement

    International Nuclear Information System (INIS)

    Budiyono, Agung

    2013-01-01

    We develop a statistical model of microscopic stochastic deviation from classical mechanics based on a stochastic process with a transition probability that is assumed to be given by an exponential distribution of infinitesimal stationary action. We apply the statistical model to stochastically modify a classical mechanical model for the measurement of physical quantities reproducing the prediction of quantum mechanics. The system+apparatus always has a definite configuration at all times, as in classical mechanics, fluctuating randomly following a continuous trajectory. On the other hand, the wavefunction and quantum mechanical Hermitian operator corresponding to the physical quantity arise formally as artificial mathematical constructs. During a single measurement, the wavefunction of the whole system+apparatus evolves according to a Schrödinger equation and the configuration of the apparatus acts as the pointer of the measurement so that there is no wavefunction collapse. We will also show that while the outcome of each single measurement event does not reveal the actual value of the physical quantity prior to measurement, its average in an ensemble of identical measurements is equal to the average of the actual value of the physical quantity prior to measurement over the distribution of the configuration of the system. (paper)

  7. Division Quilts: A Measurement Model

    Science.gov (United States)

    Pratt, Sarah S.; Lupton, Tina M.; Richardson, Kerri

    2015-01-01

    As teachers seek activities to assist students in understanding division as more than just the algorithm, they find many examples of division as fair sharing. However, teachers have few activities to engage students in a quotative (measurement) model of division. Efraim Fischbein and his colleagues (1985) defined two types of whole-number…

  8. A deep auto-encoder model for gene expression prediction.

    Science.gov (United States)

    Xie, Rui; Wen, Jia; Quitadamo, Andrew; Cheng, Jianlin; Shi, Xinghua

    2017-11-17

    Gene expression is a key intermediate level through which genotypes lead to a particular trait. Gene expression is affected by various factors, including the genotypes of genetic variants. With the aim of delineating the genetic impact on gene expression, we build a deep auto-encoder model to assess how well genetic variants can predict gene expression changes. This new deep learning model is a regression-based predictive model based on the MultiLayer Perceptron and Stacked Denoising Auto-encoder (MLP-SAE). The model is trained using a stacked denoising auto-encoder for feature selection and a multilayer perceptron framework for backpropagation. We further improve the model by introducing dropout to prevent overfitting and improve performance. To demonstrate the usage of this model, we apply MLP-SAE to a real genomic dataset with genotypes and gene expression profiles measured in yeast. Our results show that the MLP-SAE model with dropout outperforms other models, including Lasso, Random Forests and the MLP-SAE model without dropout. Using the MLP-SAE model with dropout, we show that gene expression quantifications predicted by the model solely on the basis of genotypes align well with true gene expression patterns. We provide a deep auto-encoder model for predicting gene expression from SNP genotypes. This study demonstrates that deep learning is appropriate for tackling another genomic problem, i.e., building predictive models to understand genotypes' contribution to gene expression. With the emerging availability of richer genomic data, we anticipate that deep learning models will play a bigger role in modeling and interpreting genomics.
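
    A minimal PyTorch sketch of the MLP-SAE idea: denoising auto-encoder pretraining on genotypes, followed by a dropout MLP regression head for expression. Dimensions, data and hyperparameters are illustrative stand-ins, not the paper's architecture.

      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      geno = torch.randint(0, 3, (800, 100)).float()        # SNP genotypes {0,1,2}
      expr = geno[:, :5].sum(1, keepdim=True) + 0.1 * torch.randn(800, 1)

      enc = nn.Sequential(nn.Linear(100, 32), nn.ReLU())
      dec = nn.Linear(32, 100)
      opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
      for _ in range(200):                                  # denoising pretraining
          noisy = geno + 0.2 * torch.randn_like(geno)       # corrupt the input
          loss = nn.functional.mse_loss(dec(enc(noisy)), geno)
          opt.zero_grad(); loss.backward(); opt.step()

      head = nn.Sequential(nn.Dropout(0.2), nn.Linear(32, 16), nn.ReLU(),
                           nn.Linear(16, 1))                # dropout against overfitting
      opt2 = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()), lr=1e-3)
      for _ in range(200):                                  # supervised fine-tuning
          loss = nn.functional.mse_loss(head(enc(geno)), expr)
          opt2.zero_grad(); loss.backward(); opt2.step()
      print("final training MSE:", loss.item())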

  9. Predictive Models for Carcinogenicity and Mutagenicity ...

    Science.gov (United States)

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the data base and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t

  10. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers, and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition for a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  11. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  12. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  13. Predictive modelling of evidence informed teaching

    OpenAIRE

    Zhang, Dell; Brown, C.

    2017-01-01

    In this paper, we analyse the questionnaire survey data collected from 79 English primary schools about the situation of evidence informed teaching, where the evidence could come from research journals or conferences. Specifically, we build a predictive model to see what external factors could help to close the gap between teachers' belief and behaviour in evidence informed teaching, which is the first of its kind to our knowledge. The major challenge, from the data mining perspective, is th...

  14. A Predictive Model for Cognitive Radio

    Science.gov (United States)

    2006-09-14

    Vadde et al. have applied response surface methodology to produce a model for prediction of the response in a given situation, narrowing service configurations to those that best meet communication needs and then selecting from the resulting set of configurations randomly or applying additional screening criteria. [3] K. K. Vadde and V. R. Syrotiuk, "Factor interaction on service delivery in mobile ad hoc networks," 2000. [4] K. K. Vadde, M.-V. R. Syrotiuk, and D. C. Montgomery, 2004.

  15. Aerosol behaviour modeling and measurements

    International Nuclear Information System (INIS)

    Gieseke, J.A.; Reed, L.D.

    1977-01-01

    Aerosol behavior within Liquid Metal Fast Breeder Reactor (LMFBR) containments is of critical importance since most of the radioactive species are expected to be associated with particulate forms and the mass of radiologically significant material leaked to the ambient atmosphere is directly related to the aerosol concentration airborne within the containment. Mathematical models describing the behavior of aerosols in closed environments, besides providing a direct means of assessing the importance of specific assumptions regarding accident sequences, will also serve as the basic tool with which to predict the consequences of various postulated accident situations. Consequently, considerable efforts have been recently directed toward the development of accurate and physically realistic theoretical aerosol behavior models. These models have accounted for various mechanisms affecting agglomeration rates of airborne particulate matter as well as particle removal rates from closed systems. In all cases, spatial variations within containments have been neglected and a well-mixed control volume has been assumed. Examples of existing computer codes formulated from the mathematical aerosol behavior models are the Brookhaven National Laboratory TRAP code, the PARDISEKO-II and PARDISEKO-III codes developed at Karlsruhe Nuclear Research Center, and the HAA-2, HAA-3, and HAA-3B codes developed by Atomics International. Because of their attractive short computation times, the HAA-3 and HAA-3B codes have been used extensively for safety analyses and are attractive candidates with which to demonstrate order of magnitude estimates of the effects of various physical assumptions. Therefore, the HAA-3B code was used as the nucleus upon which changes have been made to account for various physical mechanisms which are expected to be present in postulated accident situations and the latest of the resulting codes has been termed the HAARM-2 code. It is the primary purpose of the HAARM

  16. Tectonic predictions with mantle convection models

    Science.gov (United States)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough

  17. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

    Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20 years of measured global solar radiation data. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models, owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with three other models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (lat. 30°51′N, long. 29°34′E), and then the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, the local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. The most common statistical error measures are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulae of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimation of global solar radiation using this approach can be employed in the design and evaluation of performance for
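
    For context, the sketch below implements a classic member of this model family, a Hargreaves-type estimate driven by the daily temperature range and extraterrestrial radiation; the paper's seventeen new model forms and fitted coefficients are not reproduced.

      import numpy as np

      def extraterrestrial_radiation(doy, lat_deg):
          """Daily extraterrestrial radiation Ra (MJ m-2 day-1), FAO-56 formulas."""
          lat = np.radians(lat_deg)
          dr = 1 + 0.033 * np.cos(2 * np.pi * doy / 365)          # earth-sun distance
          dec = 0.409 * np.sin(2 * np.pi * doy / 365 - 1.39)      # solar declination
          ws = np.arccos(-np.tan(lat) * np.tan(dec))              # sunset hour angle
          return (24 * 60 / np.pi) * 0.0820 * dr * (
              ws * np.sin(lat) * np.sin(dec) + np.cos(lat) * np.cos(dec) * np.sin(ws))

      def hargreaves_rs(tmax, tmin, doy, lat_deg, krs=0.16):
          """Rs = krs * sqrt(Tmax - Tmin) * Ra (krs ~0.16 inland, ~0.19 coastal)."""
          return krs * np.sqrt(tmax - tmin) * extraterrestrial_radiation(doy, lat_deg)

      # Illustrative values for a site near the study latitude, around the solstice.
      print(hargreaves_rs(tmax=31.0, tmin=22.0, doy=172, lat_deg=30.85))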

  18. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  19. Econometric models for predicting confusion crop ratios

    Science.gov (United States)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for the CD/CRD data introduced measurement error into the CD/CRD models.

  20. Modeling and Prediction of Soil Water Vapor Sorption Isotherms

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per

    2015-01-01

    Soil water vapor sorption isotherms describe the relationship between water activity (aw) and moisture content along adsorption and desorption paths. The isotherms are important for modeling numerous soil processes and are also used to estimate several soil properties (specific surface area, clay content, ...0.93) for a wide range of soils; and (ii) develop and test regression models for estimating the isotherms from clay content. Preliminary results show reasonable fits of the majority of the investigated empirical and theoretical models to the measured data, although some models were not capable of fitting both sorption directions accurately. Evaluation of the developed prediction equations showed good estimation of the sorption/desorption isotherms for the tested soils.

  1. Modelling personality, plasticity and predictability in shelter dogs

    Science.gov (United States)

    2017-01-01

    Behavioural assessments of shelter dogs (Canis lupus familiaris) typically comprise standardized test batteries conducted at one time point, but test batteries have shown inconsistent predictive validity. Longitudinal behavioural assessments offer an alternative. We modelled longitudinal observational data on shelter dog behaviour using the framework of behavioural reaction norms, partitioning variance into personality (i.e. inter-individual differences in behaviour), plasticity (i.e. inter-individual differences in average behavioural change) and predictability (i.e. individual differences in residual intra-individual variation). We analysed data on interactions of 3263 dogs (n = 19 281) with unfamiliar people during their first month after arrival at the shelter. Accounting for personality, plasticity (linear and quadratic trends) and predictability improved the predictive accuracy of the analyses compared to models quantifying personality and/or plasticity only. While dogs were, on average, highly sociable with unfamiliar people and sociability increased over days since arrival, group averages were unrepresentative of all dogs and predictions made at the individual level entailed considerable uncertainty. Effects of demographic variables (e.g. age) on personality, plasticity and predictability were observed. Behavioural repeatability was higher one week after arrival compared to arrival day. Our results highlight the value of longitudinal assessments on shelter dogs and identify measures that could improve the predictive validity of behavioural assessments in shelters. PMID:28989764
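
    A minimal sketch of the reaction-norm decomposition using a random-intercept, random-slope mixed model in statsmodels; the study's Bayesian model, which additionally captures predictability via the residual variance, is not reproduced, and the data below are synthetic.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(9)
      dogs, days = 60, 10
      df = pd.DataFrame({
          "dog": np.repeat(np.arange(dogs), days),
          "day": np.tile(np.arange(days), dogs),
      })
      intercepts = rng.normal(3.0, 0.8, dogs)   # personality: individual averages
      slopes = rng.normal(0.1, 0.05, dogs)      # plasticity: individual trends
      df["sociability"] = (intercepts[df.dog] + slopes[df.dog] * df.day
                           + rng.normal(0, 0.5, len(df)))

      m = smf.mixedlm("sociability ~ day", df, groups=df["dog"],
                      re_formula="~day").fit()
      print(m.summary())                        # fixed trend + individual variation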

  2. Prediction of objectively measured physical activity and sedentariness among blue-collar workers using survey questionnaires.

    Science.gov (United States)

    Gupta, Nidhi; Heiden, Marina; Mathiassen, Svend Erik; Holtermann, Andreas

    2016-05-01

    We aimed at developing and evaluating statistical models predicting objectively measured occupational time spent sedentary or in physical activity from self-reported information available in large epidemiological studies and surveys. Two hundred and fourteen blue-collar workers responded to a questionnaire containing information about personal and work-related variables available in most large epidemiological studies and surveys. Workers also wore accelerometers for 1-4 days, measuring time spent sedentary and in physical activity, defined as non-sedentary time. Least-squares linear regression models were developed, predicting objectively measured exposures from selected predictors in the questionnaire. A full prediction model based on age, gender, body mass index, job group, self-reported occupational physical activity (OPA), and self-reported occupational sedentary time (OST) explained 63% (R²adjusted) of the variance of both objectively measured time spent sedentary and in physical activity, since these two exposures were complementary. Single-predictor models based only on self-reported information about either OPA or OST explained 21% and 38%, respectively, of the variance of the objectively measured exposures. Internal validation using bootstrapping suggested that the full and single-predictor models would show almost the same performance in new datasets as in that used for modelling. Both full and single-predictor models based on self-reported information typically available in most large epidemiological studies and surveys were able to predict objectively measured occupational time spent sedentary or in physical activity, with explained variances ranging from 21% to 63%.

  3. Frequency weighted model predictive control of wind turbine

    DEFF Research Database (Denmark)

    Klauco, Martin; Poulsen, Niels Kjølstad; Mirzaei, Mahmood

    2013-01-01

    This work is focused on applying frequency weighted model predictive control (FMPC) to a three-blade horizontal axis wind turbine (HAWT). A wind turbine is a very complex, non-linear system influenced by stochastic wind speed variation. The reduced dynamics considered in this work … are the rotational degree of freedom of the rotor and the tower fore-aft movement. The MPC design is based on a receding horizon policy and a linearised model of the wind turbine. Because the dynamics change with wind speed, several linearisation points must be considered and the control design adjusted … accordingly. In practice it is very hard to measure the effective wind speed; this quantity will be estimated using measurements from the turbine itself. For this purpose a stationary predictive Kalman filter has been used. Stochastic simulations of the wind turbine behaviour with the applied frequency weighted model …
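
    The receding-horizon policy mentioned above can be illustrated with a generic linear MPC loop. The sketch below uses cvxpy with placeholder matrices; it is not a wind turbine model, and the frequency weighting that defines FMPC is omitted.

```python
# Sketch: receding-horizon MPC for a linearised plant x+ = A x + B u (placeholder matrices).
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 0.95]])  # illustrative linearised dynamics
B = np.array([[0.0], [0.1]])
Q = np.diag([1.0, 0.1])                  # state weight
R = np.array([[0.01]])                   # input weight
N = 20                                   # prediction horizon

def mpc_step(x0):
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.abs(u[:, k]) <= 1.0]  # actuator limit
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[:, 0]                  # apply only the first input, then re-solve

print(mpc_step(np.array([1.0, 0.0])))
```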

  4. Predictive Modeling by the Cerebellum Improves Proprioception

    Science.gov (United States)

    Bhanpuri, Nasir H.; Okamura, Allison M.

    2013-01-01

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance. PMID:24005283

  5. Predictive modeling of mosquito abundance and dengue transmission in Kenya

    Science.gov (United States)

    Caldwell, J.; Krystosik, A.; Mutuku, F.; Ndenga, B.; LaBeaud, D.; Mordecai, E.

    2017-12-01

    Approximately 390 million people are exposed to dengue virus every year, and with no widely available treatments or vaccines, predictive models of disease risk are valuable tools for vector control and disease prevention. The aim of this study was to modify and improve climate-driven predictive models of dengue vector abundance (Aedes spp. mosquitoes) and viral transmission to people in Kenya. We simulated disease transmission using a temperature-driven mechanistic model and compared model predictions with vector trap data for larvae, pupae, and adult mosquitoes collected between 2014 and 2017 at four sites spanning urban and rural villages in Kenya. We tested the predictive capacity of our models using four temperature measures (minimum, maximum, range, and anomalies) across daily, weekly, and monthly time scales. Our results indicate that seasonal temperature variation is a key driver of Aedes mosquito abundance and disease transmission. These models can help vector control programs target specific locations and times when vectors are likely to be present, and can be modified for other Aedes-transmitted diseases and arboviral endemic regions around the world.
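
    Temperature-driven mechanistic models of this kind typically express mosquito life-history traits as unimodal functions of temperature. The sketch below uses a Briere curve, a common functional form in this literature; the parameter values are illustrative, not those fitted in the study.

```python
# Sketch: a Briere thermal-response curve for, e.g., mosquito development rate.
import numpy as np

def briere(T, c=2.0e-4, T_min=13.0, T_max=40.0):
    """Zero outside [T_min, T_max], unimodal in between (illustrative parameters)."""
    T = np.asarray(T, dtype=float)
    out = c * T * (T - T_min) * np.sqrt(np.clip(T_max - T, 0.0, None))
    return np.where((T > T_min) & (T < T_max), out, 0.0)

daily_T = np.array([18.0, 24.0, 29.0, 33.0, 39.0])
print(briere(daily_T))  # seasonal temperature variation drives the trait, hence abundance
```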

  6. Prediction of Chemical Function: Model Development and ...

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, thereby impacting exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning-based models for classifying chemicals in terms of their likely functional roles in products based on structure was developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi…

  7. Gamma-Ray Pulsars: Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  8. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. We used a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We obtained a model that was able to predict CDR with a sensitivity of 83.3%, a specificity of 63.1%, and an area under the curve of 82.6%, results comparable to those of similar studies that have used the RF model. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see wider application.
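
    A random-forest recurrence model with the metrics quoted above can be sketched as follows; the feature table, file name and column names are hypothetical.

```python
# Sketch: RF model of C. difficile recurrence with sensitivity/specificity/AUC.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("cdi_patients.csv")  # hypothetical file of proposed risk factors
X, y = df.drop(columns="recurred"), df["recurred"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, rf.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```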

  9. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents an effort to apply neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model was successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
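
    A small feed-forward network of this kind is easy to reproduce; the sketch below uses scikit-learn's MLPRegressor with hypothetical column names and reports the share of test predictions within 10% absolute error, mirroring the evaluation described above.

```python
# Sketch: neural network from mix proportions, MAS and slump to 28-day strength.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("concrete_mixes.csv")  # hypothetical file
X = df[["cement", "water", "fine_agg", "coarse_agg", "max_agg_size", "slump"]]
y = df["strength_28d"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                 random_state=0))
net.fit(X_tr, y_tr)

rel_err = abs(net.predict(X_te) - y_te) / y_te
print("share of predictions within 10%:", (rel_err < 0.10).mean())
```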

  10. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, and to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and we conclude by suggesting directions for further studies.

  11. PREDICTION MODELS OF GRAIN YIELD AND CHARACTERIZATION

    Directory of Open Access Journals (Sweden)

    Narciso Ysac Avila Serrano

    2009-06-01

    With the objective of characterizing the grain yield of five cowpea cultivars and finding linear regression models to predict it, a study was conducted in La Paz, Baja California Sur, Mexico. A completely randomized block design was used. Simple and multivariate analyses of variance were carried out, using the canonical variables to characterize the cultivars. The variables clusters per plant, pods per plant, pods per cluster, seed weight per plant, seed hectoliter weight, 100-seed weight, seed length, seed width, seed thickness, pod length, pod width, pod weight, seeds per pod, and seed weight per pod showed significant differences (P ≤ 0.05) among cultivars. The Paceño and IT90K-277-2 cultivars showed the highest seed weight per plant. The linear regression models showed correlation coefficients ≥ 0.92. In these models, seed weight per plant, pods per cluster, pods per plant, clusters per plant and pod length showed significant correlations (P ≤ 0.05). In conclusion, the results showed that grain yield differs among cultivars and, for its estimation, the prediction models showed highly dependable determination coefficients.

  12. Third trimester ultrasound soft-tissue measurements accurately predict macrosomia.

    Science.gov (United States)

    Maruotti, Giuseppe Maria; Saccone, Gabriele; Martinelli, Pasquale

    2017-04-01

    To evaluate the accuracy of sonographic measurements of fetal soft tissue in the prediction of macrosomia. Electronic databases were searched from their inception until September 2015 with no language restrictions. We included only studies assessing the accuracy of sonographic measurements of fetal soft tissue in the abdomen or thigh in the prediction of macrosomia at ≥34 weeks of gestation. The primary outcome was the accuracy of sonographic measurements of fetal soft tissue in the prediction of macrosomia. We generated forest plots for the pooled sensitivity and specificity with 95% confidence intervals (CI). Additionally, summary receiver-operating characteristic (ROC) curves were plotted and the area under the curve (AUC) was computed to evaluate the overall performance of the diagnostic test. Three studies, including 287 singleton gestations, were analyzed. The pooled sensitivity of sonographic measurements of abdominal or thigh fetal soft tissue in the prediction of macrosomia was 80% (95% CI: 66-89%) and the pooled specificity was 95% (95% CI: 91-97%). The AUC for diagnostic accuracy was 0.92, suggesting high diagnostic accuracy. Third-trimester sonographic measurements of fetal soft tissue after 34 weeks may help to detect macrosomia with a high degree of accuracy. The pooled detection rate was 80%. Standardization of measurement criteria, reproducibility assessment, reference charts of fetal subcutaneous tissue and large studies to establish the optimal cutoff of fetal adipose thickness are necessary before the introduction of fetal soft-tissue markers into clinical practice.
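
    Pooling sensitivity and specificity across studies is commonly done on the logit scale; the toy sketch below shows fixed-effect inverse-variance pooling of sensitivity with made-up counts (the review itself most likely fitted a bivariate random-effects model).

```python
# Sketch: logit-scale inverse-variance pooling of sensitivity (illustrative counts).
import numpy as np

tp = np.array([20, 35, 18])  # hypothetical true positives per study
fn = np.array([5, 8, 5])     # hypothetical false negatives per study

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn        # delta-method variance of the logit
w = 1 / var

pooled = (w * logit).sum() / w.sum()
se = np.sqrt(1 / w.sum())
expit = lambda z: 1 / (1 + np.exp(-z))
print("pooled sensitivity: %.2f (95%% CI %.2f-%.2f)"
      % (expit(pooled), expit(pooled - 1.96 * se), expit(pooled + 1.96 * se)))
```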

  13. Evaluation of Deep Learning Models for Predicting CO2 Flux

    Science.gov (United States)

    Halem, M.; Nguyen, P.; Frankel, D.

    2017-12-01

    Artificial neural networks have been employed to calculate surface flux measurements from station data because they are able to fit highly nonlinear relations between input and output variables without knowing the detailed relationships between the variables. However, the accuracy of neural-net estimates of CO2 flux from observations of CO2 and other atmospheric variables is influenced by the architecture of the neural model, the availability and complexity of interactions between physical variables such as wind and temperature, and indirect variables such as latent heat and sensible heat. We evaluate two deep learning models, a feed-forward and a recurrent neural network model, to learn how each responds to the physical measurements and the time dependency of the measurements of CO2 concentration, humidity, pressure, temperature, wind speed, etc. for predicting the CO2 flux. In this paper, we focus on a) building neural network models for estimating CO2 flux based on DOE data from tower Atmospheric Radiation Measurement data; b) evaluating the impact of the choice of surface variables and model hyper-parameters on the accuracy of surface flux predictions; c) assessing the applicability of the neural network models to estimating CO2 flux using OCO-2 satellite data; d) studying the efficiency of GPU acceleration for neural network performance using IBM PowerAI deep learning software and packages on an IBM Minsky system.
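
    The recurrent variant can be sketched as a small LSTM regressor over a window of past measurements; the PyTorch code below is illustrative, with an arbitrary feature count and window length.

```python
# Sketch: LSTM regression of CO2 flux from a window of meteorological inputs.
import torch
import torch.nn as nn

class FluxLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # flux estimate at the last time step

model = FluxLSTM()
x = torch.randn(8, 24, 6)              # e.g. 24 hourly observations of 6 variables
y = torch.randn(8, 1)                  # matching flux targets (synthetic)
loss = nn.MSELoss()(model(x), y)
loss.backward()                        # an optimizer step would follow in training
```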

  14. PEEX Modelling Platform for Seamless Environmental Prediction

    Science.gov (United States)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

    The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions, as well as China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modelling platform and observation systems in the Pan-Eurasian region and presents the future baselines for coherent and coordinated research infrastructures in the PEEX domain. The PEEX Modelling Platform is characterized by a complex, seamless, integrated Earth System Modelling (ESM) approach, in combination with specific models of different processes and elements of the system acting on different temporal and spatial scales. An ensemble approach is taken for the integration of modelling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modelling, and modelling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modelling. The analyses of the anticipated large volumes of data produced by the available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.

  15. Settlement Prediction of Road Soft Foundation Using a Support Vector Machine (SVM) Based on Measured Data

    Directory of Open Access Journals (Sweden)

    Yu Huiling

    2016-01-01

    The support vector machine (SVM) is a relatively new artificial intelligence technique, based on statistical learning theory, which is increasingly being applied to geotechnical problems and is yielding encouraging results. A case study based on a road foundation engineering project shows that the forecast results are in good agreement with the measured data. The SVM model is also compared with a BP artificial neural network model and the traditional hyperbola method. The prediction results indicate that the SVM model has better prediction ability than the BP neural network model and the hyperbola method, so settlement prediction based on the SVM model can reflect the actual settlement process more correctly. The results indicate that the method is effective and feasible and that the nonlinear mapping relation between foundation settlement and its influencing factors can be expressed well. It provides a new method for predicting foundation settlement.
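
    A settlement predictor of this type can be sketched with scikit-learn's SVR on a synthetic monitoring record; the hyperbolic ground-truth curve, kernel and hyper-parameters below are assumptions made for illustration.

```python
# Sketch: SVM regression of settlement versus time, trained on the monitored period.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

t = np.linspace(0, 400, 60)  # days (synthetic monitoring record)
settle = 120 * t / (80 + t) + np.random.default_rng(0).normal(0, 2, t.size)  # mm

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=1.0))
svr.fit(t[:40, None], settle[:40])      # fit on the measured period
print(svr.predict(t[40:, None]))        # predicted later settlement
```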

  16. Measuring Visual Closeness of 3-D Models

    KAUST Repository

    Gollaz Morales, Jose Alejandro

    2012-09-01

    Measuring visual closeness of 3-D models is an important issue for different problems and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing normal difference as a parameter in a weighted metric in Metro's sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive, since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.

  17. Uncertainty Quantification and Comparison of Weld Residual Stress Measurements and Predictions.

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, John R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g. confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.
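
    The pointwise statistical bounds described above can be illustrated with a simple nonparametric bootstrap of the mean measurement-minus-prediction difference along a profile; the study itself used a semi-parametric bootstrap within a functional data analysis framework, and the arrays below are synthetic.

```python
# Sketch: bootstrap 95% pointwise bounds on the mean measurement-prediction difference.
import numpy as np

rng = np.random.default_rng(0)
meas = rng.normal(100, 30, (10, 25))  # 10 measured stress profiles (MPa), synthetic
pred = rng.normal(90, 25, (7, 25))    # 7 predicted profiles (MPa), synthetic

boot = []
for _ in range(2000):
    m = meas[rng.integers(0, len(meas), len(meas))].mean(axis=0)
    p = pred[rng.integers(0, len(pred), len(pred))].mean(axis=0)
    boot.append(m - p)

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(lo[:3], hi[:3])  # bounds at the first few through-thickness positions
```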

  18. An Anisotropic Hardening Model for Springback Prediction

    International Nuclear Information System (INIS)

    Zeng, Danielle; Xia, Z. Cedric

    2005-01-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for the material's anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  19. Predicting beef carcass retail products of Mediterranean buffaloes by real-time ultrasound measures

    Directory of Open Access Journals (Sweden)

    R. De Castro Mourão

    2010-02-01

    Twenty-eight Mediterranean buffalo bulls were scanned with real-time ultrasound (RTU), slaughtered, and fabricated into retail cuts to determine the potential for ultrasound measures to predict carcass retail yield. Ultrasound measures of fat thickness, ribeye area and rump fat thickness were recorded three to five days prior to slaughter. Carcass measurements were taken, and one side of each carcass was fabricated into retail cuts. Stepwise regression analysis was used to compare possible models for prediction of either kilograms or percent retail product from carcass measurements and ultrasound measures. Results indicate that prediction models for percent or kilograms of retail product using RTU measures were similar in predictive power and accuracy to models derived from carcass measurements. Both fat thickness and ribeye area were over-predicted when measured ultrasonically compared to measurements taken on the carcass in the cooler. The mean absolute differences for both traits are larger than the mean differences, indicating that some images were interpreted to be larger and some smaller than the actual carcass measurements. Ultrasound measurements of ribeye area (REA) and fat thickness (FT) had positive correlations with carcass measures of the same traits (r = .96 for REA and r = .99 for FT). Standard errors of prediction are currently used as the standard to certify ultrasound technicians for accuracy. Regression equations using live weight (LW), rib eye area (REAU) and subcutaneous fat thickness between the 12th and 13th ribs (FTU) and over the biceps femoris muscle (FTP8), measured by ultrasound immediately before slaughter, explained 95% of the variation in hot carcass weight.

  20. Prediction of Wine Sensorial Quality by Routinely Measured Chemical Properties

    Directory of Open Access Journals (Sweden)

    Bednárová Adriána

    2014-12-01

    The determination of the sensorial quality of wines is of great interest to wine consumers and producers, since in most cases it declares the quality. The sensorial assays carried out by a group of experts are time-consuming and expensive, especially when dealing with large batches of wines. Therefore, an attempt was made to assess the possibility of estimating wine sensorial quality using routinely measured chemical descriptors as predictors. For this purpose, 131 Slovenian red wine samples of different varieties and years of production were analysed, and correlation and principal component analysis were applied to find inter-relations between the studied oenological descriptors. The method of artificial neural networks (ANNs) was utilised as the prediction tool for estimating the overall sensorial quality of red wines. Each model was rigorously validated, and sensitivity analysis was applied as a method for selecting the most important predictors. Acceptable results were obtained when data representing only one year of production were included in the analysis. In this case, the coefficient of determination (R2) associated with the training data was 0.95 and that for the validation data was 0.90. When estimating sensorial quality in categorical form, 94% and 85% of correctly classified samples were achieved for the training and validation subsets, respectively.

  1. Agreement between measured height, and height predicted from ...

    African Journals Online (AJOL)

    … an accurate predictor of height in forensic science, but cannot be directly measured in living patients. In a recent study in a public hospital in Brazil, it was found that height prediction equations based on knee height outperformed those based on … Discrepancies of up to 19.8 cm were recorded, which is clinically …

  2. Accuracy of Mandibular Rami Measurements in Prediction of Sex ...

    African Journals Online (AJOL)

    Materials and Methods: A cross-sectional, observational study was carried out using 500 digital orthopantomographs (OPGs), with five rami measurements taken for each radiograph in the South Indian population. The determination of sex was done by discriminant function analysis, with a prediction accuracy of 84.1% ...

  3. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry have accumulated more than 15 years of history already. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditionally reserved for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access, and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable, model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges for the management and analysis of large data sets. Web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.

  4. [Endometrial cancer: Predictive models and clinical impact].

    Science.gov (United States)

    Bendifallah, Sofiane; Ballester, Marcos; Daraï, Emile

    2017-12-01

    In France, in 2015, endometrial cancer (EC) was the most common gynecological cancer in terms of incidence and the fourth most common cancer in women, with about 8151 new cases and nearly 2179 deaths reported. Treatments (surgery, external radiotherapy, brachytherapy and chemotherapy) are currently delivered on the basis of an estimate of the recurrence risk, of lymph node metastasis or of survival probability. This risk is determined from prognostic factors (clinical, histological, imaging, biological) taken alone or grouped together in classification systems, which are currently insufficient to account for the evolutionary and prognostic heterogeneity of endometrial cancer. For endometrial cancer, the concept of mathematical modeling and its application to prediction have developed in recent years. These biomathematical tools have opened a new era of care oriented towards the promotion of targeted therapies and personalized treatments. Many predictive models have been published to estimate the risk of recurrence and lymph node metastasis, but only a small fraction of them are sufficiently relevant and of clinical utility. The avenues for optimization are multiple and varied, suggesting that these mathematical models may find a place in clinical practice in the near future. The development of high-throughput genomics is likely to offer a more detailed molecular characterization of the disease and its heterogeneity. Copyright © 2017 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  5. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  6. Application of prediction of equilibrium to servo-controlled calorimetry measurements

    International Nuclear Information System (INIS)

    Mayer, R.L. II

    1987-01-01

    Research was performed to develop an endpoint prediction algorithm for use with calorimeters operating in the digital servo-controlled mode. The purpose of this work was to reduce calorimetry measurement times while maintaining the high degree of precision and low bias expected from calorimetry measurements. Data from routine operation of two calorimeters were used to test predictive models at each stage of development against time savings, precision, and robustness criteria. The results of the study indicated that calorimetry measurement times can be significantly reduced using this technique. The time savings are, however, dependent on parameters in the digital servo-control algorithm and on packaging characteristics of the measured items.
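
    Endpoint prediction of this kind is often implemented by fitting an exponential approach to equilibrium on the early part of a run and reading off the asymptote; the sketch below illustrates that idea on synthetic data and is not necessarily the author's algorithm.

```python
# Sketch: predict the equilibrium power of a calorimeter run from its early samples.
import numpy as np
from scipy.optimize import curve_fit

def approach(t, p_eq, dp, tau):
    """Exponential approach to equilibrium: p(t) = p_eq + dp * exp(-t / tau)."""
    return p_eq + dp * np.exp(-t / tau)

t = np.linspace(0, 120, 40)  # minutes (synthetic run)
rng = np.random.default_rng(1)
power = approach(t, 5.0, 1.5, 35.0) + rng.normal(0, 0.01, t.size)

(p_eq, dp, tau), _ = curve_fit(approach, t[:20], power[:20],
                               p0=(power[-1], 1.0, 30.0))
print("predicted equilibrium power:", p_eq)  # available long before settling
```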

  7. Predictions of models for environmental radiological assessment

    International Nuclear Information System (INIS)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando

    2011-01-01

    In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose and the risk to human beings. Although it is recognized that site-specific local data are important for improving the quality of dose assessment results, in practice obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used and general parameters. The various models available use different mathematical approaches of different complexity, which can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for 137Cs and 60Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common comparison basis. The results of the intercomparison exercise are presented briefly. (author)

  8. Clinical and epidemiological round: Approach to clinical prediction models

    Directory of Open Access Journals (Sweden)

    Isaza-Jaramillo, Sandra

    2017-01-01

    Research related to prognosis can be classified as follows: fundamental research, which shows differences in health outcomes; research on prognostic factors, which identifies and characterizes variables; the development, validation and impact assessment of predictive models; and finally, stratified medicine, which establishes groups that share a risk factor associated with the outcome of interest. A person's outcome regarding health or disease status can be predicted from certain characteristics associated with that outcome, whether observed beforehand or concurrently. This can be done by means of prognostic or diagnostic predictive models. The development of a predictive model requires care in the selection, definition, measurement and categorization of predictor variables; in the exploration of interactions; in the number of variables to be included; in the calculation of sample size; in the handling of missing data; in the statistical tests to be used; and in the presentation of the model. The model thus developed must be validated in a different group of patients to establish its calibration, discrimination and usefulness.

  9. Modelling the electrical properties of concrete for shielding effectiveness prediction

    International Nuclear Information System (INIS)

    Sandrolini, L; Reggiani, U; Ogunsola, A

    2007-01-01

    Concrete is a porous, heterogeneous material whose abundant use in numerous applications demands a detailed understanding of its electrical properties. Besides experimental measurements, material theoretical models can be useful to investigate its behaviour with respect to frequency, moisture content or other factors. These models can be used in electromagnetic compatibility (EMC) to predict the shielding effectiveness of a concrete structure against external electromagnetic waves. This paper presents the development of a dispersive material model for concrete out of experimental measurement data to take account of the frequency dependence of concrete's electrical properties. The model is implemented into a numerical simulator and compared with the classical transmission-line approach in shielding effectiveness calculations of simple concrete walls of different moisture content. The comparative results show good agreement in all cases; a possible relation between shielding effectiveness and the electrical properties of concrete and the limits of the proposed model are discussed
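
    The classical transmission-line estimate that the dispersive model is compared against can be sketched for a single wall at normal incidence; the permittivity and conductivity values below are illustrative, not the measured data behind the paper's model.

```python
# Sketch: transmission-line shielding effectiveness of a homogeneous concrete slab.
import numpy as np

eps0, mu0 = 8.854e-12, 4e-7 * np.pi

def shielding_db(f, d=0.2, eps_r=6.0, sigma=0.05):
    """Normal-incidence SE (dB) of a slab of thickness d between air half-spaces."""
    w = 2 * np.pi * f
    eps_c = eps_r * eps0 - 1j * sigma / w      # complex permittivity
    eta0 = np.sqrt(mu0 / eps0)                 # free-space impedance
    eta = np.sqrt(mu0 / eps_c)                 # slab wave impedance
    gamma = 1j * w * np.sqrt(mu0 * eps_c)      # propagation constant
    G = (eta - eta0) / (eta + eta0)            # interface reflection coefficient
    T = (4 * eta * eta0 * np.exp(-gamma * d)
         / ((eta + eta0) ** 2 * (1 - G ** 2 * np.exp(-2 * gamma * d))))
    return -20 * np.log10(np.abs(T))

for f in (100e6, 500e6, 1e9):
    print(f, shielding_db(f))
```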

  11. Measurement error models with interactions

    Science.gov (United States)

    Midthune, Douglas; Carroll, Raymond J.; Freedman, Laurence S.; Kipnis, Victor

    2016-01-01

    An important use of measurement error models is to correct regression models for bias due to covariate measurement error. Most measurement error models assume that the observed error-prone covariate ($W$) is a linear function of the unobserved true covariate ($X$) plus other covariates ($Z$) in the regression model. In this paper, we consider models for $W$ that include interactions between $X$ and $Z$. We derive the conditional distribution of …

  12. Systematic prediction error correction: a novel strategy for maintaining the predictive abilities of multivariate calibration models.

    Science.gov (United States)

    Chen, Zeng-Ping; Li, Li-Mei; Yu, Ru-Qin; Littlejohn, David; Nordon, Alison; Morris, Julian; Dann, Alison S; Jeffkins, Paul A; Richardson, Mark D; Stimpson, Sarah L

    2011-01-07

    The development of reliable multivariate calibration models for spectroscopic instruments in on-line/in-line monitoring of chemical and bio-chemical processes is generally difficult, time-consuming and costly. Therefore, it is preferable if calibration models can be used for an extended period, without the need to replace them. However, in many process applications, changes in the instrumental response (e.g. owing to a change of spectrometer) or variations in the measurement conditions (e.g. a change in temperature) can cause a multivariate calibration model to become invalid. In this contribution, a new method, systematic prediction error correction (SPEC), has been developed to maintain the predictive abilities of multivariate calibration models when e.g. the spectrometer or measurement conditions are altered. The performance of the method has been tested on two NIR data sets (one with changes in instrumental responses, the other with variations in experimental conditions) and the outcomes compared with those of some popular methods, i.e. global PLS, univariate slope and bias correction (SBC) and piecewise direct standardization (PDS). The results show that SPEC achieves satisfactory analyte predictions with significantly lower RMSEP values than global PLS and SBC for both data sets, even when only a few standardization samples are used. Furthermore, SPEC is simple to implement and requires less information than PDS, which offers advantages for applications with limited data.

  13. Use of Information Measures and Their Approximations to Detect Predictive Gene-Gene Interaction

    Directory of Open Access Journals (Sweden)

    Jan Mielniczuk

    2017-01-01

    We reconsider the properties and relationships of the interaction information and its modified versions in the context of detecting the interaction of two SNPs for the prediction of a binary outcome when the interaction information is positive. This property is called predictive interaction, and we state some new sufficient conditions for it to hold true. We also study chi-square approximations to these measures. It is argued that interaction information is a different, and sometimes more natural, measure of interaction than the logistic interaction parameter, especially when the SNPs are dependent. We introduce a novel measure of predictive interaction based on interaction information and its modified version. In numerical experiments, which use copulas to model dependence, we study examples in which the logistic interaction parameter is zero or close to zero but predictive interaction is detected by the new measure, while remaining undetected by the likelihood ratio test.
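
    Interaction information for two SNPs and a binary outcome can be computed directly from the three-way joint distribution as II = I((X1, X2); Y) - I(X1; Y) - I(X2; Y); the sketch below does this for an illustrative random table, with positive II suggesting predictive interaction.

```python
# Sketch: interaction information from a 3-way contingency/probability table.
import numpy as np

def mi(pxy):
    """Mutual information (bits) of a 2-D joint probability table."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return (pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum()

p = np.random.default_rng(0).random((3, 3, 2))  # SNP1 x SNP2 x outcome (illustrative)
p /= p.sum()

joint = mi(p.reshape(9, 2))  # I((X1, X2); Y)
i1 = mi(p.sum(axis=1))       # I(X1; Y)
i2 = mi(p.sum(axis=0))       # I(X2; Y)
print("interaction information:", joint - i1 - i2)
```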

  14. Mathematical modelling methodologies in predictive food microbiology: a SWOT analysis.

    Science.gov (United States)

    Ferrer, Jordi; Prats, Clara; López, Daniel; Vives-Rego, Josep

    2009-08-31

    Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models according to whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity to perform measurements on single cells and the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link the unicellular and population levels. Every model is built in response to a particular question and with different aims. Even so, in this research we conducted a SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis of the different approaches (population continuous modelling and Individual-based Modelling), which we hope will be helpful for current and future …

  15. Glycated Hemoglobin Measurement and Prediction of Cardiovascular Disease

    DEFF Research Database (Denmark)

    Di Angelantonio, Emanuele; Gao, Pei; Khan, Hassan

    2014-01-01

    IMPORTANCE: The value of measuring levels of glycated hemoglobin (HbA1c) for the prediction of first cardiovascular events is uncertain. OBJECTIVE: To determine whether adding information on HbA1c values to conventional cardiovascular risk factors is associated with improvement in prediction of cardiovascular disease (CVD) risk. DESIGN, SETTING, AND PARTICIPANTS: Analysis of individual-participant data available from 73 prospective studies involving 294,998 participants without a known history of diabetes mellitus or CVD at the baseline assessment. MAIN OUTCOMES AND MEASURES: Measures of risk … …,840 incident fatal and nonfatal CVD outcomes (13,237 coronary heart disease and 7603 stroke outcomes) were recorded. In analyses adjusted for several conventional cardiovascular risk factors, there was an approximately J-shaped association between HbA1c values and CVD risk. The association between HbA1c values …

  16. Prediction of objectively measured physical activity and sedentariness among blue-collar workers using survey questionnaires

    DEFF Research Database (Denmark)

    Gupta, Nidhi; Heiden, Marina; Mathiassen, Svend Erik

    2016-01-01

    OBJECTIVES: We aimed at developing and evaluating statistical models predicting objectively measured occupational time spent sedentary or in physical activity from self-reported information available in large epidemiological studies and surveys. METHODS: Two-hundred-and-fourteen blue-collar workers responded to a questionnaire containing information about personal and work related variables, available in most large epidemiological studies and surveys. Workers also wore accelerometers for 1-4 days measuring time spent sedentary and in physical activity, defined as non-sedentary time. Least-squares linear regression models were developed, predicting objectively measured exposures from selected predictors in the questionnaire. RESULTS: A full prediction model based on age, gender, body mass index, job group, self-reported occupational physical activity (OPA), and self-reported occupational sedentary …

  17. Prediction of Human Glomerular Filtration Rate from Preterm Neonates to Adults: Evaluation of Predictive Performance of Several Empirical Models.

    Science.gov (United States)

    Mahmood, Iftekhar; Staschen, Carl-Michael

    2016-03-01

    The objective of this study was to evaluate the predictive performance of several allometric empirical models (body-weight-dependent exponent (BDE), age-dependent exponent (ADE), fixed exponent 0.75, a data-dependent single exponent, and maturation models) to predict glomerular filtration rate (GFR) in preterm and term neonates, infants, children, and adults without any renal disease. In this analysis, the models were developed from GFR data obtained from inulin clearance (preterm neonates to adults; n = 93) and the predictive performance of these models was evaluated in 335 subjects (preterm neonates to adults). The primary end point was the prediction of GFR from the empirical allometric models and the comparison of the predicted GFR with measured GFR. A prediction error within ±30% was considered acceptable. Overall, the predictive performance of four of the models (BDE, ADE, and the two maturation models) for the prediction of mean GFR was good across all age groups, but the prediction of GFR in individual healthy subjects, especially in neonates and infants, was erratic and may be clinically unacceptable.
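
    The body-weight-dependent allometric idea and the ±30% acceptance criterion can be sketched as follows; the coefficient, exponent and data below are illustrative, not the values fitted in the study.

```python
# Sketch: allometric GFR prediction and the +/-30% prediction-error criterion.
import numpy as np

def gfr_bde(weight_kg, a=120.0, b=0.75):
    """Allometric GFR (mL/min) scaled from a 70 kg reference with exponent b."""
    return a * (weight_kg / 70.0) ** b

weights = np.array([1.2, 6.0, 25.0, 70.0])     # preterm neonate to adult (illustrative)
measured = np.array([4.1, 12.0, 55.0, 110.0])  # hypothetical inulin clearances (mL/min)

pe = 100 * (gfr_bde(weights) - measured) / measured
print("prediction error (%):", pe)
print("within +/-30%:", np.abs(pe) <= 30)  # individual-level errors can be large in neonates
```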

  18. A new, accurate predictive model for incident hypertension.

    Science.gov (United States)

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-11-01

    Data mining represents an alternative approach to identifying new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures. The primary study population consisted of 1605 normotensive individuals aged 20-79 years with 5-year follow-up from the population-based Study of Health in Pomerania (SHIP). The initial set was randomly split into a training and a testing set. We used a probabilistic graphical model applying a Bayesian network to create a predictive model for incident hypertension and compared its predictive performance with the established Framingham risk score for hypertension. Finally, the model was validated in 2887 participants from INTER99, a Danish community-based intervention study. In the training set of SHIP data, the Bayesian network used a small subset of relevant baseline features including age, mean arterial pressure, rs16998073, serum glucose and urinary albumin concentrations. Furthermore, we detected relevant interactions between age and serum glucose as well as between rs16998073 and urinary albumin concentrations (area under the receiver operating characteristic curve (AUC) 0.76). The model was confirmed in the SHIP validation set (AUC 0.78) and externally replicated in INTER99 (AUC 0.77). Compared to the established Framingham risk score for hypertension, the predictive performance of the new model was similar in the SHIP validation set and moderately better in INTER99. Data mining procedures identified a predictive model for incident hypertension which included innovative and easy-to-measure variables. The findings promise great applicability in screening settings and clinical practice.

  19. Models to predict the start of the airborne pollen season

    Science.gov (United States)

    Siniscalco, Consolata; Caramiello, Rosanna; Migliavacca, Mirco; Busetto, Lorenzo; Mercalli, Luca; Colombo, Roberto; Richardson, Andrew D.

    2015-07-01

    Aerobiological data can be used as indirect but reliable measures of flowering phenology to analyze the response of plant species to ongoing climate change. The aims of this study are to evaluate the performance of several phenological models for predicting the pollen start of season (PSS) in seven spring-flowering trees (Alnus glutinosa, Acer negundo, Carpinus betulus, Platanus occidentalis, Juglans nigra, Alnus viridis, and Castanea sativa) and in two summer-flowering herbaceous species (Artemisia vulgaris and Ambrosia artemisiifolia) using a 26-year aerobiological data set collected in Turin (Northern Italy). The data showed reduced interannual variability of the PSS in the summer-flowering species compared to the spring-flowering ones. Spring warming models with photoperiod limitation performed best for the great majority of the studied species, while chilling class models were selected only for the early spring-flowering species. For Ambrosia and Artemisia, spring warming models were also selected as the best models, indicating that temperature sums are positively related to flowering. However, the poor variance explained by the models suggests that further analyses must be carried out to develop better models for predicting the PSS in these two species. Modelling the pollen season start on a very wide data set provided a new opportunity to highlight the limits of models in elucidating the environmental factors driving the pollen season start when some requirements, such as chilling or photoperiod, are always fulfilled, or when the variance is very poor and not explained by the models.
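
    A spring warming model with photoperiod limitation of the kind selected here can be sketched as accumulated growing degree days gated by day length; the thresholds and parameters below are illustrative, not the fitted values.

```python
# Sketch: predict the pollen season start from forcing accumulation with a photoperiod gate.
import numpy as np

def predict_pss(daily_temp, daylength, t_base=5.0, f_crit=150.0, p_crit=11.0):
    forcing = 0.0
    for day, (t, p) in enumerate(zip(daily_temp, daylength), start=1):
        if p >= p_crit:                      # photoperiod limitation
            forcing += max(t - t_base, 0.0)  # growing degree days
        if forcing >= f_crit:
            return day                       # predicted PSS (day of year)
    return None

rng = np.random.default_rng(0)
temps = 2 + 18 * np.sin(np.linspace(0, np.pi / 2, 180)) + rng.normal(0, 2, 180)
daylen = 9 + 6 * np.sin(np.linspace(0, np.pi / 2, 180))
print(predict_pss(temps, daylen))
```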

  20. Step Prediction During Perturbed Standing Using Center Of Pressure Measurements

    Directory of Open Access Journals (Sweden)

    Milos R. Popovic

    2007-04-01

    The development of a sensor that can measure balance during quiet standing and predict stepping response in the event of perturbation has many clinically relevant applications, including closed-loop control of a neuroprosthesis for standing. This study investigated the feasibility of an algorithm that can predict in real time when an able-bodied individual who is quietly standing will have to make a step to compensate for an external perturbation. Anterior and posterior perturbations were performed on 16 able-bodied subjects using a pulley system with a dropped weight. A linear relationship was found between the peak center of pressure (COP) velocity and the peak COP displacement caused by the perturbation. This result suggests that one can predict when a person will have to make a step based on COP velocity measurements alone. Another important feature of this finding is that the peak COP velocity occurs considerably before the peak COP displacement. As a result, one can predict whether a subject will have to make a step in response to a perturbation sufficiently ahead of the time when the subject is actually forced to make the step. The proposed instability detection algorithm will be implemented in a sensor system using insole sheets in shoes with miniaturized pressure sensors by which the COP velocity can be continuously measured. The sensor system will be integrated into a closed-loop feedback system with a neuroprosthesis for standing in the near future.
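
    The velocity-based predictor can be sketched as follows: estimate COP velocity, map its peak through the observed linear relation to a predicted peak displacement, and flag a step if that exceeds a stability limit. The slope, intercept and limit below are invented for illustration, not the values estimated from the 16 subjects.

```python
# Sketch: flag an imminent compensatory step from the COP velocity peak.
import numpy as np

def step_predicted(cop, dt=0.01, slope=0.35, intercept=0.01, limit=0.08):
    """cop: 1-D anterior-posterior COP trace (m); parameters are illustrative."""
    v = np.gradient(cop, dt)                       # COP velocity
    predicted_peak_disp = slope * np.max(np.abs(v)) + intercept
    return predicted_peak_disp > limit             # True -> a step is predicted

t = np.arange(0, 2, 0.01)
cop = 0.05 * (1 - np.cos(np.pi * t / 2))           # synthetic perturbation response
print(step_predicted(cop))
```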

  1. An Operational Model for the Prediction of Jet Blast

    Science.gov (United States)

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules: a jet exhaust model, a jet centerline decay model and an aircraft motion model. The final analysis was compared with d…

  2. Human Posture and Movement Prediction based on Musculoskeletal Modeling

    DEFF Research Database (Denmark)

    Farahani, Saeed Davoudabadi

    2014-01-01

    This thesis explores an optimization-based formulation, so-called inverse-inverse dynamics, for the prediction of human posture and motion dynamics when performing various tasks. It is explained how this technique enables us to predict natural kinematic and kinetic patterns for human posture … and motion using the AnyBody Modeling System (AMS). AMS uses inverse dynamics to analyze musculoskeletal systems and is, therefore, limited by its dependency on input kinematics. We propose to alleviate this dependency by assuming that voluntary postures and movement strategies in humans are guided by a desire … specifications. The model is then scaled to the desired anthropometric data by means of one of the existing scaling laws in AMS. If the simulation results are to be compared with experimental measurements, the model should be scaled to match the involved subjects. Depending on the scientific question …

  3. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    Science.gov (United States)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high-quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics, and a perceptual model that transforms physical parameters using human visual system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICtCp), which consists of the PQ luminance non-linearity (ST 2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter were investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.
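
    The model comparison described here can be sketched in a few lines: fit a linear regression and an SVM regressor to (perceptually transformed) display parameters and score both with RMSE and Pearson/Spearman correlations. The data below are synthetic stand-ins for the subjective ratings; none of the study's actual measurements are reproduced.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in: rows = display configurations, columns = the five
# display parameters; y = mean subjective quality rating.
X = rng.normal(size=(60, 5))
y = 3.0 + X @ np.array([0.8, -0.4, 0.5, 0.2, 0.6]) + 0.3 * rng.normal(size=60)

train, test = slice(0, 45), slice(45, 60)
for name, model in [("linear", LinearRegression()), ("svm", SVR(C=10.0))]:
    pred = model.fit(X[train], y[train]).predict(X[test])
    rmse = np.sqrt(np.mean((pred - y[test]) ** 2))
    print(name, round(float(rmse), 3),
          round(pearsonr(pred, y[test])[0], 3),
          round(spearmanr(pred, y[test])[0], 3))
```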

  4. Multi-step-ahead Method for Wind Speed Prediction Correction Based on Numerical Weather Prediction and Historical Measurement Data

    Science.gov (United States)

    Wang, Han; Yan, Jie; Liu, Yongqian; Han, Shuang; Li, Li; Zhao, Jing

    2017-11-01

    Increasing the accuracy of wind speed prediction lays a solid foundation for the reliability of wind power forecasting. Most traditional correction methods for wind speed prediction establish a mapping relationship between the wind speed of the numerical weather prediction (NWP) and the historical measurement data (HMD) at the corresponding time slot, which ignores the time-dependent behavior of the wind speed time series. In this paper, a multi-step-ahead wind speed prediction correction method is proposed that considers the passing effects from the wind speed at the previous time slot. To this end, the proposed method employs both NWP and HMD as model inputs and training labels. First, a probabilistic analysis of the NWP deviation for different wind speed bins is carried out to illustrate the inadequacy of the traditional time-independent mapping strategy. Then, a support vector machine (SVM) is used as an example to implement the proposed mapping strategy and to establish the correction model for all the wind speed bins. A wind farm in northern China is taken as an example to validate the proposed method. Three benchmark methods of wind speed prediction are used to compare the performance. The results show that the proposed model has the best performance under different time horizons.
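
    A one-step version of the proposed mapping strategy can be sketched with scikit-learn: the regressor sees both the NWP forecast at time t and the measurement at t-1, unlike the traditional time-independent mapping. All series here are simulated, and the bin-wise modeling and multi-step recursion are omitted for brevity.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Simulated series: on-site measurements and a biased, noisy NWP forecast (m/s).
n = 500
measured = 8 + 2 * np.sin(np.arange(n) / 20) + rng.normal(0, 0.5, n)
nwp = measured + rng.normal(0.8, 1.0, n)

# Proposed mapping (sketch): predict the true speed at t from the NWP value
# at t and the previous measurement at t-1, so the correction carries the
# "passing effect" of the wind speed time series.
X = np.column_stack([nwp[1:], measured[:-1]])
y = measured[1:]

model = SVR(C=10.0, epsilon=0.1).fit(X[:400], y[:400])
corrected = model.predict(X[400:])

print("raw NWP RMSE:  ", np.sqrt(np.mean((nwp[401:] - y[400:]) ** 2)))
print("corrected RMSE:", np.sqrt(np.mean((corrected - y[400:]) ** 2)))
```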

  5. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space...... model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...

  6. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step forward for the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
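
    A sketch of the argument with a Gaussian process, one common statistical gas distribution model: the predictive variance allows held-out measurements to be scored by their log-likelihood, which is the evaluation route the abstract advocates. The data and kernel settings are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

# Illustrative gas-concentration readings at 2-D sensing locations.
X = rng.uniform(0, 10, size=(80, 2))
y = np.exp(-((X[:, 0] - 4) ** 2 + (X[:, 1] - 6) ** 2) / 8) \
    + 0.05 * rng.normal(size=80)

X_tr, X_te, y_tr, y_te = X[:60], X[60:], y[:60], y[60:]
gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.01)).fit(X_tr, y_tr)

# Predictive mean AND standard deviation at unseen locations.
mean, std = gp.predict(X_te, return_std=True)

# The predictive variance lets us score held-out data by log-likelihood,
# enabling model comparison without a ground-truth concentration map.
log_lik = -0.5 * np.log(2 * np.pi * std**2) - 0.5 * ((y_te - mean) / std) ** 2
print("mean predictive log-likelihood:", log_lik.mean())
```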

  7. Magnetic measurement of creep damage: modeling and measurement

    Science.gov (United States)

    Sablik, Martin J.; Jiles, David C.

    1996-11-01

    Results of inspection of creep damage by magnetic hysteresis measurements on Cr-Mo steel are presented. It is shown that structure-sensitive parameters such as coercivity, remanence and hysteresis loss are sensitive to creep damage. Previous metallurgical studies have shown that creep changes the microstructure of the material by introducing voids, dislocations, and grain boundary cavities. As cavities develop, dislocations and voids move out to grain boundaries; therefore, the total pinning sources for domain wall motion are reduced. This, together with the introduction of a demagnetizing field due to the cavities, results in the decrease of coercivity and remanence and hence, concomitantly, of hysteresis loss. Incorporating these structural effects into a magnetomechanical hysteresis model developed previously by us produces numerical variations of coercivity, remanence and hysteresis loss consistent with what is measured. The magnetic model has therefore been used to obtain appropriately modified magnetization curves for each element of creep-damaged material in a finite element (FE) calculation. The FE calculation has been used to simulate magnetic detection of non-uniform creep damage around a seam weld in a 2.25Cr-1Mo steam pipe. In particular, in the simulation, a magnetic C-core with primary and secondary coils was placed with its pole pieces flush against the specimen in the vicinity of the weld. The secondary emf was shown to be reduced when creep damage was present inside the pipe wall at the cusp of the weld and in the vicinity of the cusp. The calculation showed that the C-core detected creep damage best if it spanned the weld seam width and if the current in the primary was such that the C-core was not magnetically saturated. Experimental measurements also exhibited the dip predicted in emf, but the measurements are not yet conclusive because the effects of magnetic property changes of weld materials, heat-affected material, and base material have

  8. [Hyperspectrum based prediction model for nitrogen content of apple flowers].

    Science.gov (United States)

    Zhu, Xi-Cun; Zhao, Geng-Xing; Wang, Ling; Dong, Fang; Lei, Tong; Zhan, Bing

    2010-02-01

    The present paper aims to quantitatively retrieve the nitrogen content of apple flowers, so as to provide an important basis for information-based apple management. Using an ASD FieldSpec 3 field spectrometer, the hyperspectral reflectivity of 120 apple flower samples in the full-bloom stage was measured and their nitrogen contents were analyzed. Based on the original spectrum and first derivative spectral characteristics of the apple flowers, correlation analysis was carried out between the original and first derivative spectral reflectivity and the nitrogen contents, so as to determine the sensitive bands. Based on characteristic spectral parameters, prediction models were built, optimized and tested. The results indicated that the nitrogen content of apple flowers was very significantly negatively correlated with the original spectral reflectance in the 374-696, 1340-1890 and 2052-2433 nm regions, while in 736-913 nm they were very significantly positively correlated; the first derivative spectrum in 637-675 nm was very significantly negatively correlated, and in 676-746 nm very significantly positively correlated. All six spectral parameters established were significantly correlated with the nitrogen content of apple flowers. Through further comparison and selection, the prediction models built with the original spectral reflectance at 640 and 676 nm were determined to be the best for nitrogen content prediction of apple flowers. The test results showed that the coefficients of determination (R2) of the two models were 0.8258 and 0.8936, the total root mean square errors (RMSE) were 0.732 and 0.6386, and the slopes were 0.8361 and 1.0192, respectively. Therefore the models produced the desired results for nitrogen content prediction of apple flowers, with average prediction accuracies of 92.9% and 94.0%. This study provides a theoretical basis and technical support for rapid nitrogen content prediction and nutrition diagnosis of apple flowers.
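
    The final step of such a study, fitting a single-band linear model and reporting R2 and RMSE, reduces to ordinary least squares. The reflectance and nitrogen values below are fabricated for illustration only and do not reproduce the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fabricated calibration set: reflectance at the selected 640 nm band and
# measured nitrogen content (%), with the negative correlation noted above.
n_true = rng.uniform(2.0, 5.0, 40)
refl_640 = 0.45 - 0.05 * n_true + rng.normal(0, 0.01, 40)

# Single-band linear prediction model: nitrogen = slope * reflectance + intercept.
slope, intercept = np.polyfit(refl_640, n_true, 1)
pred = slope * refl_640 + intercept

ss_res = np.sum((n_true - pred) ** 2)
ss_tot = np.sum((n_true - n_true.mean()) ** 2)
print("R2:  ", 1 - ss_res / ss_tot)
print("RMSE:", np.sqrt(np.mean((n_true - pred) ** 2)))
```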

  9. Modeling a Predictive Energy Equation Specific for Maintenance Hemodialysis.

    Science.gov (United States)

    Byham-Gray, Laura D; Parrott, J Scott; Peters, Emily N; Fogerite, Susan Gould; Hand, Rosa K; Ahrens, Sean; Marcus, Andrea Fleisch; Fiutem, Justin J

    2017-03-01

    Hypermetabolism is theorized in patients diagnosed with chronic kidney disease who are receiving maintenance hemodialysis (MHD). We aimed to distinguish key disease-specific determinants of resting energy expenditure to create a predictive energy equation that more precisely establishes energy needs with the intent of preventing protein-energy wasting. For this 3-year multisite cross-sectional study (N = 116), eligible participants were diagnosed with chronic kidney disease and were receiving MHD for at least 3 months. Predictors for the model included weight, sex, age, C-reactive protein (CRP), glycosylated hemoglobin, and serum creatinine. The outcome variable was measured resting energy expenditure (mREE). Regression modeling was used to generate predictive formulas and Bland-Altman analyses to evaluate accuracy. The majority were male (60.3%), black (81.0%), and non-Hispanic (76.7%), and 23% were ≥65 years old. After screening for multicollinearity, the best predictive model of mREE (R2 = 0.67) included weight, age, sex, and CRP. Two alternative models with acceptable predictability (R2 = 0.66) were derived with glycosylated hemoglobin or serum creatinine. Based on Bland-Altman analyses, the maintenance hemodialysis equation that included CRP had the best precision, with the highest proportion of participants' predicted energy expenditure classified as accurate (61.2%) and with the lowest number of individuals with underestimation or overestimation. This study confirms disease-specific factors as key determinants of mREE in patients on MHD and provides a preliminary predictive energy equation. Further prospective research is necessary to test the reliability and validity of this equation across diverse populations of patients who are receiving MHD.
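
    The Bland-Altman analysis used to judge the equations' precision is simple to reproduce in outline: compute the bias and 95% limits of agreement between measured and predicted REE. The paired values and the ±10% accuracy criterion below are hypothetical.

```python
import numpy as np

# Hypothetical paired values: measured vs. equation-predicted REE (kcal/d).
mree = np.array([1450, 1600, 1380, 1720, 1550, 1490, 1630, 1410], float)
pree = np.array([1480, 1570, 1420, 1690, 1600, 1460, 1650, 1380], float)

# Bland-Altman statistics: bias and 95% limits of agreement.
diff = pree - mree
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.1f} kcal/d, "
      f"limits of agreement = [{bias - loa:.1f}, {bias + loa:.1f}]")

# A hypothetical accuracy criterion: prediction within ±10% of measured REE.
accurate = np.abs(diff) <= 0.10 * mree
print("accurate predictions:", accurate.mean() * 100, "%")
```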

  10. Novel genetic markers improve measures of atrial fibrillation risk prediction

    Science.gov (United States)

    Everett, Brendan M.; Cook, Nancy R.; Conen, David; Chasman, Daniel I.; Ridker, Paul M; Albert, Christine M.

    2013-01-01

    Aims Atrial fibrillation (AF) is associated with adverse outcomes. Whether recently discovered genetic risk markers improve AF risk prediction is unknown. Methods and results We derived and validated a novel AF risk prediction model from 32 possible predictors in the Women's Health Study (WHS), a cohort of 20 822 women without cardiovascular disease (CVD) at baseline followed prospectively for incident AF (median: 14.5 years). We then created a genetic risk score (GRS) composed of 12 risk alleles in nine loci and assessed model performance in the validation cohort with and without the GRS. The newly derived WHS AF risk algorithm included terms for age, weight, height, systolic blood pressure, alcohol use, and smoking (current and past). In the validation cohort, this model was well calibrated with good discrimination [C-index (95% CI) = 0.718 (0.684–0.753)] and improved all reclassification indices when compared with age alone. The addition of the genetic score to the WHS AF risk algorithm improved the C-index [0.741 (0.709–0.774); P = 0.001], the category-less net reclassification [0.490 (0.301–0.670); P < 0.0001], and the integrated discrimination improvement [0.00526 (0.0033–0.0076); P < 0.0001]. However, there was no improvement in net reclassification into 10-year risk categories of <1, 1–5, and 5+% [0.041 (−0.044–0.12); P = 0.33]. Conclusion Among women without CVD, a simple risk prediction model utilizing readily available risk markers identified women at higher risk for AF. The addition of genetic information resulted in modest improvements in predictive accuracy that did not translate into improved reclassification into discrete AF risk categories. PMID:23444395

  11. Software Measurement and Defect Prediction with Depress Extensible Framework

    Directory of Open Access Journals (Sweden)

    Madeyski Lech

    2014-12-01

    Full Text Available Context. Software data collection precedes analysis which, in turn, requires data-science-related skills. Software defect prediction is hardly used in industrial projects as a quality assurance and cost reduction means. Objectives. There are many studies and several tools which help in various data analysis tasks, but there is still neither an open source tool nor a standardized approach. Results. We developed Defect Prediction for software systems (DePress), an extensible software measurement and data integration framework which can be used for prediction purposes (e.g., defect prediction, effort prediction) and software change analysis (e.g., release notes, bug statistics, commit quality). DePress is based on the KNIME project and allows building workflows in a graphic, end-user-friendly manner. Conclusions. We present the main concepts, as well as the development state, of the DePress framework. The results show that DePress can be used in open source, as well as in industrial, project analysis.

  12. Predictive modeling: potential application in prevention services.

    Science.gov (United States)

    Wilson, Moira L; Tumen, Sarah; Ota, Rissa; Simmers, Anthony G

    2015-05-01

    In 2012, the New Zealand Government announced a proposal to introduce predictive risk models (PRMs) to help professionals identify and assess children at risk of abuse or neglect as part of a preventive early intervention strategy, subject to further feasibility study and trialing. The purpose of this study is to examine technical feasibility and predictive validity of the proposal, focusing on a PRM that would draw on population-wide linked administrative data to identify newborn children who are at high priority for intensive preventive services. Data analysis was conducted in 2013 based on data collected in 2000-2012. A PRM was developed using data for children born in 2010 and externally validated for children born in 2007, examining outcomes to age 5 years. Performance of the PRM in predicting administratively recorded substantiations of maltreatment was good compared to the performance of other tools reviewed in the literature, both overall, and for indigenous Māori children. Some, but not all, of the children who go on to have recorded substantiations of maltreatment could be identified early using PRMs. PRMs should be considered as a potential complement to, rather than a replacement for, professional judgment. Trials are needed to establish whether risks can be mitigated and PRMs can make a positive contribution to frontline practice, engagement in preventive services, and outcomes for children. Deciding whether to proceed to trial requires balancing a range of considerations, including ethical and privacy risks and the risk of compounding surveillance bias. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  13. Prediction of Geological Subsurfaces Based on Gaussian Random Field Models

    Energy Technology Data Exchange (ETDEWEB)

    Abrahamsen, Petter

    1997-12-31

    During the sixties, random functions became practical tools for predicting ore reserves with associated precision measures in the mining industry. This was the start of the geostatistical methods called kriging. These methods are used, for example, in petroleum exploration. This thesis reviews the possibilities for using Gaussian random functions in modelling of geological subsurfaces. It develops methods for including many sources of information and observations for precise prediction of the depth of geological subsurfaces. The simple properties of Gaussian distributions make it possible to calculate optimal predictors in the mean square sense. This is done in a discussion of kriging predictors. These predictors are then extended to deal with several subsurfaces simultaneously. It is shown how additional velocity observations can be used to improve predictions. The use of gradient data and even higher order derivatives are also considered and gradient data are used in an example. 130 refs., 44 figs., 12 tabs.
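
    A minimal simple-kriging predictor illustrates the "optimal predictor in the mean square sense" discussed in the thesis: the weights come from solving the observation covariance system, and the kriging variance is the associated precision measure. The covariance model and data are illustrative, and the thesis's extensions (multiple subsurfaces, velocity and gradient data) are omitted.

```python
import numpy as np

def simple_kriging(x_obs, y_obs, x_new, sill=1.0, corr_len=2.0):
    """Simple kriging in 1-D with a Gaussian covariance model (illustrative
    parameters). Returns the mean-square-optimal linear predictor at x_new
    and its kriging variance."""
    cov = lambda h: sill * np.exp(-(h / corr_len) ** 2)
    K = cov(np.abs(x_obs[:, None] - x_obs[None, :]))  # obs-obs covariance
    k = cov(np.abs(x_obs - x_new))                    # obs-target covariance
    w = np.linalg.solve(K, k)                         # kriging weights
    m = y_obs.mean()  # plug-in mean; simple kriging assumes a known mean
    return m + w @ (y_obs - m), sill - w @ k          # predictor, variance

# Hypothetical subsurface depth observations along a transect (km, km depth).
x = np.array([0.0, 1.0, 2.5, 4.0])
depth = np.array([1.2, 1.4, 1.1, 0.9])
print(simple_kriging(x, depth, 1.8))
```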

  14. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    Full Text Available This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the questions of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures, and evaluating the performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data are presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold: first, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and secondly, theoretical, in terms of identifying factors responsible for the high growth of small and medium-sized companies.
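
    The core workflow the paper walks through (fitting the logistic model, reading odds ratios as effect measures, and reporting model strength via a classification table and ROC curve) looks roughly like this with scikit-learn; the financial-ratio data here are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(4)

# Simulated financial ratios (e.g., liquidity, leverage, profitability)
# and a binary high-growth indicator for a sample of companies.
X = rng.normal(size=(300, 3))
logit = 1.5 * X[:, 2] - 0.8 * X[:, 1] + 0.3
y = (rng.uniform(size=300) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)

# Odds ratios as effect measures: exponentiate the fitted coefficients.
print("odds ratios:", np.exp(model.coef_).round(2))

# Model strength: classification table and area under the ROC curve.
print(confusion_matrix(y, model.predict(X)))
print("AUC:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))
```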

  15. Parathyroid Hormone Measurement in Prediction of Hypocalcaemia following Thyroidectomy

    International Nuclear Information System (INIS)

    Mehrvarz, S.; Mohebbi, H. A.; Motamedi, M. H. K.; Khatami, S. M.; Reazie, R.; Rasouli, H. R.

    2014-01-01

    Objective: To determine the risk of post-thyroidectomy hypocalcaemia by measuring the parathyroid hormone (PTH) level after thyroidectomy. Study Design: Cross-sectional study. Place and Duration of Study: Baqiyatallah Hospital, Tehran, Iran, from March 2008 to July 2010. Methodology: All included patients were referred for total or near-total thyroidectomy. Serum calcium (Ca) and PTH levels were measured before and 24 hours after surgery. In cases of low Ca or development of hypocalcaemia symptoms, daily monitoring of Ca levels was continued. Data were analyzed using SPSS 20 software (SPSS, Chicago, IL, USA). A p-value less than 0.05 was considered statistically significant. To assess the standard value of useful predictive factors, receiver operating characteristic (ROC) curves were used. Results: Of the 99 patients who underwent bilateral thyroidectomy, 47 (47.5%) developed hypocalcaemia; of these, 12 (25.5%) became symptomatic, while 2 patients developed permanent hypoparathyroidism. After surgery, the mean rank of the PTH level in the normocalcaemic and hypocalcaemic patients was 55.34 and 44.1, respectively (p=0.052). Twenty-four hours after surgery, a 62% drop in PTH was associated with 83.3% of symptomatic hypocalcaemia cases. For the diagnosis of symptomatic hypocalcaemia, a 62% PTH drop had a sensitivity of 83.3% and a specificity of 90.8%. The areas under the ROC curve for the postoperative PTH and the PTH drop in diagnosing symptomatic hypocalcaemia were 0.835 and 0.873, respectively. Conclusion: Measuring the PTH level 24 hours post-thyroidectomy is not by itself a reliable factor for predicting hypocalcaemia. For predicting the risk of hypocalcaemia after thyroidectomy, it is more reliable to measure the serum PTH level before and after the operation and use the percentage drop in PTH. (author)

  16. Short ECG segments predict defibrillation outcome using quantitative waveform measures.

    Science.gov (United States)

    Coult, Jason; Sherman, Lawrence; Kwok, Heemun; Blackwood, Jennifer; Kudenchuk, Peter J; Rea, Thomas D

    2016-12-01

    Quantitative waveform measures of the ventricular fibrillation (VF) electrocardiogram (ECG) predict defibrillation outcome. Calculation requires an ECG epoch without chest compression artifact. However, pauses in CPR can adversely affect survival. Thus the potential use of waveform measures is limited by the need to pause CPR. We sought to characterize the relationship between the length of the CPR-free epoch and the ability to predict outcome. We conducted a retrospective investigation using the CPR-free ECG prior to first shock among out-of-hospital VF cardiac arrest patients in a large metropolitan region (n=442). Amplitude Spectrum Area (AMSA) and Median Slope (MS) were calculated using ECG epochs ranging from 5s to 0.2s. The relative ability of the measures to predict return of organized rhythm (ROR) and neurologically-intact survival was evaluated at different epoch lengths by calculating the area under the receiver operating characteristic curve (AUC) using the 5-s epoch as the referent group. Compared to the 5-s epoch, AMSA performance declined significantly only after reducing epoch length to 0.2s for ROR (AUC 0.77-0.74, p=0.03) and with epochs of ≤0.6s for neurologically-intact survival (AUC 0.72-0.70, p=0.04). MS performance declined significantly with epochs of ≤0.8s for ROR (AUC 0.78-0.77, p=0.04) and with epochs ≤1.6s for neurologically-intact survival (AUC 0.72-0.71, p=0.04). Waveform measures predict defibrillation outcome using very brief ECG epochs, a quality that may enable their use in current resuscitation algorithms designed to limit CPR interruption. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
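
    AMSA is commonly computed as the frequency-weighted sum of the ECG amplitude spectrum inside a physiologic band, which is why it remains computable from very short epochs. Exact band edges and windowing vary between studies, so the sketch below is illustrative rather than the paper's implementation.

```python
import numpy as np

def amsa(ecg_epoch: np.ndarray, fs: float, f_lo=2.0, f_hi=48.0) -> float:
    """Amplitude Spectrum Area of a VF epoch: FFT amplitudes weighted by
    their frequencies, summed over a physiologic band (band edges vary
    between studies; these values are assumptions)."""
    windowed = ecg_epoch * np.hanning(len(ecg_epoch))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(ecg_epoch), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(spectrum[band] * freqs[band]))

# Hypothetical 0.6-s epoch at 250 Hz: even very brief epochs yield a value.
fs = 250.0
t = np.arange(0, 0.6, 1 / fs)
vf_like = np.sin(2 * np.pi * 5 * t) + 0.4 * np.sin(2 * np.pi * 9 * t)
print(amsa(vf_like, fs))
```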

  17. A Computational Model for Predicting Gas Breakdown

    Science.gov (United States)

    Gill, Zachary

    2017-10-01

    Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regard to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.

  18. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  19. Statistical models for expert judgement and wear prediction

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1994-01-01

    This thesis studies the statistical analysis of expert judgements and the prediction of wear. The point of view adopted is that of information theory and Bayesian statistics. A general Bayesian framework for analyzing both expert judgements and wear prediction is presented. Information theoretic interpretations are given for some averaging techniques used in the determination of consensus distributions. Further, information theoretic models are compared with a Bayesian model. The general Bayesian framework is then applied in analyzing expert judgements based on ordinal comparisons. In this context, the value of information lost in the ordinal comparison process is analyzed by applying decision theoretic concepts. As a generalization of the Bayesian framework, stochastic filtering models for wear prediction are formulated. These models utilize the information from condition monitoring measurements in updating the residual life distribution of mechanical components. Finally, the application of stochastic control models in optimizing operational strategies for inspected components is studied. Monte Carlo simulation methods, such as the Gibbs sampler and the stochastic quasi-gradient method, are applied in the determination of posterior distributions and in the solution of stochastic optimization problems. (orig.) (57 refs., 7 figs., 1 tab.)

  20. Predicting target displacements using ultrasound elastography and finite element modeling.

    Science.gov (United States)

    op den Buijs, Jorn; Hansen, Hendrik H G; Lopata, Richard G P; de Korte, Chris L; Misra, Sarthak

    2011-11-01

    Soft tissue displacements during minimally invasive surgical procedures may cause target motion and subsequent misplacement of the surgical tool. A technique is presented to predict target displacements using a combination of ultrasound elastography and finite element (FE) modeling. A cubic gelatin/agar phantom with stiff targets was manufactured to obtain pre- and post-loading ultrasound radio frequency (RF) data from a linear array transducer. The RF data were used to compute displacement and strain images, from which the distribution of elasticity was reconstructed using an inverse FE-based approach. The FE model was subsequently used to predict target displacements upon application of different boundary and loading conditions to the phantom. The influence of geometry was investigated by applying the technique to a breast-shaped phantom. The distribution of elasticity in the phantoms as determined from the strain distribution agreed well with results from mechanical testing. Upon application of different boundary and loading conditions to the cubic phantom, the target motions predicted by the FE model were consistent with ultrasound measurements. The FE-based approach could also accurately predict the displacement of the target upon compression and indentation of the breast-shaped phantom. This study provides experimental evidence that organ geometry and boundary conditions surrounding the organ are important factors influencing target motion. In future work, the technique presented in this paper could be used for preoperative planning of minimally invasive surgical interventions.

  1. Effect of misreported family history on Mendelian mutation prediction models.

    Science.gov (United States)

    Katki, Hormuzd A

    2006-06-01

    People with a family history of disease often consult genetic counselors about their chance of carrying mutations that increase disease risk. To aid them, genetic counselors use Mendelian models that predict whether the person carries deleterious mutations based on their reported family history. Such models rely on accurate reporting of each member's diagnosis and age of diagnosis, but this information may be inaccurate. Commonly encountered errors in family history can significantly distort predictions, and thus can alter the clinical management of people undergoing counseling, screening, or genetic testing. We derive general results about the distortion in the carrier probability estimate caused by misreported diagnoses in relatives. We show that the Bayes factor that channels all family history information has a convenient and intuitive interpretation. We focus on the ratio of the carrier odds given a correct diagnosis versus a misreported diagnosis to measure the impact of errors. We derive the general form of this ratio and approximate it in realistic cases. Misreported age of diagnosis usually causes less distortion than a misreported diagnosis. This is the first systematic quantitative assessment of the effect of misreported family history on mutation prediction. We apply the results to the BRCAPRO model, which predicts the risk of carrying a mutation in the breast and ovarian cancer genes BRCA1 and BRCA2.
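
    The structure described here can be shown in a few lines: family history enters the carrier probability only through a Bayes factor multiplying the prior odds, so a misreported diagnosis acts through the ratio of the two Bayes factors. All numbers below are hypothetical.

```python
# Posterior carrier probability from prior odds times the Bayes factor
# carrying the family-history information; values are hypothetical.
prior_carrier = 0.01

def posterior_probability(prior: float, bayes_factor: float) -> float:
    """Convert prior probability to posterior via odds * Bayes factor."""
    odds = prior / (1 - prior) * bayes_factor
    return odds / (1 + odds)

bf_correct = 40.0     # likelihood ratio given the true family history
bf_misreported = 8.0  # likelihood ratio given a misreported diagnosis

p_true = posterior_probability(prior_carrier, bf_correct)
p_err = posterior_probability(prior_carrier, bf_misreported)

# The ratio of carrier odds (correct vs. misreported) measures the impact.
odds = lambda p: p / (1 - p)
print(p_true, p_err, odds(p_true) / odds(p_err))  # ratio = 40/8 = 5
```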

  2. Classification models for the prediction of clinicians' information needs.

    Science.gov (United States)

    Del Fiol, Guilherme; Haug, Peter J

    2009-02-01

    Clinicians face numerous information needs during patient care activities and most of these needs are not met. Infobuttons are information retrieval tools that help clinicians to fulfill their information needs by providing links to on-line health information resources from within an electronic medical record (EMR) system. The aim of this study was to produce classification models based on medication infobutton usage data to predict the medication-related content topics (e.g., dose, adverse effects, drug interactions, patient education) that a clinician is most likely to choose while entering medication orders in a particular clinical context. We prepared a dataset with 3078 infobutton sessions and 26 attributes describing characteristics of the user, the medication, and the patient. In these sessions, users selected one out of eight content topics. Automatic attribute selection methods were then applied to the dataset to eliminate redundant and useless attributes. The reduced dataset was used to produce nine classification models from a set of state-of-the-art machine learning algorithms. Finally, the performance of the models was measured and compared. Performance was measured as the area under the ROC curve (AUC) and the agreement (kappa) between the content topics predicted by the models and those chosen by clinicians in each infobutton session. The performance of the models ranged from 0.49 to 0.56 (kappa). The AUC of the best model ranged from 0.73 to 0.99. The best performance was achieved when predicting choice of the adult dose, pediatric dose, patient education, and pregnancy category content topics. The results suggest that classification models based on infobutton usage data are a promising method for the prediction of content topics that a clinician would choose to answer patient care questions while using an EMR system.

  3. Which method predicts recidivism best?: A comparison of statistical, machine learning, and data mining predictive models

    OpenAIRE

    Tollenaar, N.; van der Heijden, P.G.M.

    2012-01-01

    Using criminal population conviction histories of recent offenders, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining and machine learning provide an improvement in predictive performance over classical statistical methods, namely logistic regression and linear discriminant analysis. These models are compared ...

  4. A multifocal electroretinogram model predicting the development of diabetic retinopathy.

    Science.gov (United States)

    Bearse, Marcus A; Adams, Anthony J; Han, Ying; Schneck, Marilyn E; Ng, Jason; Bronson-Castain, Kevin; Barez, Shirin

    2006-09-01

    The prevalence of diabetes has been accelerating at an alarming rate in the last decade; some describe it as an epidemic. Diabetic eye complications are the leading cause of blindness in adults aged 25-74 in the United States. Early diagnosis and development of effective preventatives and treatments of diabetic retinopathy are essential to save sight. We describe efforts to establish functional indicators of retinal health and predictors of diabetic retinopathy. These indicators and predictors will be needed as markers of the efficacy of new therapies. Clinical trials aimed at either prevention or early treatments will rely heavily on the discovery of sensitive methods to identify patients and retinal locations at risk, as well as to evaluate treatment effects. We report on recent success in revealing local functional changes of the retina with the multifocal electroretinogram (mfERG). This objective measure allows the simultaneous recording of responses from over 100 small retinal patches across the central 45 degrees field. We describe the sensitivity of mfERG implicit time measurement for revealing functional alterations of the retina in diabetes, the local correspondence between functional (mfERG) and structural (vascular) abnormalities in eyes with early nonproliferative retinopathy, and longitudinal studies to formulate models to predict the retinal sites of future retinopathic signs. A multivariate model including mfERG implicit time delays and 'person' risk factors achieved 86% sensitivity and 84% specificity for prediction of new retinopathy development over one year at specific locations in eyes with some retinopathy at baseline. A preliminary test of the model yielded very positive results. This model appears to be the first to predict, quantitatively, the retinal locations of new nonproliferative diabetic retinopathy development over a one-year period. In a separate study, the predictive power of a model was assessed over one- and two-year follow

  5. Validation of Water Erosion Prediction Project (WEPP) model for low-volume forest roads

    Science.gov (United States)

    William Elliot; R. B. Foltz; Charlie Luce

    1995-01-01

    Erosion rates of recently graded nongravel forest roads were measured under rainfall simulation on five different soils. The erosion rates observed on 24 forest road erosion plots were compared with values predicted by the Water Erosion Prediction Project (WEPP) Model, Version 93.1. Hydraulic conductivity and soil erodibility values were predicted from methods...

  6. Measuring Collective Efficacy: A Multilevel Measurement Model for Nested Data

    Science.gov (United States)

    Matsueda, Ross L.; Drakulich, Kevin M.

    2016-01-01

    This article specifies a multilevel measurement model for survey response when data are nested. The model includes a test-retest model of reliability, a confirmatory factor model of inter-item reliability with item-specific bias effects, an individual-level model of the biasing effects due to respondent characteristics, and a neighborhood-level…

  7. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...

  8. Predicting Visual Disability in Glaucoma With Combinations of Vision Measures.

    Science.gov (United States)

    Lin, Stephanie; Mihailovic, Aleksandra; West, Sheila K; Johnson, Chris A; Friedman, David S; Kong, Xiangrong; Ramulu, Pradeep Y

    2018-04-01

    We characterized vision in glaucoma using seven visual measures, with the goals of determining the dimensionality of vision and how many and which visual measures best model activity limitation. We analyzed cross-sectional data from 150 older adults with glaucoma, collecting seven visual measures: integrated visual field (IVF) sensitivity, visual acuity, contrast sensitivity (CS), area under the log CS function, color vision, stereoacuity, and visual acuity with noise. Principal component analysis was used to examine the dimensionality of vision. Multivariable regression models using one, two, or three vision tests (and nonvisual predictors) were compared to determine which was best associated with Rasch-analyzed Glaucoma Quality of Life-15 (GQL-15) person measure scores. The participants had a mean age of 70.2 and an IVF sensitivity of 26.6 dB, suggesting mild-to-moderate glaucoma. All seven vision measures loaded similarly onto the first principal component (eigenvectors, 0.220-0.442), which explained 56.9% of the variance in vision scores. In models for GQL scores, the maximum adjusted R2 values obtained were 0.263, 0.296, and 0.301 when using one, two, and three vision tests in the models, respectively, though several models in each category had similar adjusted R2 values. All three of the best-performing models contained CS. Vision in glaucoma is a multidimensional construct that can be described by several variably correlated vision measures. Measuring more than two vision tests does not substantially improve models for activity limitation. A sufficient description of disability in glaucoma can be obtained using one or two vision tests, especially VF and CS.
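
    The dimensionality question is answered with an ordinary principal component analysis of the seven test scores. The sketch below uses simulated scores with one shared factor, so the loading and variance figures will differ from those reported above.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)

# Simulated standardized scores on seven vision tests for 150 subjects:
# one shared "vision" factor plus test-specific noise.
factor = rng.normal(size=(150, 1))
loadings = rng.uniform(0.5, 1.0, size=(1, 7))
scores = factor @ loadings + 0.6 * rng.normal(size=(150, 7))

pca = PCA().fit(scores)

# If vision were close to one-dimensional, the first component would
# dominate and all seven tests would load on it with similar weight.
print("variance explained:", pca.explained_variance_ratio_.round(3))
print("first-component loadings:", pca.components_[0].round(3))
```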

  9. Decay heat measurements and predictions of BWR spent fuel

    International Nuclear Information System (INIS)

    McKinnon, M.A.; Heeb, C.M.; Creer, A.M.

    1986-06-01

    Pre-calorimetry decay heat predictions obtained with the ORIGEN2 computer code were compared to calorimeter data obtained for eleven boiling water reactor (BWR) spent fuel assemblies using General Electric, Morris Operation's in-pool calorimeter. Ten of the 7 x 7 BWR spent fuel assemblies were obtained from Nebraska Public Power District's Cooper Nuclear Station. The remaining BWR assembly was from Commonwealth Edison's Dresden Nuclear Power Plant. The assemblies had burnups ranging from 5.3 to 27.6 GWd/MTU and had been removed from their respective reactors for 2 or more years. The majority of the assemblies had burnups of between 20 and 28 GWd/MTU and had been out of the reactor 2 to 4 years. The assemblies represent spent fuel that has been continuously burned and fuel that has been reinserted. Comparisons of ORIGEN2 pre-calorimetry decay heat predictions with calorimeter data showed that predictions agreed with data within the precision/repeatability of the experimental data (±15 W, or 5% for a 300 W BWR assembly). Comparisons of predicted axial gamma profiles based on core-averaged axial burnups with measured profiles showed differences. These differences may be explained by reactor operation with partially inserted control rods

  10. On the predictiveness of single-field inflationary models

    Science.gov (United States)

    Burgess, C. P.; Patil, Subodh P.; Trott, Michael

    2014-06-01

    We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflation scenario, which is arguably the most predictive single-field model on the market, because its predictions for A_s, r and n_s are made using only one new free parameter beyond those measured in particle physics experiments and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the renormalization group allows Higgs Inflation to occur (in principle) for a slightly larger range of Higgs masses. We comment on the origin of the various UV scales that arise at large field values for the SM Higgs, clarifying cutoff scale arguments by further developing the formalism of a non-linear realization of SU_L(2) × U(1) in curved space. We discuss the interesting fact that, outside of Higgs Inflation, the effect of a non-minimal coupling to gravity, even in the SM, results in a non-linear EFT for the Higgs sector. Finally, we briefly comment on post-BICEP2 attempts to modify the Higgs Inflation scenario.

  11. Regional differences in prediction models of lung function in Germany

    Directory of Open Access Journals (Sweden)

    Schäper Christoph

    2010-04-01

    Full Text Available Abstract Background Little is known about the influencing potential of specific characteristics on lung function in different populations. The aim of this analysis was to determine whether lung function determinants differ between subpopulations within Germany and whether prediction equations developed for one subpopulation are also adequate for another subpopulation. Methods Within three studies (KORA C, SHIP-I, ECRHS-I) in different areas of Germany, 4059 adults performed lung function tests. The available data consisted of forced expiratory volume in one second (FEV1), forced vital capacity (FVC) and peak expiratory flow rate (PEF). For each study, multivariate regression models were developed to predict lung function, and Bland-Altman plots were established to evaluate the agreement between predicted and measured values. Results The final regression equations for FEV1 and FVC showed adjusted r-square values between 0.65 and 0.75, and for PEF they were between 0.46 and 0.61. In all studies gender, age, height and pack-years were significant determinants, each with a similar effect size. Regarding other predictors there were some, although not statistically significant, differences between the studies. Bland-Altman plots indicated that the regression models for each individual study adequately predict medium (i.e. normal but not extremely high or low) lung function values in the whole study population. Conclusions Simple models with gender, age and height explain a substantial part of lung function variance, whereas further determinants add less than 5% to the total explained r-squared, at least for FEV1 and FVC. Thus, for different adult subpopulations of Germany, one simple model for each lung function measure is still sufficient.

  12. Prediction Model for Relativistic Electrons at Geostationary Orbit

    Science.gov (United States)

    Khazanov, George V.; Lyatsky, Wladislaw

    2008-01-01

    We developed a new prediction model for forecasting relativistic (greater than 2 MeV) electrons, which provides a very high correlation between predicted and actually measured electron fluxes at geostationary orbit. This model implies multi-step particle acceleration and is based on numerically integrating two linked continuity equations for primarily accelerated particles and relativistic electrons. The model includes a source and losses, and uses solar wind data as the only input parameters. As a source, we used a coupling function which is a best-fit combination of solar wind/interplanetary magnetic field parameters responsible for the generation of geomagnetic activity. The loss function was derived from experimental data. We tested the model for the four-year period 2004-2007. The correlation coefficient between predicted and actual values of the electron fluxes for the whole four-year period, as well as for each of these years, is stable and remarkably high (about 0.9). The high and stable correlation between the computed and actual electron fluxes shows that reliable forecasting of these electrons at geostationary orbit is possible.

  13. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions and thus, they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  14. Measurement and prediction of post-fire erosion at the hillslope scale, Colorado Front Range

    Science.gov (United States)

    Juan de Dios Benavides-Solorio; Lee H. MacDonald

    2005-01-01

    Post-fire soil erosion is of considerable concern because of the potential decline in site productivity and adverse effects on downstream resources. For the Colorado Front Range there is a paucity of post-fire erosion data and a corresponding lack of predictive models. This study measured hillslope-scale sediment production rates and site characteristics for three wild...

  15. Predictions of protein flexibility: first-order measures.

    Science.gov (United States)

    Kovacs, Julio A; Chacón, Pablo; Abagyan, Ruben

    2004-09-01

    The normal modes of a molecule are utilized, in conjunction with classical conformal vector field theory, to define a function that measures the capability of the molecule to deform at each of its residues. An efficient algorithm is presented to calculate the local chain deformability from the set of normal modes of vibration. This is done by considering each mode as an off-grid sample of a deformation vector field. Predictions of deformability are compared with experimental data in the form of dihedral angle differences between two conformations of ten kinases by using a modified correlation function. Deformability calculations correlate well with experimental results and validate the applicability of this method to protein flexibility predictions. Copyright 2004 Wiley-Liss, Inc.

  16. Model Predictive Control for an Industrial SAG Mill

    DEFF Research Database (Denmark)

    Ohan, Valeriu; Steinke, Florian; Metzger, Michael

    2012-01-01

    We discuss Model Predictive Control (MPC) based on ARX models and a simple lower-order disturbance model. The advantage of this MPC formulation is that it has few tuning parameters and is based on an ARX prediction model that can readily be identified using standard technologies from system identic...
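
    The appeal of ARX models for MPC, alluded to above, is that identification reduces to linear least squares and prediction to a simple recursion. A minimal first-order sketch (not the paper's mill model) follows.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate a hypothetical first-order ARX process:
#   y[k] = a*y[k-1] + b*u[k-1] + e[k]
a_true, b_true = 0.9, 0.5
n = 200
u = rng.normal(size=n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.05 * rng.normal()

# ARX identification is linear least squares on the regressor matrix,
# which is what makes this model class convenient for MPC design.
Phi = np.column_stack([y[:-1], u[:-1]])
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
print(a_hat, b_hat)  # close to 0.9 and 0.5

# The identified model then supplies the MPC prediction over the horizon:
def predict(y0, u_seq):
    preds, yk = [], y0
    for uk in u_seq:
        yk = a_hat * yk + b_hat * uk
        preds.append(yk)
    return preds

print(predict(y[-1], [0.1, 0.2, 0.0]))
```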

  17. Uncertainties in spatially aggregated predictions from a logistic regression model

    NARCIS (Netherlands)

    Horssen, P.W. van; Pebesma, E.J.; Schot, P.P.

    2002-01-01

    This paper presents a method to assess the uncertainty of an ecological spatial prediction model which is based on logistic regression models, using data from the interpolation of explanatory predictor variables. The spatial predictions are presented as approximate 95% prediction intervals. The

  18. Dealing with missing predictor values when applying clinical prediction models.

    NARCIS (Netherlands)

    Janssen, K.J.; Vergouwe, Y.; Donders, A.R.T.; Harrell Jr, F.E.; Chen, Q.; Grobbee, D.E.; Moons, K.G.

    2009-01-01

    BACKGROUND: Prediction models combine patient characteristics and test results to predict the presence of a disease or the occurrence of an event in the future. In the event that test results (predictor) are unavailable, a strategy is needed to help users applying a prediction model to deal with

  19. Predictive model for determining the quality of a call

    Science.gov (United States)

    Voznak, M.; Rozhon, J.; Partila, P.; Safarik, J.; Mikulec, M.; Mehic, M.

    2014-05-01

    In this paper a predictive model for speech quality estimation is described. This model allows its user to obtain information about the speech quality in VoIP networks without the need to perform an actual call and the subsequent time-consuming sound-file evaluation. This greatly increases the usability of speech quality measurement, especially in highly loaded networks, where the actual processing of all calls is rendered difficult or even impossible. The model can reach results that are highly conformant with the PESQ algorithm based only on network state parameters that are easily obtainable with commonly used software tools. Experiments were carried out to investigate whether different languages (English, Czech) have an effect on perceived voice quality for the same network conditions, and the language factor was incorporated directly into the model.
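
    A sketch of this idea: train a regressor on network-state parameters labeled with PESQ scores from recorded calls, then estimate MOS for live calls from those parameters alone. The features, coefficients, and model choice below are hypothetical; this is not the authors' model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

# Hypothetical training data: network-state parameters observable without
# decoding audio (packet loss %, jitter ms, one-way delay ms) and the MOS
# that PESQ assigned to the corresponding recorded call.
loss = rng.uniform(0, 10, 400)
jitter = rng.uniform(0, 60, 400)
delay = rng.uniform(20, 400, 400)
mos = np.clip(4.4 - 0.25 * loss - 0.015 * jitter - 0.002 * delay
              + 0.1 * rng.normal(size=400), 1.0, 4.5)

X = np.column_stack([loss, jitter, delay])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, mos)

# Estimate call quality for a live call from its network counters alone.
print(model.predict([[2.0, 15.0, 80.0]]))  # predicted MOS, no audio needed
```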

  20. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on an artificial neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering its benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting models that occupy large computation resources, are time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  1. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences exhibit a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
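
    For orientation, the classic GM(1,1) model, of which NGM(1,1,k,c) is a nonhomogeneous extension, can be implemented in a few lines: accumulate the series, fit the whitenization equation by least squares, and difference back. The settlement record below is invented, and the NGM-specific k and c terms are not included.

```python
import numpy as np

def gm11_forecast(x0: np.ndarray, steps: int) -> np.ndarray:
    """Classic GM(1,1) grey model: fit dx1/dt + a*x1 = b on the
    accumulated series and forecast future raw values."""
    n = len(x0)
    x1 = np.cumsum(x0)                     # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])           # background values
    B = np.column_stack([-z, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0)[n:]  # inverse accumulation -> forecasts

# Invented settlement record (mm) at equal time intervals:
settlement = np.array([12.1, 13.6, 15.0, 16.2, 17.5, 18.4])
print(gm11_forecast(settlement, 3))
```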

  2. Predictive capabilities of various constitutive models for arterial tissue.

    Science.gov (United States)

    Schroeder, Florian; Polzer, Stanislav; Slažanský, Martin; Man, Vojtěch; Skácel, Pavel

    2018-02-01

    The aim of this study is to validate some constitutive models by assessing their capabilities in describing and predicting the uniaxial and biaxial behavior of porcine aortic tissue. 14 samples from porcine aortas were used to perform 2 uniaxial and 5 biaxial tensile tests. Transversal strains were also recorded for the uniaxial data. The experimental data were fitted by four constitutive models: the Holzapfel-Gasser-Ogden model (HGO), a model based on the generalized structure tensor (GST), the Four-Fiber-Family model (FFF) and the Microfiber model. Fitting was performed on the uniaxial and biaxial data sets separately and the descriptive capabilities of the models were compared. Their predictive capabilities were assessed in two ways. First, each model was fitted to the biaxial data and its accuracy (in terms of R2 and NRMSE) in predicting both uniaxial responses was evaluated. Then this procedure was performed conversely: each model was fitted to both uniaxial tests and its accuracy in predicting the 5 biaxial responses was observed. The descriptive capabilities of all models were excellent. In predicting the uniaxial response from biaxial data, the Microfiber model was the most accurate, while the other models also showed reasonable accuracy. The Microfiber and FFF models were capable of reasonably predicting biaxial responses from uniaxial data, while the HGO and GST models failed completely in this task. The HGO and GST models are not capable of predicting biaxial arterial wall behavior, while the FFF model is the most robust of the investigated constitutive models. Knowledge of transversal strains in uniaxial tests improves the robustness of constitutive models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. IMPORTANCE OF KINETIC MEASURES IN TRAJECTORY PREDICTION WITH OPTIMAL CONTROL

    Directory of Open Access Journals (Sweden)

    Ömer GÜNDOĞDU

    2001-02-01

    Full Text Available A two-dimensional sagittally symmetric human-body model was established to simulate an optimal trajectory for manual material handling tasks. Nonlinear control techniques and genetic algorithms were utilized in the optimizations to explore optimal lifting patterns. The simulation results were then compared with the experimental data. Since the kinetic measures such as joint reactions and moments are vital parameters in injury determination, the importance of comparing kinetic measures rather than kinematical ones was emphasized.

  4. Natural Selection at Work: An Accelerated Evolutionary Computing Approach to Predictive Model Selection

    Science.gov (United States)

    Akman, Olcay; Hallam, Joshua W.

    2010-01-01

    We implement genetic algorithm based predictive model building as an alternative to the traditional stepwise regression. We then employ the Information Complexity Measure (ICOMP) as a measure of model fitness instead of the commonly used measure of R-square. Furthermore, we propose some modifications to the genetic algorithm to increase the overall efficiency. PMID:20661297

  5. Natural selection at work: an accelerated evolutionary computing approach to predictive model selection

    Directory of Open Access Journals (Sweden)

    Olcay Akman

    2010-07-01

    Full Text Available We implement genetic algorithm based predictive model building as an alternative to the traditional stepwise regression. We then employ the Information Complexity Measure (ICOMP as a measure of model fitness instead of the commonly used measure of R-square. Furthermore, we propose some modifications to the genetic algorithm to increase the overall efficiency.
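
    As a sketch of the general approach rather than the authors' implementation, the following mutation-only genetic algorithm searches over regressor subsets; AIC stands in for ICOMP (which additionally penalizes the complexity of the estimated covariance structure), and crossover is omitted for brevity.

        import numpy as np

        rng = np.random.default_rng(0)

        def aic_of_subset(X, y, mask):
            """OLS fit on the selected columns, scored by AIC (ICOMP stand-in)."""
            if mask.sum() == 0:
                return np.inf
            Xs = X[:, mask]
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            n = len(y)
            return n * np.log(rss / n) + 2 * mask.sum()

        def ga_select(X, y, pop=30, gens=50, p_mut=0.1):
            population = rng.random((pop, X.shape[1])) < 0.5       # random subsets
            for _ in range(gens):
                scores = np.array([aic_of_subset(X, y, ind) for ind in population])
                parents = population[np.argsort(scores)[: pop // 2]]   # keep fittest half
                children = parents[rng.integers(0, len(parents), pop - len(parents))].copy()
                children ^= rng.random(children.shape) < p_mut         # bit-flip mutation
                population = np.vstack([parents, children])
            scores = np.array([aic_of_subset(X, y, ind) for ind in population])
            return population[np.argmin(scores)]

        X = rng.normal(size=(200, 8))
        y = X[:, 2] - 2.0 * X[:, 5] + rng.normal(size=200)   # only features 2 and 5 matter
        print(ga_select(X, y))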

  6. Comparing National Water Model Inundation Predictions with Hydrodynamic Modeling

    Science.gov (United States)

    Egbert, R. J.; Shastry, A.; Aristizabal, F.; Luo, C.

    2017-12-01

    The National Water Model (NWM) simulates the hydrologic cycle and produces streamflow forecasts, runoff, and other variables for 2.7 million reaches along the National Hydrography Dataset for the continental United States. NWM applies Muskingum-Cunge channel routing which is based on the continuity equation. However, the momentum equation also needs to be considered to obtain better estimates of streamflow and stage in rivers, especially for applications such as flood inundation mapping. Simulation Program for River NeTworks (SPRNT) is a fully dynamic model for large scale river networks that solves the full nonlinear Saint-Venant equations for 1D flow and stage height in river channel networks with non-uniform bathymetry. For the current work, the steady-state version of the SPRNT model was leveraged. An evaluation of SPRNT's and NWM's abilities to predict inundation was conducted for the record flood of Hurricane Matthew in October 2016 along the Neuse River in North Carolina. This event was known to have been influenced by backwater effects from the Hurricane's storm surge. Retrospective NWM discharge predictions were converted to stage using synthetic rating curves. The stages from both models were utilized to produce flood inundation maps using the Height Above Nearest Drainage (HAND) method, which uses local relative heights to provide a spatial representation of inundation depths. In order to validate the inundation produced by the models, Sentinel-1A synthetic aperture radar data in the VV and VH polarizations along with auxiliary data were used to produce a reference inundation map. A preliminary, binary comparison of the inundation maps to the reference, limited to the five HUC-12 areas of Goldsboro, NC, yielded flood inundation accuracies of 74.68% for NWM and 78.37% for SPRNT. The differences for all the relevant test statistics including accuracy, true positive rate, true negative rate, and positive predictive value were found
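
    The accuracy figures quoted come from a pixelwise binary comparison of each modeled inundation map against the SAR-derived reference. A minimal sketch of such a comparison (the study's masking and HUC-12 clipping are not reproduced):

        import numpy as np

        def binary_map_stats(pred, ref):
            """Contingency statistics for two binary inundation rasters."""
            pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
            tp = np.sum(pred & ref)      # wet in both
            tn = np.sum(~pred & ~ref)    # dry in both
            fp = np.sum(pred & ~ref)     # predicted wet, observed dry
            fn = np.sum(~pred & ref)     # predicted dry, observed wet
            return {
                "accuracy": (tp + tn) / pred.size,
                "true_positive_rate": tp / (tp + fn),
                "true_negative_rate": tn / (tn + fp),
                "positive_predictive_value": tp / (tp + fp),
            }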

  7. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
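
    As an illustration of the probabilistic step described above (not the project's transport-code machinery), the sketch below applies Bayes' theorem over a discrete set of candidate source configurations, assuming independent Gaussian measurement noise; the predicted detector responses stand in for forward transport-model output.

        import numpy as np

        def posterior_over_sources(measured, predicted, prior=None, sigma=1.0):
            """Rate candidate holdup configurations by plausibility (Bayes' theorem).

            predicted : (n_candidates, n_detectors) forward-model dose rates
            measured  : (n_detectors,) observed dose rates, Gaussian noise assumed
            """
            predicted = np.asarray(predicted, float)
            n = len(predicted)
            prior = np.full(n, 1.0 / n) if prior is None else np.asarray(prior, float)
            resid = predicted - np.asarray(measured, float)
            log_like = -0.5 * np.sum(resid ** 2, axis=1) / sigma ** 2
            log_post = np.log(prior) + log_like
            log_post -= log_post.max()            # stabilize before exponentiating
            post = np.exp(log_post)
            return post / post.sum()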

  8. Comparison of predictions of microsegregation in the Ni-Cr-Mo system to experimental measurements

    International Nuclear Information System (INIS)

    Cefalu, Shawn A.; Krane, Matthew J.M.

    2007-01-01

    Experimental measurements of microsegregation in six Ni-Cr-Mo alloys are compared to closed system simulations. The calculations accurately predict the segregation of molybdenum, but underpredict the primary solid fraction and tend to overpredict chromium segregation at late stages of solidification. Analysis of the data suggests that the difference between the calculations and the measurements may be caused by a delay in the nucleation of the secondary phases. Upon adjusting for this delay and using the experimentally determined partition coefficients, the model better predicts the phase fractions and segregation in the alloys

  9. A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models

    Science.gov (United States)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha

    2018-01-01

    It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.

  10. A comparative study of electromagnetic compatibility (EMC) analytical predictions and measurements

    Science.gov (United States)

    Clough, I. D.; Boud, W. E.

    1981-03-01

    Predictions by the specification and EMC analysis program (SEMCAP), used in the design of communication satellites to analyze and control the EMC of electronic subsystems and wiring, are compared with OTS, MAROTS and Meteosat data. The SEMCAP values are also checked against measurements on an experimental model of cable-coupled interference. A simple system handbook is provided. For a configuration of generators, receptors and wires, SEMCAP agrees reasonably well with measurements. The bundle shielding effect (of wires in the bundle other than those constituting the hard-wire connections studied) introduces discrepancies. If this effect is allowed for in modelling, agreement with measurements is good.

  11. Markov Decision Process Measurement Model.

    Science.gov (United States)

    LaMar, Michelle M

    2018-03-01

    Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
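
    The decision layer underneath such a measurement model is an ordinary Markov decision process. A minimal value-iteration sketch of that layer is shown below; the psychometric step, in which latent traits are estimated from observed action sequences, is not shown.

        import numpy as np

        def value_iteration(P, R, gamma=0.95, tol=1e-8):
            """P: (A, S, S) transition tensor; R: (A, S) expected rewards."""
            V = np.zeros(P.shape[1])
            while True:
                Q = R + gamma * (P @ V)              # (A, S) one-step lookahead values
                V_new = Q.max(axis=0)
                if np.max(np.abs(V_new - V)) < tol:
                    return V_new, Q.argmax(axis=0)   # state values, greedy policy
                V = V_new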

  12. Measurement of the Red Blood Cell Distribution Width Improves the Risk Prediction in Cardiac Resynchronization Therapy.

    Science.gov (United States)

    Boros, András Mihály; Perge, Péter; Jenei, Zsigmond; Karády, Júlia; Zima, Endre; Molnár, Levente; Becker, Dávid; Gellér, László; Prohászka, Zoltán; Merkely, Béla; Széplaki, Gábor

    2016-01-01

    Increases in red blood cell distribution width (RDW) and NT-proBNP (N-terminal pro-B-type natriuretic peptide) predict the mortality of chronic heart failure patients undergoing cardiac resynchronization therapy (CRT). It was hypothesized that RDW is independent of, and possibly even superior to, NT-proBNP with respect to long-term mortality prediction. The blood counts and serum NT-proBNP levels of 134 patients undergoing CRT were measured. Multivariable Cox regression models were applied and reclassification analyses were performed. After separate adjustment to the basic model of left bundle branch block, beta blocker therapy, and serum creatinine, both RDW > 13.35% and NT-proBNP > 1975 pg/mL predicted the 5-year mortality (n = 57). In the final model including all variables, the RDW [HR = 2.49 (1.27-4.86); p = 0.008] remained a significant predictor, whereas the NT-proBNP [HR = 1.18 (0.93-3.51); p = 0.07] lost its predictive value. On addition of the RDW measurement, a 64% net reclassification improvement and a 3% integrated discrimination improvement were achieved over the NT-proBNP-adjusted basic model. Increased RDW levels accurately predict the long-term mortality of CRT patients independently of NT-proBNP. Reclassification analysis revealed that the RDW improves the risk stratification and could enhance optimal patient selection for CRT.

  13. Developmental prediction model for early alcohol initiation in Dutch adolescents

    NARCIS (Netherlands)

    Geels, L.M.; Vink, J.M.; Beijsterveldt, C.E.M. van; Bartels, M.; Boomsma, D.I.

    2013-01-01

    Objective: Multiple factors predict early alcohol initiation in teenagers. Among these are genetic risk factors, childhood behavioral problems, life events, lifestyle, and family environment. We constructed a developmental prediction model for alcohol initiation below the Dutch legal drinking age

  14. Predicting Biological Information Flow in a Model Oxygen Minimum Zone

    Science.gov (United States)

    Louca, S.; Hawley, A. K.; Katsev, S.; Beltran, M. T.; Bhatia, M. P.; Michiels, C.; Capelle, D.; Lavik, G.; Doebeli, M.; Crowe, S.; Hallam, S. J.

    2016-02-01

    Microbial activity drives marine biochemical fluxes and nutrient cycling at global scales. Geochemical measurements as well as molecular techniques such as metagenomics, metatranscriptomics and metaproteomics provide great insight into microbial activity. However, an integration of molecular and geochemical data into mechanistic biogeochemical models is still lacking. Recent work suggests that microbial metabolic pathways are, at the ecosystem level, strongly shaped by stoichiometric and energetic constraints. Hence, models rooted in fluxes of matter and energy may yield a holistic understanding of biogeochemistry. Furthermore, such pathway-centric models would allow a direct consolidation with meta'omic data. Here we present a pathway-centric biogeochemical model for the seasonal oxygen minimum zone in Saanich Inlet, a fjord off the coast of Vancouver Island. The model considers key dissimilatory nitrogen and sulfur fluxes, as well as the population dynamics of the genes that mediate them. By assuming a direct translation of biocatalyzed energy fluxes to biosynthesis rates, we make predictions about the distribution and activity of the corresponding genes. A comparison of the model to molecular measurements indicates that the model explains observed DNA, RNA, protein and cell depth profiles. This suggests that microbial activity in marine ecosystems such as oxygen minimum zones is well described by DNA abundance, which, in conjunction with geochemical constraints, determines pathway expression and process rates. Our work further demonstrates how meta'omic data can be mechanistically linked to environmental redox conditions and biogeochemical processes.

  15. Validation of theoretical models through measured pavement response

    DEFF Research Database (Denmark)

    Ullidtz, Per

    1999-01-01

    mechanics was quite different from the measured stress, the peak theoretical value being only half of the measured value. On an instrumented pavement structure in the Danish Road Testing Machine, deflections were measured at the surface of the pavement under FWD loading. Different analytical models were then used to derive the elastic parameters of the pavement layers that would produce deflections matching the measured deflections. Stresses and strains were then calculated at the positions of the gauges and compared to the measured values. It was found that all analytical models would predict the tensile

  16. Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms

    Science.gov (United States)

    Mirkovic, Djordje; Stepanian, Phillip M.; Kelly, Jeffrey F.; Chilson, Phillip B.

    2016-10-01

    The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification.

  17. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
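
    The simple linear benchmark mentioned, regressing Kiremt rainfall on the models' predicted Niño3.4 index, takes only a few lines; the hindcast values below are hypothetical.

        import numpy as np

        def nino_benchmark(nino34, rain):
            """Least-squares line: predicted Nino3.4 index -> Kiremt rainfall."""
            A = np.column_stack([nino34, np.ones_like(nino34)])
            slope, intercept = np.linalg.lstsq(A, rain, rcond=None)[0]
            return lambda x: slope * np.asarray(x, float) + intercept

        predict = nino_benchmark(np.array([-0.5, 1.2, 0.3, -1.1]),        # hypothetical indices
                                 np.array([420.0, 310.0, 365.0, 450.0]))  # rainfall, mm
        print(predict(0.8))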

  18. Delayed hydride cracking: theoretical model testing to predict cracking velocity

    International Nuclear Information System (INIS)

    Mieza, Juan I.; Vigna, Gustavo L.; Domizzi, Gladys

    2009-01-01

    Pressure tubes from CANDU nuclear reactors, like any other component manufactured from Zr alloys, are prone to delayed hydride cracking (DHC). It is therefore important to be able to predict the cracking velocity over the component lifetime from easily measured parameters such as hydrogen concentration and mechanical and microstructural properties. Two of the theoretical models reported in the literature for calculating the DHC velocity were chosen and combined; using the appropriate variables, this allowed a comparison with experimental results for samples from Zr-2.5 Nb tubes with different mechanical and structural properties. In addition, velocities measured by other authors in irradiated materials could be reproduced using the model described above. (author)

  19. MODELLING OF DYNAMIC SPEED LIMITS USING THE MODEL PREDICTIVE CONTROL

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    Full Text Available The article considers traffic management using the intelligent system “Car-Road” (IVHS), which consists of interacting intelligent vehicles (IV) and intelligent roadside controllers. Vehicles are organized in convoys with small distances between them, and all vehicles are assumed to be fully automated (throttle control, braking, steering). Approaches are proposed for determining speed limits for cars on the motorway using model predictive control (MPC). The article proposes an approach to dynamic speed limits that minimizes the downtime of vehicles in traffic.

  20. Predictive Models for Photovoltaic Electricity Production in Hot Weather Conditions

    Directory of Open Access Journals (Sweden)

    Jabar H. Yousif

    2017-07-01

    Full Text Available The process of finding a correct forecast equation for photovoltaic electricity production from renewable sources is an important matter, since knowing the factors affecting the increase in the proportion of renewable energy production and reducing the cost of the product has economic and scientific benefits. This paper proposes a mathematical model for forecasting energy production in photovoltaic (PV) panels based on a self-organizing feature map (SOFM) model. The proposed model is compared with other models, including the multi-layer perceptron (MLP) and support vector machine (SVM) models. Moreover, a mathematical model based on a polynomial function for fitting the desired output is proposed. Different practical measurement methods are used to validate the findings of the proposed neural and mathematical models, such as the mean square error (MSE), mean absolute error (MAE), correlation (R), and coefficient of determination (R²). The proposed SOFM model achieved a final MSE of 0.0007 in the training phase and 0.0005 in the cross-validation phase. In contrast, the SVM model resulted in a small MSE value equal to 0.0058, while the MLP model achieved a final MSE of 0.026 with a correlation coefficient of 0.9989, which indicates a strong relationship between input and output variables. The proposed SOFM model closely fits the desired results based on the R² value, which is equal to 0.9555. Finally, the comparison of MAE results for the three models shows that the SOFM model achieved the best result of 0.36156, whereas the SVM and MLP models yielded 4.53761 and 3.63927, respectively. A small MAE value indicates that the output of the SOFM model closely fits the actual results and predicts the desired output.
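
    As a sketch of the polynomial fitting model and the error measures used in the comparison, assuming a quadratic irradiance-to-power relationship and hypothetical sample values:

        import numpy as np

        # hypothetical irradiance (W/m2) -> PV power (kW) samples
        x = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
        y = np.array([0.9, 1.9, 2.7, 3.8, 4.6])

        coeffs = np.polyfit(x, y, deg=2)      # polynomial degree is a modeling choice
        y_hat = np.polyval(coeffs, x)

        mse = np.mean((y - y_hat) ** 2)
        mae = np.mean(np.abs(y - y_hat))
        r = np.corrcoef(y, y_hat)[0, 1]
        print(mse, mae, r)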

  1. Can Mandibular Condylar Mobility Sonography Measurements Predict Difficult Laryngoscopy?

    Science.gov (United States)

    Yao, Weidong; Zhou, Yumei; Wang, Bin; Yu, Tao; Shen, Zhongbing; Wu, Hao; Jin, Xiaoju; Li, Yuanhai

    2017-03-01

    Limited mandibular condylar mobility plays an important role in difficult laryngoscopy. Indirect assessment methods, such as mouth opening, have been proven to be useful predictors of difficult laryngoscopy. Sonography is a new direct assessment method for limited mandibular condylar mobility, but whether it can be used to predict difficult laryngoscopy remains unknown. This study aimed to assess its predictive ability. Adult patients who underwent tracheal intubation for elective surgery under general anesthesia were enrolled in the study. Mandibular condylar mobility was assessed by sonography through condylar translation measurements. Besides mouth opening, other indirect variables correlated with temporomandibular joint mobility, such as the mandibular protrusion distance, the upper lip bite test, and the condyle-tragus distance, were recorded; difficult laryngoscopy was defined as Cormack-Lehane level 3 or 4. A total of 484 patients were prospectively included, and difficult laryngoscopy was reported in 41 patients. The condylar translation criterion for predicting difficult laryngoscopy was ≤10 mm. Condylar translation was correlated with the Cormack-Lehane level (Spearman correlation coefficient, -0.46; 99% confidence interval [CI], -0.55 to -0.36) and had the largest area under the receiver operating characteristic curve among the predictors (0.93; 99% CI, 0.90 to 0.96; P < .001) for difficult laryngoscopy. A condylar translation ≤10 mm showed considerable agreement with difficult laryngoscopy (κ = 0.52; 99% CI, 0.37 to 0.67) and proved to be an independent predictor in a multivariate logistic regression. Compared with indirect assessments, such as mouth opening and other parameters, mandibular condylar mobility, as assessed directly using sonography, was correlated with difficult laryngoscopy and demonstrated an independent and notably predictive property.

  2. Our calibrated model has poor predictive value: An example from the petroleum industry

    International Nuclear Information System (INIS)

    Carter, J.N.; Ballester, P.J.; Tavassoli, Z.; King, P.R.

    2006-01-01

    It is often assumed that once a model has been calibrated to measurements it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability, the assumption is that the model needs to be improved in some way. Using an example from the petroleum industry, we show that cases can exist where calibrated models have limited predictive capability. This occurs even when no modelling error is present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability. We have been unable to find ways of identifying which calibrated models will have some predictive capability and which will not

  3. An operational phenological model for numerical pollen prediction

    Science.gov (United States)

    Scheifinger, Helfried

    2010-05-01

    The general prevalence of seasonal allergic rhinitis in Europe is estimated at about 15%, and it is still increasing. Pre-emptive measures require both a reliable assessment of the production and release of the various pollen species and the forecasting of their atmospheric dispersion. For this purpose, numerical pollen prediction schemes are being developed by a number of European weather services in order to supplement and improve the qualitative pollen prediction systems with state-of-the-art instruments. Pollen emission is spatially and temporally highly variable throughout the vegetation period and is not directly observed, which precludes a straightforward application of dispersion models to simulate pollen transport. Even the beginning and end of flowering, which indicate the time period of potential pollen emission, are not (yet) available in real time. One way to create a proxy for the beginning, course and end of pollen emission is to simulate it as a function of real-time temperature observations. In this work the European phenological data set of the COST725 initiative forms the basis for modelling the beginning of flowering of 15 species, some of which emit allergenic pollen. To keep the problem as simple as possible for the sake of spatial interpolation, a 3-parameter temperature-sum model was implemented in a real-time operational procedure, which calculates the spatial distribution of the entry dates for the current day and 24, 48 and 72 hours in advance. As a stand-alone phenological model, and combined with back trajectories, it is intended to support the qualitative pollen prediction scheme at the Austrian national weather service. It is also planned to incorporate it in a numerical pollen dispersion model. More details, open questions and first results of the operational phenological model will be presented and discussed.
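
    A 3-parameter temperature-sum model of the kind described predicts onset once degree-days accumulated above a base temperature, counted from a fixed starting day, reach a threshold. A minimal sketch under that reading; the three parameter values would come from fitting the COST725 observations.

        import numpy as np

        def flowering_onset(daily_mean_temp, t_base, start_doy, threshold):
            """Return the predicted day of year of flowering onset, or None."""
            temps = np.asarray(daily_mean_temp, float)        # indexed from Jan 1
            gdd = np.maximum(temps[start_doy - 1:] - t_base, 0.0).cumsum()
            if gdd[-1] < threshold:
                return None                                   # threshold never reached
            return start_doy + int(np.argmax(gdd >= threshold))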

  4. Predictability in models of the atmospheric circulation

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error

  5. Predictive power of the severity measure of attachment loss for periodontal care need.

    Science.gov (United States)

    Liu, Honghu; Marcus, Marvin; Maida, Carl A; Wang, Yan; Shen, Jie; Spolsky, Vladimir W

    2013-10-01

    The prevalence of periodontal diseases is high, and >15% of adults have severe gum disease. Clinical attachment loss (AL) is one of the most important measures for periodontal disease severity. With AL, one could measure the worst scenario, the average, or the cumulative sum of AL among all teeth. The objective of this study is to evaluate which of the 15 measures of periodontal problems (e.g., maximum, mean, and cumulative AL) best predict the need for periodontal treatment. Using detailed periodontal data obtained through clinical examination from the National Health and Nutrition Examination Survey 1999 to 2002, weighted logistic regression was used to model the periodontal treatment need of 15 different periodontal disease measures. The outcome measure is the clinically determined periodontal need. After adjustment for the covariates of age, sex, ethnicity, education, smoking status, and diabetes, the three most predictive measures were identified as: 1) the sum of the maximum mid-buccal (B) and mesio-buccal (MB) measures, which reflects the worst case of both B and MB measures; 2) the sum of the maximum MB measure or the worst case of the MB measure; and 3) the sum of all B and MB measures, or the cumulative AL measures. Cumulative periodontal morbidity, particularly the worst case of B and MB measures, has the strongest impact on the need for periodontal care. All the demographic variables and covariates follow the classic pattern of association with periodontal disease.

  6. Predictions of the most minimal see-saw model

    CERN Document Server

    Raidal, Martti

    2003-01-01

    We derive the most minimal see-saw texture from an extra-dimensional dynamics. If LMA is the solution to the solar neutrino problem, it predicts $\theta_{13} = 0.07\pm0.02$ and $m_{ee} = 2.5\pm0.7$ meV. Assuming thermal leptogenesis, the sign of the CP-phase measurable in neutrino oscillations, together with the sign of baryon asymmetry, determines the order of heavy neutrino masses. Unless heavy neutrinos are almost degenerate, successful leptogenesis fixes the lightest mass. Depending on the sign of the neutrino CP-phase, the supersymmetric version of the model with universal soft terms at high scale predicts BR($\mu\to e \gamma$) or BR($\tau\to \mu \gamma$), and gives a lower bound on the other process.

  7. Nonlinear Model Predictive Control for Oil Reservoirs Management

    DEFF Research Database (Denmark)

    Capolei, Andrea

    The controller consists of: a model-based optimizer for maximizing some predicted financial measure of the reservoir (e.g. the net present value); a parameter and state estimator; and use of the moving-horizon principle for data assimilation and implementation of the computed control input. The optimizer uses...... Optimization has been suggested to compensate for inherent geological uncertainties in an oil field. In robust optimization of an oil reservoir, the water injection and production borehole pressures are computed such that the predicted net present value of an ensemble of permeability field realizations...... equivalent strategy is not justified for the particular case studied in this paper. The third contribution of this thesis is a mean-variance method for risk mitigation in production optimization of oil reservoirs. We introduce a return-risk bicriterion objective function for the profit-risk tradeoff

  8. Inverse modeling with RZWQM2 to predict water quality

    Science.gov (United States)

    Nolan, Bernard T.; Malone, Robert W.; Ma, Liwang; Green, Christopher T.; Fienen, Michael N.; Jaynes, Dan B.

    2011-01-01

    reflect the total information provided by the observations for a parameter, indicated that most of the RZWQM2 parameters at the California study site (CA) and Iowa study site (IA) could be reliably estimated by regression. Correlations obtained in the CA case indicated that all model parameters could be uniquely estimated by inverse modeling. Although water content at field capacity was highly correlated with bulk density (−0.94), the correlation is less than the threshold for nonuniqueness (0.95, absolute value basis). Additionally, we used truncated singular value decomposition (SVD) at CA to mitigate potential problems with highly correlated and insensitive parameters. Singular value decomposition estimates linear combinations (eigenvectors) of the original process-model parameters. Parameter confidence intervals (CIs) at CA indicated that parameters were reliably estimated with the possible exception of an organic pool transfer coefficient (R45), which had a comparatively wide CI. However, the 95% confidence interval for R45 (0.03–0.35) is mostly within the range of values reported for this parameter. Predictive analysis at CA generated confidence intervals that were compared with independently measured annual water flux (groundwater recharge) and median nitrate concentration in a collocated monitoring well as part of model evaluation. Both the observed recharge (42.3 cm yr−1) and nitrate concentration (24.3 mg L−1) were within their respective 90% confidence intervals, indicating that overall model error was within acceptable limits.

  9. Comparison of mixed layer models predictions with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Faggian, P.; Riva, G.M. [CISE Spa, Divisione Ambiente, Segrate (Italy); Brusasca, G. [ENEL Spa, CRAM, Milano (Italy)

    1997-10-01

    The temporal evolution of the PBL vertical structure at a rural site in Northern Italy, situated within relatively large agricultural fields on almost flat terrain, has been investigated during the period 22-28 June 1993 from both an experimental and a modelling point of view. In particular, results for a sunny day (June 22) and a cloudy day (June 25) are presented in this paper. Three schemes for estimating the mixing-layer depth have been compared, i.e. the Holzworth (1967), Carson (1973) and Gryning-Batchvarova (1990) models, which use standard meteorological observations. To estimate their degree of accuracy, model outputs were analyzed against radio-sounding meteorological profiles and stability classification criteria. In addition, the predicted mixed-layer depths were compared with estimates obtained from a simple box model, whose input requires hourly measurements of air concentrations and the ground flux of ²²²Rn. (LN)

  10. Reverse engineering systems models of regulation: discovery, prediction and mechanisms.

    Science.gov (United States)

    Ashworth, Justin; Wurtmann, Elisabeth J; Baliga, Nitin S

    2012-08-01

    Biological systems can now be understood in comprehensive and quantitative detail using systems biology approaches. Putative genome-scale models can be built rapidly based upon biological inventories and strategic system-wide molecular measurements. Current models combine statistical associations, causative abstractions, and known molecular mechanisms to explain and predict quantitative and complex phenotypes. This top-down 'reverse engineering' approach generates useful organism-scale models despite noise and incompleteness in data and knowledge. Here we review and discuss the reverse engineering of biological systems using top-down data-driven approaches, in order to improve discovery, hypothesis generation, and the inference of biological properties. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. The Measurement and Prediction of Combustible Properties of Dimethylacetamide (DMAc)

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Dong-Myeong [Semyung University, Jecheon (Korea, Republic of)

    2015-10-15

    The use of correct combustion characteristics for the substances handled is critical for process safety. For the safe handling of dimethylacetamide (DMAc), which is used in various ways in the chemical industry, the flash point and the autoignition temperature (AIT) of DMAc were measured. The lower explosion limit of DMAc was then calculated using the lower flash point obtained in the experiment. The flash points of DMAc measured with the Setaflash and Pensky-Martens closed-cup testers were 61 °C and 65 °C, respectively. The flash points of DMAc measured with the Tag and Cleveland automatic open-cup testers were 68 °C and 71 °C. The AIT of DMAc measured with the ASTM E659 tester was 347 °C. The lower explosion limit calculated from the measured flash point of 61 °C was 1.52 vol%. It was thus possible to predict the lower explosion limit using the experimental flash point or a flash point from the literature.
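
    A common way to predict the lower explosion limit from the flash point is to take the saturated vapour concentration at that temperature, LEL(vol%) ≈ 100 · P_sat(T_fp)/P_atm, with the vapour pressure from an Antoine-type correlation; the abstract does not state the exact method used. A sketch under that assumption; the Antoine coefficients below are placeholders, not vetted DMAc values.

        def lel_from_flash_point(t_flash_c, a, b, c, p_atm_mmhg=760.0):
            """LEL (vol%) from the Antoine equation log10(P[mmHg]) = A - B/(T[C] + C)."""
            p_sat = 10.0 ** (a - b / (t_flash_c + c))
            return 100.0 * p_sat / p_atm_mmhg

        # placeholder Antoine coefficients, for illustration only
        print(lel_from_flash_point(61.0, a=7.0, b=1600.0, c=230.0))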

  12. Soundscape descriptors and a conceptual framework for developing predictive soundscape models

    OpenAIRE

    Aletta, F.; Kang, J.; Axelsson, O.

    2016-01-01

    Soundscape exists through human perception of the acoustic environment. This paper investigates how soundscape currently is assessed and measured. It reviews and analyzes the main soundscape descriptors in the soundscape literature, and provides a conceptual framework for developing predictive models in soundscape studies. A predictive soundscape model provides a means of predicting the value of a soundscape descriptor, and the blueprint for how to design soundscape. It is the key for impleme...

  13. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  14. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  15. Integrating a human thermoregulatory model with a clothing model to predict core and skin temperatures.

    Science.gov (United States)

    Yang, Jie; Weng, Wenguo; Wang, Faming; Song, Guowen

    2017-05-01

    This paper aims to integrate a human thermoregulatory model with a clothing model to predict core and skin temperatures. The human thermoregulatory model, consisting of an active system and a passive system, was used to determine the thermoregulation and heat exchanges within the body. The clothing model simulated heat and moisture transfer from the human skin to the environment through the microenvironment and fabric. In this clothing model, the air gap between skin and clothing, as well as clothing properties such as thickness, thermal conductivity, density, porosity, and tortuosity were taken into consideration. The simulated core and mean skin temperatures were compared to the published experimental results of subject tests at three levels of ambient temperatures of 20 °C, 30 °C, and 40 °C. Although lower signal-to-noise-ratio was observed, the developed model demonstrated positive performance at predicting core temperatures with a maximum difference between the simulations and measurements of no more than 0.43 °C. Generally, the current model predicted the mean skin temperatures with reasonable accuracy. It could be applied to predict human physiological responses and assess thermal comfort and heat stress. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Chemical Thermodynamics of Aqueous Atmospheric Aerosols: Modeling and Microfluidic Measurements

    Science.gov (United States)

    Nandy, L.; Dutcher, C. S.

    2017-12-01

    Accurate prediction of the gas-liquid-solid equilibrium phase partitioning of atmospheric aerosols by thermodynamic modeling and measurement is critical for determining particle composition and internal structure at conditions relevant to the atmosphere. Organic acids originating from biomass burning and direct biogenic emission make up a significant fraction of the organic mass in atmospheric aerosol particles. In addition, inorganic compounds such as ammonium sulfate and sea salt are also present in atmospheric aerosols, resulting in mixtures of singly, doubly, or triply charged ions together with non-dissociated and partially dissociated organic acids. Statistical mechanics based on a multilayer adsorption isotherm model can be applied to these complex aqueous environments to predict thermodynamic properties. In this work, analytic predictive thermodynamic models are developed for multicomponent aqueous solutions (consisting of partially dissociating organic and inorganic acids, fully dissociating symmetric and asymmetric electrolytes, and neutral organic compounds) over the entire relative humidity range, representing a significant advancement towards a fully predictive model. The model is also developed at varied temperatures for those electrolytes and organic compounds for which data are available at different temperatures. In addition to the modeling approach, the water loss of multicomponent aerosol particles is measured in microfluidic experiments to parameterize and validate the model. In these measurements, chemical mimics of atmospheric aerosol droplets (organic acids and secondary organic aerosol (SOA) samples) are generated in microfluidic channels, then stored and imaged in passive traps until dehydration, to study the influence of relative humidity and water loss on phase behavior.

  17. Measurement and modeling of advanced coal conversion processes

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, P.R.; Serio, M.A.; Hamblen, D.G.; Smoot, L.D.; Brewster, B.S. (Advanced Fuel Research, Inc., East Hartford, CT (United States) Brigham Young Univ., Provo, UT (United States))

    1991-01-01

    The overall objective of this program is the development of predictive capability for the design, scale-up, simulation, control and feedstock evaluation of advanced coal conversion devices. The program will merge significant advances made in measuring and quantitatively describing the mechanisms of coal conversion behavior into comprehensive computer codes for the mechanistic modeling of entrained-bed gasification. Additional capabilities for predicting pollutant formation will be implemented, and the technology will be extended to fixed-bed reactors.

  18. Qualitative and quantitative guidelines for the comparison of environmental model predictions

    International Nuclear Information System (INIS)

    Scott, M.

    1995-03-01

    The question of how to assess or compare predictions from a number of models is one of concern in the validation of models, in understanding the effects of different models and model parameterizations on model output, and ultimately in assessing model reliability. Comparison of model predictions with observed data is the basic tool of model validation while comparison of predictions amongst different models provides one measure of model credibility. The guidance provided here is intended to provide qualitative and quantitative approaches (including graphical and statistical techniques) to such comparisons for use within the BIOMOVS II project. It is hoped that others may find it useful. It contains little technical information on the actual methods but several references are provided for the interested reader. The guidelines are illustrated on data from the VAMP CB scenario. Unfortunately, these data do not permit all of the possible approaches to be demonstrated since predicted uncertainties were not provided. The questions considered are concerned with a) intercomparison of model predictions and b) comparison of model predictions with the observed data. A series of examples illustrating some of the different types of data structure and some possible analyses have been constructed. A bibliography of references on model validation is provided. It is important to note that the results of the various techniques discussed here, whether qualitative or quantitative, should not be considered in isolation. Overall model performance must also include an evaluation of model structure and formulation, i.e. conceptual model uncertainties, and results for performance measures must be interpreted in this context. Consider a number of models which are used to provide predictions of a number of quantities at a number of time points. In the case of the VAMP CB scenario, the results include predictions of total deposition of Cs-137 and time dependent concentrations in various

  19. A measurement-based method for predicting margins and uncertainties for unprotected accidents in the Integral Fast Reactor concept

    International Nuclear Information System (INIS)

    Vilim, R.B.

    1990-01-01

    A measurement-based method for predicting the response of an LMR core to unprotected accidents has been developed. The method processes plant measurements taken at normal operation to generate a stochastic model for the core dynamics. This model can be used to predict three sigma confidence intervals for the core temperature and power response. Preliminary numerical simulations performed for EBR-2 appear promising. 6 refs., 2 figs

  20. Levels of naturally occurring gamma radiation measured in British homes and their prediction in particular residences

    Energy Technology Data Exchange (ETDEWEB)

    Kendall, G.M. [University of Oxford, Cancer Epidemiology Unit, Oxford (United Kingdom); Wakeford, R. [University of Manchester, Centre for Occupational and Environmental Health, Institute of Population Health, Manchester (United Kingdom); Athanson, M. [University of Oxford, Bodleian Library, Oxford (United Kingdom); Vincent, T.J. [University of Oxford, Childhood Cancer Research Group, Oxford (United Kingdom); Carter, E.J. [University of Worcester, Earth Heritage Trust, Geological Records Centre, Henwick Grove, Worcester (United Kingdom); McColl, N.P. [Public Health England, Centre for Radiation, Chemical and Environmental Hazards, Chilton, Didcot, Oxon (United Kingdom); Little, M.P. [National Cancer Institute, DHHS, NIH, Radiation Epidemiology Branch, Division of Cancer Epidemiology and Genetics, Bethesda, MD (United States)

    2016-03-15

    Gamma radiation from natural sources (including directly ionising cosmic rays) is an important component of background radiation. In the present paper, indoor measurements of naturally occurring gamma rays that were undertaken as part of the UK Childhood Cancer Study are summarised, and it is shown that these are broadly compatible with an earlier UK National Survey. The distribution of indoor gamma-ray dose rates in Great Britain is approximately normal with mean 96 nGy/h and standard deviation 23 nGy/h. Directly ionising cosmic rays contribute about one-third of the total. The expanded dataset allows a more detailed description than previously of indoor gamma-ray exposures and in particular their geographical variation. Various strategies for predicting indoor natural background gamma-ray dose rates were explored. In the first of these, a geostatistical model was fitted, which assumes an underlying geologically determined spatial variation, superimposed on which is a Gaussian stochastic process with Matérn correlation structure that models the observed tendency of dose rates in neighbouring houses to correlate. In the second approach, a number of dose-rate interpolation measures were first derived, based on averages over geologically or administratively defined areas or using distance-weighted averages of measurements at nearest-neighbour points. Linear regression was then used to derive an optimal linear combination of these interpolation measures. The predictive performances of the two models were compared via cross-validation, using a randomly selected 70 % of the data to fit the models and the remaining 30 % to test them. The mean square error (MSE) of the linear-regression model was lower than that of the Gaussian-Matérn model (MSE 378 and 411, respectively). The predictive performance of the two candidate models was also evaluated via simulation; the OLS model performs significantly better than the Gaussian-Matérn model. (orig.)
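
    The cross-validation protocol described, fitting each candidate on a random 70 % of the measurements and scoring MSE on the held-out 30 %, can be sketched generically; ordinary least squares stands in for the two dose-rate models, whose fitting is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(42)

        def holdout_mse(fit, predict, X, y, train_frac=0.7):
            """Random train/test split and mean square error on the test part."""
            idx = rng.permutation(len(y))
            cut = int(train_frac * len(y))
            train, test = idx[:cut], idx[cut:]
            model = fit(X[train], y[train])
            resid = y[test] - predict(model, X[test])
            return np.mean(resid ** 2)

        # stand-in model: ordinary least squares on interpolation covariates
        ols_fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
        ols_predict = lambda beta, X: X @ beta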

  1. Predictive Measurement of the Structure of Land Use in an Urban Agglomeration Space

    Directory of Open Access Journals (Sweden)

    Fei Liu

    2017-12-01

    Full Text Available The scientific measurement of land use in space is an essential task in urban agglomeration studies, and the fractal feature is one of the most powerful tools for describing spatial phenomena. However, previous research on the fractal features of land use has mostly been conducted in urban space and has examined each land use type separately, so the relationship between different land use types was not measured. Meanwhile, previous methods for predicting spatial land use have mostly relied on a subjective, theoretical abstraction of the evolution process, whether calibrated or not, so that complete coverage of all the mechanisms could not be guaranteed. On this basis, we treat the land use structure in urban agglomeration space as the research object and attempt to establish a fractal measure of the relationship between different land use types in the space of urban agglomeration. At the same time, we use the allometric relationship between “entirety” and “local” to establish an objective forecast model for the land use structure in urban agglomeration space based on grey prediction theory, to achieve a predictive measurement of the structure of land use. Finally, the methods were applied to the Beijing–Tianjin–Hebei urban agglomeration to analyze the evolution of the stability of the land use structure and to achieve its predictive measurement. The results of the case study show that the proposed methods can measure the relationship between different land use types and yield land use predictions that do not depend on a subjective exploration of the evolution law. Compared with measurement methods that analyze the fractal features of each land type separately, and with prediction methods that rely on subjective choices, the methods presented in this

  2. Moving Towards Dynamic Ocean Management: How Well Do Modeled Ocean Products Predict Species Distributions?

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2016-02-01

    Full Text Available Species distribution models are now widely used in conservation and management to predict suitable habitat for protected marine species. The primary sources of dynamic habitat data have been in situ and remotely sensed oceanic variables (both are considered “measured data”, but now ocean models can provide historical estimates and forecast predictions of relevant habitat variables such as temperature, salinity, and mixed layer depth. To assess the performance of modeled ocean data in species distribution models, we present a case study for cetaceans that compares models based on output from a data assimilative implementation of the Regional Ocean Modeling System (ROMS to those based on measured data. Specifically, we used seven years of cetacean line-transect survey data collected between 1991 and 2009 to develop predictive habitat-based models of cetacean density for 11 species in the California Current Ecosystem. Two different generalized additive models were compared: one built with a full suite of ROMS output and another built with a full suite of measured data. Model performance was assessed using the percentage of explained deviance, root mean squared error (RMSE, observed to predicted density ratios, and visual inspection of predicted and observed distributions. Predicted distribution patterns were similar for models using ROMS output and measured data, and showed good concordance between observed sightings and model predictions. Quantitative measures of predictive ability were also similar between model types, and RMSE values were almost identical. The overall demonstrated success of the ROMS-based models opens new opportunities for dynamic species management and biodiversity monitoring because ROMS output is available in near real time and can be forecast.

  3. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    direction (σx) had a maximum value of 375 MPa (tensile) and a minimum value of ... These results show that the residual stresses obtained by prediction from the finite element method are in fair agreement with the experimental results.

  4. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurately assessing business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful...... Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical visualization to improve our understanding of the different attained performances, effectively compiling all the conducted experiments in a meaningful way. We complete our study with an entropy-based analysis that highlights the uncertainty-handling properties provided by the GP, crucial for prediction tasks

  5. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  6. From Predictive Models to Instructional Policies

    Science.gov (United States)

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…

  7. Numerical Modelling and Measurement in a Test Secondary Settling Tank

    DEFF Research Database (Denmark)

    Dahl, C.; Larsen, Torben; Petersen, O.

    1994-01-01

    sludge. Phenomena such as free and hindered settling and the Bingham-plastic characteristics of activated sludge suspensions are included in the numerical model. Further characterisation and test-tank experiments are described. The characterisation experiments were designed to measure calibration parameters and to compare measured and calculated results. The numerical model could fairly accurately predict the measured results, and both the measured and the calculated results showed a flow-field pattern identical to the flow fields in full-scale secondary settling tanks. A specific calibration of the Bingham plastic

  8. Rolling Resistance Measurement and Model Development

    DEFF Research Database (Denmark)

    Andersen, Lasse Grinderslev; Larsen, Jesper; Fraser, Elsje Sophia

    2015-01-01

    There is an increased focus worldwide on understanding and modeling rolling resistance because reducing the rolling resistance by just a few percent will lead to substantial energy savings. This paper reviews the state of the art of rolling resistance research, focusing on measuring techniques, surface and texture modeling, contact models, tire models, and macro-modeling of rolling resistance.

  9. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    Science.gov (United States)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
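
    The distinction above is compact enough to state directly. With truth T and measurement Y, the additive model is Y = T + e, while the multiplicative model is Y = T * e, which becomes additive in log space. A minimal sketch with synthetic rain totals (all numbers hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T = rng.gamma(shape=0.5, scale=10.0, size=1000) + 0.1   # skewed "true" daily totals

    # Additive: Y = T + e, residual spread assumed constant across rain rates.
    Y_add = T + rng.normal(scale=2.0, size=T.size)

    # Multiplicative: Y = T * e, errors scale with the rain rate itself,
    # matching the large dynamic range of daily precipitation.
    Y_mul = T * rng.lognormal(mean=0.0, sigma=0.3, size=T.size)

    # In log space the multiplicative model separates cleanly into a
    # systematic part (bias) and a random part (spread):
    log_err = np.log(Y_mul) - np.log(T)
    print(log_err.mean(), log_err.std())
    ```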

  10. Measurements and prediction of inhaled air quality with personalized ventilation

    DEFF Research Database (Denmark)

    Cermak, Radim; Majer, M.; Melikov, Arsen Krikor

    2002-01-01

    This paper examines the performance of five different air terminal devices for personalized ventilation in relation to the quality of air inhaled by a breathing thermal manikin in a climate chamber. The personalized air was supplied either isothermally or non-isothermally (6 °C cooler than...... the room air) at flow rates ranging from less than 5 L/s up to 23 L/s. The air quality assessment was based on temperature measurements of the inhaled air and on the portion of the personalized air inhaled. The percentage of occupants dissatisfied with the air quality was predicted. The results suggest...... that regardless of the temperature combinations, personalized ventilation may significantly decrease the number of occupants dissatisfied with the air quality. Under non-isothermal conditions the percentage of dissatisfied may decrease up to 4 times....

  11. Spleen stiffness measurement can predict clinical complications in compensated HCV-related cirrhosis: a prospective study.

    Science.gov (United States)

    Colecchia, Antonio; Colli, Agostino; Casazza, Giovanni; Mandolesi, Daniele; Schiumerini, Ramona; Reggiani, Letizia Bacchi; Marasco, Giovanni; Taddia, Martina; Lisotti, Andrea; Mazzella, Giuseppe; Di Biase, Anna Rita; Golfieri, Rita; Pinzani, Massimo; Festi, Davide

    2014-06-01

    Hepatic venous pressure gradient (HVPG) measurement represents the best predictor of clinical decompensation (CD) in cirrhotic patients. Recent data show that measurement of spleen stiffness (SS) correlates excellently with HVPG levels. The aim of the present prospective study was to assess the predictive value of SS for CD, compared to HVPG, liver stiffness (LS), and other non-invasive tests for portal hypertension, in a cohort of patients with HCV-related compensated cirrhosis. From an initial cohort of 124 patients, 92 underwent baseline LS, SS and HVPG measurements and upper gastrointestinal endoscopy at enrolment, and were then followed up for 2 years or until the occurrence of the first CD. Univariate and multivariate logistic regression models were used to identify parameters associated with the outcome. The accuracy of predictive factors was evaluated using the c statistic. The final model was internally validated using the bootstrap method. During follow-up, 30 out of 92 (32.6%) patients developed CD. At univariate analysis, varices at enrolment, all non-invasive parameters, HVPG, and the model for end-stage liver disease (MELD) emerged as clinical predictors of CD. At multivariate analysis only SS (p=0.0001) and MELD (p=0.014) remained as predictive factors. A decision algorithm based on the results of a predictive model was proposed to detect patients with a low risk of decompensation. This study shows that in compensated cirrhotic patients a predictive model based on SS and MELD is an accurate predictor of CD, with accuracy at least equivalent to that of HVPG. If confirmed by further studies, SS and MELD could represent valid alternatives to HVPG as prognostic indicators of CD in HCV-related cirrhosis. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.

  12. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including the multivariate nonlinear regression (MNLR) model, the artificial neural network (ANN) model, and the Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of the different models to obtain better accuracy.
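
    Of the three models compared, the Markov chain is the simplest to illustrate. A minimal sketch with a hypothetical four-state condition scale and transition matrix (the paper's calibrated probabilities come from visual survey data and are not reproduced here):

    ```python
    import numpy as np

    # Condition states 0 (good) .. 3 (failed); each row must sum to 1.
    P = np.array([[0.80, 0.15, 0.05, 0.00],
                  [0.00, 0.75, 0.20, 0.05],
                  [0.00, 0.00, 0.70, 0.30],
                  [0.00, 0.00, 0.00, 1.00]])

    state = np.array([1.0, 0.0, 0.0, 0.0])   # all sections start in "good"
    for year in range(1, 11):
        state = state @ P                     # propagate one inspection cycle
        print(year, np.round(state, 3))       # predicted condition distribution
    ```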

  13. Predicting the effects of measures to reduce eutrophication in surface water in rural areas - a case study

    NARCIS (Netherlands)

    Hendriks, R.F.A.; Kolk, van der J.W.H.

    1995-01-01

    The effectiveness of measures to reduce nutrient concentrations in surface water was predicted by a combination of a nutrient leaching model for groundwater and a nutrient simulation model for surface water. Scenarios were formulated based on several measures. Different combinations of drainage

  14. Hierarchical anatomical brain networks for MCI prediction: revisiting volumetric measures.

    Directory of Open Access Journals (Sweden)

    Luping Zhou

    Full Text Available Owing to its clinical accessibility, T1-weighted MRI (Magnetic Resonance Imaging) has been extensively studied in the past decades for prediction of Alzheimer's disease (AD) and mild cognitive impairment (MCI). The volumes of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) are the most commonly used measurements, resulting in many successful applications. It has been widely observed that disease-induced structural changes may not occur at isolated spots, but in several inter-related regions. Therefore, for better characterization of brain pathology, we propose in this paper a means to extract inter-regional correlation based features from local volumetric measurements. Specifically, our approach involves constructing an anatomical brain network for each subject, with each node representing a Region of Interest (ROI) and each edge representing the Pearson correlation of tissue volumetric measurements between ROI pairs. As second-order volumetric measurements, network features are more descriptive but also more sensitive to noise. To overcome this limitation, a hierarchy of ROIs is used to suppress noise at different scales. Pairwise interactions are considered not only for ROIs with the same scale in the same layer of the hierarchy, but also for ROIs across different scales in different layers. To address the high dimensionality problem resulting from the large number of network features, a supervised dimensionality reduction method is further employed to embed a selected subset of features into a low dimensional feature space, while at the same time preserving discriminative information. We demonstrate with experimental results the efficacy of this embedding strategy in comparison with some other commonly used approaches. In addition, although the proposed method can be easily generalized to incorporate other metrics of regional similarities, the benefits of using Pearson correlation in our application are reinforced by the experimental
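
    A minimal sketch of the edge construction described above, under the simplifying assumption that each edge correlates the (GM, WM, CSF) volume triplets of two ROIs for one subject; the paper's hierarchical, multi-scale version adds considerably more structure:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    volumes = rng.normal(size=(90, 3))   # one subject: 90 ROIs x (GM, WM, CSF) volumes

    corr = np.corrcoef(volumes)          # 90 x 90 network: edge = Pearson correlation
    iu = np.triu_indices_from(corr, k=1)
    features = corr[iu]                  # upper triangle as the feature vector
    print(features.shape)                # (4005,) second-order volumetric features
    ```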

  15. A model to predict the beginning of the pollen season

    DEFF Research Database (Denmark)

    Toldam-Andersen, Torben Bo

    1991-01-01

    In order to predict the beginning of the pollen season, a model comprising the Utah phenoclimatography Chill Unit (CU) and ASYMCUR-Growing Degree Hour (GDH) submodels was used to predict the first bloom in Alnus, Ulmus and Betula. The model relates environmental temperatures to rest completion...... and bud development. As the phenologic parameter, 14 years of pollen counts were used. The observed dates for the beginning of the pollen seasons were defined from the pollen counts and compared with the model prediction. The CU and GDH submodels were used as: 1. A fixed day model, using only the GDH model...... for fruit trees are generally applicable, and give a reasonable description of the growth processes of other trees. This type of model can therefore be of value in predicting the start of the pollen season. The predicted dates were generally within 3-5 days of the observed. Finally the possibility of frost...
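
    A minimal sketch of the growing-degree-hour idea (the CU rest-completion submodel is omitted, and the base temperature and threshold are hypothetical): hourly contributions above a base temperature are accumulated, and onset is predicted when the running sum crosses a threshold.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    hourly_temp = 5 + 10 * rng.random(24 * 120)    # ~4 months of hourly temps, deg C

    BASE, THRESHOLD = 4.0, 6000.0                  # hypothetical base temp and GDH sum
    gdh = np.cumsum(np.clip(hourly_temp - BASE, 0.0, None))
    onset_hour = int(np.argmax(gdh >= THRESHOLD))  # first crossing (assumes it occurs)
    print("predicted onset after day", onset_hour // 24)
    ```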

  16. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  17. Evaluation of the US Army fallout prediction model

    International Nuclear Information System (INIS)

    Pernick, A.; Levanon, I.

    1987-01-01

    The US Army fallout prediction method was evaluated against an advanced fallout prediction model--SIMFIC (Simplified Fallout Interpretive Code). The danger zone areas of the US Army method were found to be significantly greater (up to a factor of 8) than the areas of corresponding radiation hazard as predicted by SIMFIC. Nonetheless, because the US Army's method predicts danger zone lengths that are commonly shorter than the corresponding hot line distances of SIMFIC, the US Army's method is not reliably conservative

  18. Modeling the Ionosphere with GPS and Rotation Measure Observations

    Science.gov (United States)

    Malins, J. B.; Taylor, G. B.; White, S. M.; Dowell, J.

    2017-12-01

    Advances in digital processing have created new tools for looking at and examining the ionosphere. We have combined data from dual-frequency GPS receivers, digital ionosondes and observations from the Long Wavelength Array (LWA), a 256-dipole low-frequency radio telescope situated in central New Mexico, in order to examine ionospheric profiles. By observing polarized pulsars, the LWA is able to determine very accurately the Faraday rotation caused by the ionosphere. By combining these data with the International Geomagnetic Reference Field, the LWA can evaluate ionospheric profiles and how well they predict the actual Faraday rotation. Dual-frequency GPS measurements of total electron content, as well as measurements from digisonde data, were used to model the ionosphere and to predict the Faraday rotation to within 0.1 rad/m². Additionally, it was discovered that the predicted topside profile of the digisonde data did not accurately predict Faraday rotation measurements, suggesting a need to re-examine the methods for creating the predicted topside profile. I will discuss the methods used to measure rotation measure and ionosphere profiles, as well as possible corrections to the topside model.
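
    The quantity being checked here follows from a standard relation: RM = (e^3 / 8 pi^2 eps0 m_e^2 c^3) * integral of n_e B_parallel dl, where the constant is about 2.63e-13 in SI units. A minimal sketch under a thin-shell approximation (constant B_parallel; the numbers are illustrative only):

    ```python
    # rad m^-2 per (m^-2 T): e^3 / (8 pi^2 eps0 m_e^2 c^3) in SI units
    K = 2.63e-13

    tec = 10 * 1e16       # 10 TEC units, electrons per m^2 (from dual-frequency GPS)
    b_parallel = 5.0e-5   # line-of-sight geomagnetic field, tesla (e.g., from IGRF)

    rm = K * tec * b_parallel          # rotation measure, rad/m^2
    print(f"RM ~ {rm:.2f} rad/m^2")    # ~1.3 rad/m^2; pulsar RMs test this prediction
    ```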

  19. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield-weather data. The models tested were the Hanks Model (first and second versions), the Stewart Model (first and second versions) and the Hall-Butcher Model. Three sets of ...

  20. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield-weather data. The models tested were the Hanks Model (first and second versions), the Stewart Model (first and second versions) and the Hall-Butcher Model. Three sets of cowpea yield-water use and weather data were collected.

  1. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII) are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...

  2. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of

  3. Ocean wave prediction using numerical and neural network models

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    This paper presents an overview of the development of the numerical wave prediction models and recently used neural networks for ocean wave hindcasting and forecasting. The numerical wave models express the physical concepts of the phenomena...

  4. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together, our current work

  5. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together
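
    A toy sketch (not the authors' model) of the core mechanism both records describe: residual evidence for the suppressed percept accumulates as a prediction error until it outweighs the current interpretation and triggers a transition. All parameters are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    percept, error, transitions = 0, 0.0, []
    for t in range(5000):
        # The suppressed interpretation keeps generating a small prediction error.
        error = max(error + 0.01 + rng.normal(0, 0.02), 0.0)
        if error > 1.0:              # error outweighs the current prediction
            percept = 1 - percept    # switch the dominant interpretation
            error = 0.0
            transitions.append(t)

    durations = np.diff(transitions)
    print(len(transitions), durations.mean())  # stochastic dominance durations
    ```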

  6. Prediction of insulin resistance with anthropometric measures: lessons from a large adolescent population

    Directory of Open Access Journals (Sweden)

    Wedin WK

    2012-07-01

    Full Text Available William K Wedin,1 Lizmer Diaz-Gimenez,1 Antonio J Convit1,2 (1Department of Psychiatry, NYU School of Medicine, New York, NY, USA; 2Nathan Kline Institute, Orangeburg, NY, USA). Objective: The aim of this study was to describe the minimum number of anthropometric measures that will optimally predict insulin resistance (IR) and to characterize the utility of these measures among obese and nonobese adolescents. Research design and methods: Six anthropometric measures (selected from three categories: central adiposity, weight, and body composition) were measured from 1298 adolescents attending two New York City public high schools. Body composition was determined by bioelectric impedance analysis (BIA). The homeostatic model assessment of IR (HOMA-IR), based on fasting glucose and insulin concentrations, was used to estimate IR. Stepwise linear regression analyses were performed to predict HOMA-IR based on the six selected measures, while controlling for age. Results: The stepwise regression retained both waist circumference (WC) and percentage of body fat (BF%). Notably, BMI was not retained. WC was a stronger predictor of HOMA-IR than BMI was. A regression model using solely WC performed best among the obese II group, while a model using solely BF% performed best among the lean group. Receiver operating characteristic curves showed the WC and BF% model to be more sensitive in detecting IR than BMI, but with less specificity. Conclusion: WC combined with BF% was the best predictor of HOMA-IR. This finding can be attributed partly to the ability of BF% to model HOMA-IR among leaner participants and to the ability of WC to model HOMA-IR among participants who are more obese. BMI was comparatively weak in predicting IR, suggesting that assessments that are more comprehensive and include body composition analysis could increase detection of IR during adolescence, especially among those who are lean, yet insulin-resistant. Keywords: BMI, bioelectrical impedance
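
    The HOMA-IR index used above is a standard formula (fasting glucose times fasting insulin, divided by 405, in the units shown); the example values and the cutoff mentioned in the comment are illustrative, not the study's:

    ```python
    def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
        """Homeostatic model assessment of insulin resistance."""
        return (glucose_mg_dl * insulin_uU_ml) / 405.0

    # Example: ~2.67; values above roughly 2.5-3 are often flagged as IR.
    print(homa_ir(90, 12))
    ```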

  7. Laser shaft alignment measurement model

    Science.gov (United States)

    Mo, Chang-tao; Chen, Changzheng; Hou, Xiang-lin; Zhang, Guoyu

    2007-12-01

    The track of a laser beam on the photosensitive surface of a receiver will be a closed curve when the driving shaft and the driven shaft rotate with the same angular velocity and rotation direction. The coordinates of an arbitrary point on the curve are determined by the relative position of the two shafts. Based on this viewpoint, a mathematical model of laser alignment is set up. By using a data acquisition system and a data processing model for a laser alignment meter with a single laser beam and a detector, and based on the installation parameters entered into the computer, the state parameters between the two shafts can be obtained through further calculation and correction. The correction data for the four chassis supports of the adjusted apparatus, moved in the horizontal and vertical planes, can be calculated. This indicates how to move the apparatus to align the shafts.

  8. Models that predict standing crop of stream fish from habitat variables: 1950-85.

    Science.gov (United States)

    K.D. Fausch; C.L. Hawkes; M.G. Parsons

    1988-01-01

    We reviewed mathematical models that predict standing crop of stream fish (number or biomass per unit area or length of stream) from measurable habitat variables and classified them by the types of independent habitat variables found significant, by mathematical structure, and by model quality. Habitat variables were of three types and were measured on different scales...

  9. Electrochemical sensor for predicting transformer overload by phenol measurement

    Energy Technology Data Exchange (ETDEWEB)

    Bosworth, Timothy; Setford, Steven; Saini, Selwayan [Cranfield Centre for Analytical Science, Cranfield University, Silsoe, Beds MK45 4DT (United Kingdom); Heywood, Richard [National Grid Company Plc, Kelvin Avenue, Leatherhead, Surrey KT22 7ST (United Kingdom)

    2003-03-10

    Transformer overload is a significant problem to the power transmission industry, with severe safety and cost implications. Overload may be predicted by measuring phenol levels in the transformer-insulating oil, arising from the thermolytic degradation of phenol-formaldehyde resins. The development of two polyphenol oxidase (PPO) sensors, based on monitoring the enzymatic consumption of oxygen using an oxygen electrode, or reduction of enzymatically generated o-quinone at a screen-printed electrode (SPE), for the measurement of phenol in transformer oil is reported. Ex-service oils were prepared either by extraction into aqueous electrolyte-buffer, or by direct dilution in propan-2-ol, the latter method being more amenable to simple at-line operation. The oxygen electrode, with a sensitivity of 2.87 nA μg⁻¹ ml⁻¹, RSD of 7.0-19.9% and accuracy of ±8.3% versus the industry standard International Electrotechnical Commission (IEC) method, proved superior to the SPE (sensitivity: 3.02 nA μg⁻¹ ml⁻¹; RSD: 8.9-18.3%; accuracy: ±7.9%) and was considerably more accurate at low phenol concentrations. However, the SPE approach is more amenable to field-based usage for reasons of device simplicity. The method has potential as a rapid and simple screening tool for the at-site monitoring of phenol in transformer oils, thereby reducing incidences of transformer failure.

  10. Measuring psychosocial variables that predict older persons' oral health behaviour.

    Science.gov (United States)

    Kiyak, H A

    1996-12-01

    The importance of recognising psychosocial characteristics of older people that influence their oral health behaviours and the potential success of dental procedures is discussed. Three variables and instruments developed and tested by the author and colleagues are presented. A measure of perceived importance of oral health behaviours has been found to be a significant predictor of dental service utilization in three studies. Self-efficacy regarding oral health has been found to be lower than self-efficacy regarding general health and medication use among older adults, especially among non-Western ethnic minorities. The significance of self-efficacy for predicting changes in caries and periodontal disease is described. Finally, a measure of expectations regarding specific dental procedures has been used with older people undergoing implant therapy. Studies with this instrument reveal that patients have concerns about the procedure far different than those focused on by dental providers. All three instruments can be used in clinical practice as a means of understanding patients' values, perceived oral health abilities, and expectations from dental care. These instruments can enhance dentist-patient rapport and improve the chances of successful dental outcomes for older patients.

  11. Statistical model based gender prediction for targeted NGS clinical panels

    Directory of Open Access Journals (Sweden)

    Palani Kannan Kandavel

    2017-12-01

    A reference test dataset was used to test the model. Sensitivity in predicting gender was increased relative to the current approach based on genotype composition in ChrX. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score towards the respective gender indicates higher quality of the sequenced data.

  12. comparative analysis of two mathematical models for prediction

    African Journals Online (AJOL)

    Mathematical modeling for prediction of the compressive strength of sandcrete blocks was performed using statistical analysis of sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of ...

  13. Comparison of predictive models for the early diagnosis of diabetes

    NARCIS (Netherlands)

    M. Jahani (Meysam); M. Mahdavi (Mahdi)

    2016-01-01

    Objectives: This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. Methods: We used memetic algorithms to update weights and to improve

  14. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters (1992–2012). There are six ...

  15. Bayesian variable order Markov models: Towards Bayesian predictive state representations

    NARCIS (Netherlands)

    Dimitrakakis, C.

    2009-01-01

    We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more

  16. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-model approach opens up possibilities for handling such difficulties and allows improving the predictive capability of

  17. Refining the committee approach and uncertainty prediction in hydrological modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-model approach opens up possibilities for handling such difficulties and allows improving the predictive capability of

  18. Wind turbine control and model predictive control for uncertain systems

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz

    as disturbance models for controller design. The theoretical study deals with Model Predictive Control (MPC). MPC is an optimal control method which is characterized by the use of a receding prediction horizon. MPC has risen in popularity due to its inherent ability to systematically account for time...

  19. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters (1992–2012). There are six ...

  20. Model predictive control of a 3-DOF helicopter system using ...

    African Journals Online (AJOL)

    ... by simulation, and its performance is compared with that achieved by linear model predictive control (LMPC). Keywords: nonlinear systems, helicopter dynamics, MIMO systems, model predictive control, successive linearization. International Journal of Engineering, Science and Technology, Vol. 2, No. 10, 2010, pp. 9-19 ...

  1. Comparative Analysis of Two Mathematical Models for Prediction of ...

    African Journals Online (AJOL)

    Mathematical modeling for prediction of the compressive strength of sandcrete blocks was performed using statistical analysis of sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of sandcrete ...

  2. Advancing viral RNA structure prediction: measuring the thermodynamics of pyrimidine-rich internal loops.

    Science.gov (United States)

    Phan, Andy; Mailey, Katherine; Saeki, Jessica; Gu, Xiaobo; Schroeder, Susan J

    2017-05-01

    Accurate thermodynamic parameters improve RNA structure predictions and thus accelerate understanding of RNA function and the identification of RNA drug binding sites. Many viral RNA structures, such as internal ribosome entry sites, have internal loops and bulges that are potential drug target sites. Current models used to predict internal loops are biased toward small, symmetric purine loops, and thus poorly predict asymmetric, pyrimidine-rich loops with >6 nucleotides (nt) that occur frequently in viral RNA. This article presents new thermodynamic data for 40 pyrimidine loops, many of which can form UU or protonated CC base pairs. Uracil and protonated cytosine base pairs stabilize asymmetric internal loops. Accurate prediction rules are presented that account for all thermodynamic measurements of RNA asymmetric internal loops. New loop initiation terms for loops with >6 nt are presented that do not follow previous assumptions that increasing asymmetry destabilizes loops. Since the last 2004 update, 126 new loops with asymmetry or sizes greater than 2 × 2 have been measured. These new measurements significantly deepen and diversify the thermodynamic database for RNA. These results will help better predict internal loops that are larger, pyrimidine-rich, and occur within viral structures such as internal ribosome entry sites. © 2017 Phan et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  3. Estimation of partial least squares regression prediction uncertainty when the reference values carry a sizeable measurement error

    NARCIS (Netherlands)

    Fernandez Pierna, J.A.; Lin, L.; Wahl, F.; Faber, N.M.; Massart, D.L.

    2003-01-01

    The prediction uncertainty is studied when using a multivariate partial least squares regression (PLSR) model constructed with reference values that contain a sizeable measurement error. Several approximate expressions for calculating a sample-specific standard error of prediction have been proposed

  4. A mathematical model for predicting earthquake occurrence ...

    African Journals Online (AJOL)

    We consider the continental crust under damage. We use observed microseism results from many seismic stations around the world to study the time series of continental crust activity, with a view to predicting the possible time of occurrence of an earthquake. We consider microseism time series ...

  5. Model for predicting the injury severity score.

    Science.gov (United States)

    Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi

    2015-07-01

    To determine a formula that predicts the injury severity score from parameters obtained in the emergency department on arrival. We reviewed the medical records of trauma patients who were transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products were examined in those patients on arrival. To determine the formula that predicts the injury severity score, multiple linear regression analysis was carried out. The injury severity score was set as the dependent variable, and the other parameters were set as candidate objective variables. IBM SPSS Statistics 20 was used for the statistical analysis. Statistical significance was set at P < 0.05, and the Durbin–Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was developed with ordinary parameters such as fibrin degradation products and mean blood pressure. This formula is useful because we can predict the injury severity score easily in the emergency department.
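
    A minimal sketch of the modelling step described above: ordinary multiple linear regression of the injury severity score on admission parameters. The two predictors echo the abstract's conclusion, but all coefficients and data are hypothetical; the study selected variables from a larger candidate set:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(4)
    fdp = rng.gamma(2.0, 20.0, size=120)    # fibrin degradation products, ug/mL
    mbp = rng.normal(85, 15, size=120)      # mean blood pressure, mmHg
    iss = 5 + 0.15 * fdp - 0.10 * (mbp - 85) + rng.normal(0, 4, size=120)

    X = np.column_stack([fdp, mbp])
    model = LinearRegression().fit(X, iss)
    print(model.intercept_, model.coef_)    # the fitted prediction formula
    print(model.predict([[60.0, 70.0]]))    # predicted ISS for a new patient
    ```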

  6. Hybrid Wavelet-Postfix-GP Model for Rainfall Prediction of Anand Region of India

    Directory of Open Access Journals (Sweden)

    Vipul K. Dabhi

    2014-01-01

    Full Text Available An accurate prediction of rainfall is crucial for the national economy and management of water resources. The variability of rainfall in both time and space makes rainfall prediction a challenging task. The present work investigates the applicability of a hybrid wavelet-postfix-GP model for daily rainfall prediction of the Anand region using meteorological variables. The wavelet analysis is used as a data preprocessing technique to remove the stochastic (noise) component from the original time series of each meteorological variable. The Postfix-GP, a GP variant, and ANN are then employed to develop models for rainfall using newly generated subseries of meteorological variables. The developed models are then used for rainfall prediction. The out-of-sample prediction performance of the Postfix-GP and ANN models is compared using statistical measures. The results are comparable and suggest that Postfix-GP could be explored as an alternative tool for rainfall prediction.
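
    A minimal sketch of the wavelet preprocessing step using PyWavelets: decompose a hypothetical daily rainfall series, soft-threshold the detail coefficients to strip the stochastic component, and reconstruct the denoised series that a downstream model (Postfix-GP or ANN in the paper) would be trained on. The wavelet family, level, and threshold rule are assumptions:

    ```python
    import numpy as np
    import pywt  # PyWavelets

    rng = np.random.default_rng(5)
    t = np.arange(365)
    rain = np.clip(5 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, t.size), 0, None)

    coeffs = pywt.wavedec(rain, "db4", level=3)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale from finest detail
    thr = sigma * np.sqrt(2 * np.log(rain.size))       # universal threshold
    denoised = pywt.waverec(
        [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]],
        "db4")[: rain.size]
    print(denoised[:5])
    ```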

  7. Measurement error models, methods, and applications

    CERN Document Server

    Buonaccorsi, John P

    2010-01-01

    Over the last 20 years, comprehensive strategies for treating measurement error in complex models and accounting for the use of extra data to estimate measurement error parameters have emerged. Focusing on both established and novel approaches, ""Measurement Error: Models, Methods, and Applications"" provides an overview of the main techniques and illustrates their application in various models. It describes the impacts of measurement errors on naive analyses that ignore them and presents ways to correct for them across a variety of statistical models, from simple one-sample problems to regres

  8. Wind Speed Prediction Using a Univariate ARIMA Model and a Multivariate NARX Model

    Directory of Open Access Journals (Sweden)

    Erasmo Cadenas

    2016-02-01

    Full Text Available Two one-step-ahead wind speed forecasting models were compared. A univariate model was developed using a linear autoregressive integrated moving average (ARIMA). This method's performance is well studied for a large number of prediction problems. The other is a multivariate model developed using a nonlinear autoregressive exogenous artificial neural network (NARX). This uses the variables barometric pressure, air temperature, wind direction and solar radiation or relative humidity, as well as delayed wind speed. Both models were developed from two databases from two sites: an hourly average measurements database from La Mata, Oaxaca, Mexico, and a ten-minute average measurements database from Metepec, Hidalgo, Mexico. The main objective was to compare the impact of the various meteorological variables on the performance of the multivariate model of wind speed prediction with respect to the high-performance univariate linear model. The NARX model gave better results, with improvements on the ARIMA model of between 5.5% and 10.6% for the hourly database and of between 2.3% and 12.8% for the ten-minute database, for mean absolute error and mean squared error, respectively.
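
    A minimal sketch of the univariate benchmark: a linear ARIMA model producing a one-step-ahead forecast with statsmodels. The order and the synthetic series are hypothetical; the paper's model orders are not reproduced here:

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(6)
    wind = 8 + 0.1 * np.cumsum(rng.normal(0, 0.3, 500))   # synthetic hourly speeds

    train, test = wind[:480], wind[480:]
    fit = ARIMA(train, order=(2, 1, 1)).fit()
    forecast = fit.forecast(steps=1)                      # one step ahead
    print(forecast, test[0])
    ```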

  9. CT Measured Psoas Density Predicts Outcomes After Enterocutaneous Fistula Repair

    Science.gov (United States)

    Lo, Wilson D.; Evans, David C.; Yoo, Taehwan

    2018-01-01

    Background: Low muscle mass and quality are associated with poor surgical outcomes. We evaluated CT-measured psoas muscle density as a marker of muscle quality and physiologic reserve, and hypothesized that it would predict outcomes after enterocutaneous fistula (ECF) repair. Methods: We conducted a retrospective cohort study of patients 18-90 years old with ECF failing non-operative management and requiring elective operative repair at Ohio State University from 2005-2016, who received a pre-operative abdomen/pelvis CT with intravenous contrast within 3 months of their operation. The psoas Hounsfield unit average calculation (HUAC) was measured at the L3 level. One-year leak rate; 90-day, 1-year, and 3-year mortality; complication risk; length of stay; dependent discharge; and 30-day readmission were compared to HUAC. Results: 100 patients met inclusion criteria. Patients were stratified into interquartile (IQR) ranges based on HUAC. The lowest HUAC quartile was our low muscle quality (LMQ) cutoff and was associated with 1-year leak (OR 3.50, p < 0.01), 1-year (OR 2.95, p < 0.04) and 3-year mortality (OR 3.76, p < 0.01), complication risk (OR 14.61, p < 0.01), and dependent discharge (OR 4.07, p < 0.01) compared to non-LMQ patients. Conclusions: Psoas muscle density is a significant predictor of poor outcomes in ECF repair. This readily available measure of physiologic reserve can identify patients with ECF on pre-operative evaluation who have significantly increased risk and may benefit from additional interventions and recovery time to mitigate risk before operative repair. PMID:29505144

  10. Predicting biological system objectives de novo from internal state measurements

    Directory of Open Access Journals (Sweden)

    Maranas Costas D

    2008-01-01

    Full Text Available Abstract Background Optimization theory has been applied to complex biological systems to interrogate network properties and develop and refine metabolic engineering strategies. For example, methods are emerging to engineer cells to optimally produce byproducts of commercial value, such as bioethanol, as well as molecular compounds for disease therapy. Flux balance analysis (FBA) is an optimization framework that aids in this interrogation by generating predictions of optimal flux distributions in cellular networks. Critical features of FBA are the definition of a biologically relevant objective function (e.g., maximizing the rate of synthesis of biomass, a unit of measurement of cellular growth) and the subsequent application of linear programming (LP) to identify fluxes through a reaction network. Despite the success of FBA, a central remaining challenge is the definition of a network objective with biological meaning. Results We present a novel method called Biological Objective Solution Search (BOSS) for the inference of an objective function of a biological system from its underlying network stoichiometry as well as experimentally-measured state variables. Specifically, BOSS identifies a system objective by defining a putative stoichiometric "objective reaction," adding this reaction to the existing set of stoichiometric constraints arising from known interactions within a network, and maximizing the putative objective reaction via LP, all the while minimizing the difference between the resultant in silico flux distribution and available experimental (e.g., isotopomer) flux data. This new approach allows for discovery of objectives with previously unknown stoichiometry, thus extending the biological relevance from earlier methods. We verify our approach on the well-characterized central metabolic network of Saccharomyces cerevisiae. Conclusion We illustrate how BOSS offers insight into the functional organization of biochemical networks
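
    A minimal sketch of the FBA step that BOSS builds on: linear programming over a toy three-reaction network (not the S. cerevisiae network used in the paper), maximizing an objective reaction subject to steady-state mass balance:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Rows: internal metabolites A, B. Columns: reactions v1 (uptake), v2, v3.
    S = np.array([[ 1, -1,  0],    # A: produced by v1, consumed by v2
                  [ 0,  1, -1]])   # B: produced by v2, consumed by v3 (objective)

    res = linprog(c=[0, 0, -1],                 # linprog minimizes, so negate v3
                  A_eq=S, b_eq=[0, 0],          # steady-state mass balance S v = 0
                  bounds=[(0, 10), (0, None), (0, None)])
    print(res.x)                                # optimal flux distribution [10, 10, 10]
    ```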

  11. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  12. Models Predicting Success of Infertility Treatment: A Systematic Review

    Science.gov (United States)

    Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi

    2016-01-01

    Background: Infertile couples are faced with problems that affect their marital life. Infertility treatment is expensive and time consuming and occasionally is simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because prediction of treatment success is a new need for infertile couples, this paper reviewed previous studies to capture a general picture of the applicability of the models. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched based on WHO definitions and MeSH keywords. Papers about prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. The papers covered years after 1986, and the studies were designed both retrospectively and prospectively. IVF prediction models account for the largest share of the papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be clinically applied if it can be statistically evaluated and has good validation for treatment success. To achieve better results, physicians and couples need an estimate of the treatment success rate based on history, examination and clinical tests. Models must be checked for theoretical soundness and appropriate validation. The advantages of applying prediction models are decreases in cost and time, avoidance of painful treatments, assessment of the treatment approach for physicians, and support for decision making by health managers. The selection of the approach for designing and using these models is inevitable. PMID:27141461

  13. Measurement and prediction of global solar ultraviolet radiation (0.295-0.385 μm) under clear and cloudless skies

    International Nuclear Information System (INIS)

    Wright, Jaime

    2008-01-01

    Values of global solar ultraviolet radiation were measured with an ultraviolet radiometer and also predicted with an atmospheric spectral model. The values obtained with the physically based atmospheric spectral model were analyzed and compared with experimental values measured in situ. Measurements were performed at different zenith angles under clear-sky conditions in Heredia, Costa Rica. The necessary input data include latitude, altitude, surface albedo and Earth-Sun distance, as well as atmospheric characteristics: atmospheric turbidity, precipitable water and atmospheric ozone. The comparison between measured and predicted values was successful. (author)

  14. Towards a generalized energy prediction model for machine tools.

    Science.gov (United States)

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H; Dornfeld, David A; Helu, Moneer; Rachuri, Sudarsan

    2017-04-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process.
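
    A minimal sketch of the modelling choice named above: Gaussian Process regression with predictive uncertainty intervals, via scikit-learn. The three features stand in for process parameters such as feed rate, spindle speed, and depth of cut; the data and kernel are assumptions:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(8)
    X = rng.uniform(0, 1, size=(80, 3))                  # normalized process params
    energy = 50 + 30 * X[:, 0] + 10 * X[:, 1] ** 2 + rng.normal(0, 2, 80)

    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, energy)
    mean, std = gpr.predict(X[:3], return_std=True)
    print(mean - 1.96 * std, mean + 1.96 * std)          # ~95% prediction interval
    ```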

  15. Prediction of objectively measured physical activity and sedentariness among blue-collar workers using survey questionnaires

    OpenAIRE

    Gupta, Nidhi; Heiden, Marina; Mathiassen, Svend Erik; Holtermann, Andreas

    2016-01-01

    OBJECTIVES: We aimed at developing and evaluating statistical models predicting objectively measured occupational time spent sedentary or in physical activity from self-reported information available in large epidemiological studies and surveys.METHODS: Two-hundred-and-fourteen blue-collar workers responded to a questionnaire containing information about personal and work related variables, available in most large epidemiological studies and surveys. Workers also wore accelerometers for 1-4 d...

  16. Time dependent deformation in prestressed concrete girder: Measurement and prediction

    Science.gov (United States)

    Sokal, Y. J.; Tyrer, P.

    1981-11-01

    Prestressed concrete girders intended for composite construction in bridges and similar structures are often stored unloaded for some time before being placed in their final positions, where the top deck is poured over them. During that free storage the girders are subject to creep and shrinkage, which manifest themselves as increased upward deformation, usually called camber. The analytical estimation of this deformation is important because it controls the minimum thickness of the top deck. An attempt was made to correlate on-site measurements with continuous computer modeling of the time-dependent behavior, using data from a recently adopted international standard for concrete structures.

  17. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian information criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks component-wise using a Poisson mixture regression model. PMID:27999611
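
    A minimal sketch of a two-component Poisson mixture of the general kind discussed (without covariates; the paper's concomitant-variable models also include regressors), fitted by expectation-maximization:

    ```python
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(7)
    y = np.concatenate([rng.poisson(2, 300), rng.poisson(9, 200)])  # low/high risk

    pi, lam = np.array([0.5, 0.5]), np.array([1.0, 5.0])   # initial guesses
    for _ in range(100):                                   # EM iterations
        resp = pi * poisson.pmf(y[:, None], lam)           # E-step: responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        pi = resp.mean(axis=0)                             # M-step: mixing weights
        lam = (resp * y[:, None]).sum(axis=0) / resp.sum(axis=0)  # component rates

    print(pi, lam)   # clusters individuals and estimates component-wise rates
    ```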

  18. Comparison of Predictive Models for the Early Diagnosis of Diabetes.

    Science.gov (United States)

    Jahani, Meysam; Mahdavi, Mahdi

    2016-04-01

    This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. We used memetic algorithms to update weights and to improve the prediction accuracy of the models. In the first step, the optimum values for neural network parameters such as momentum rate, transfer function, and error function were obtained through trial and error and based on the results of previous studies. In the second step, the optimum parameters were applied to memetic algorithms in order to improve the accuracy of prediction. This preliminary analysis showed that the accuracy of the neural networks was 88%. In the third step, the accuracy of the neural network models was improved using a memetic algorithm, and the resulting model was compared with a logistic regression model using a confusion matrix and receiver operating characteristic (ROC) curve. The memetic algorithm improved the accuracy from 88.0% to 93.2%. We also found that the memetic algorithm model had higher accuracy than the genetic algorithm model and a regression model. Among the models, the regression model had the least accuracy. For the memetic algorithm model, the sensitivity, specificity, positive predictive value, negative predictive value, and area under the ROC curve were 96.2%, 95.3%, 93.8%, 92.4%, and 0.958, respectively. The results of this study provide a basis to design a decision support system for risk management and planning of care for individuals at risk of diabetes.

  19. Can Fetal Limb Soft Tissue Measurements in the Third Trimester Predict Neonatal Adiposity?

    Science.gov (United States)

    Moore, Gaea S; Allshouse, Amanda A; Fisher, Barbra M; Kahn, Bronwen F; Hernandez, Teri L; Reece, Melanie S; Reynolds, Regina M; Lee, Wesley; Barbour, Linda A; Galan, Henry L

    2016-09-01

    Neonatal adiposity is associated with chronic metabolic sequelae such as diabetes and obesity. Identifying fetuses at risk for excess neonatal body fat may lead to research aimed at limiting nutritional excess in the prenatal period. We sought to determine whether fetal arm and leg soft tissue measurements at 28 weeks' gestation were predictive of neonatal percent body fat. Methods: In this prospective observational cohort study of singleton term pregnancies, we performed sonography at 28 and 36 weeks' gestation, including soft tissue measurements of the fetal arm and thigh (fractional limb volume and cross-sectional area). We estimated the neonatal body composition (percent body fat) using anthropometric measurements and air displacement plethysmography. We estimated Spearman correlations between sonographic findings and percent body fat and performed modeling to predict neonatal percent body fat using maternal characteristics and sonographic findings. Our analysis of 44 women yielded a mean maternal age of 30 years, body mass index of 26 kg/m², and birth weight of 3382 g. Mean neonatal percent body fat was 8.1% by skin folds at birth and 12.2% by air displacement plethysmography 2 weeks after birth. Fractional thigh volume measurements at 28 weeks yielded the most accurate model for predicting neonatal percent body fat (R² = 0.697; P = .001), outperforming models that used abdominal circumference (R² = 0.516) and estimated fetal weight (R² = 0.489). Soft tissue measurements of the fetal thigh at 28 weeks correlated better with neonatal percent body fat than currently used sonographic measurements. After validation in a larger cohort, our models may be useful for prenatal intervention strategies aimed at the prevention of excess fetal fat accretion and, potentially, optimization of long-term metabolic health.

  20. Individualized predictions of time to menopause using multiple measurements of antimüllerian hormone.

    Science.gov (United States)

    Gohari, Mahmood Reza; Ramezani Tehrani, Fahime; Chenouri, Shojaeddin; Solaymani-Dodaran, Masoud; Azizi, Fereidoun

    2016-08-01

    The ability of antimüllerian hormone (AMH) to predict age at menopause has been reported in several studies, and a decrease in AMH level has been found to increase the probability of menopause. The rate of decline varies among women, and decline rates also vary between a woman's cycles. As a result, individualized evaluation is required to accurately predict the time of menopause. To this end, we have used the AMH trajectories of individual women to predict each one's age at menopause. From a cohort study, 266 women (ages 20-50 y) who had regular and predictable menstrual cycles at the initiation of the study were randomly selected from among 1,265 women for multiple AMH measurements. Participants were visited at approximately 3-year intervals and followed for an average of 6.5 years. Individual likelihood of menopause was predicted by fitting a shared random-effects joint model to the baseline covariates and the specific AMH trajectory of each woman. In total, 23.7% of the women reached menopause during the follow-up period. The estimated mean (SD) AMH concentration at the time of menopause was 0.05 ng/mL (0.06 ng/mL), compared with 1.36 ng/mL (1.85 ng/mL) for those with a regular menstrual cycle at their last assessment. The decline rate in AMH level varied among age groups, and age was a significant prognostic factor for AMH level. Individualized predictions of time to menopause were obtained from the fitted model. Longitudinal measurements of AMH will enable physicians to individualize the prediction of menopause, thereby facilitating counseling on the timing of childbearing or the medical management of health issues associated with menopause.

  1. Applications of modeling in polymer-property prediction

    Science.gov (United States)

    Case, F. H.

    1996-08-01

    A number of molecular modeling techniques have been applied for the prediction of polymer properties and behavior. Five examples illustrate the range of methodologies used. A simple atomistic simulation of small polymer fragments is used to estimate drug compatibility with a polymer matrix. The analysis of molecular dynamics results from a more complex model of a swollen hydrogel system is used to study gas diffusion in contact lenses. Statistical mechanics are used to predict conformation dependent properties — an example is the prediction of liquid-crystal formation. The effect of the molecular weight distribution on phase separation in polyalkanes is predicted using thermodynamic models. In some cases, the properties of interest cannot be directly predicted using simulation methods or polymer theory. Correlation methods may be used to bridge the gap between molecular structure and macroscopic properties. The final example shows how connectivity-indices-based quantitative structure-property relationships were used to predict properties for candidate polyimids in an electronics application.

  2. Artificial Neural Network Model for Predicting Compressive

    OpenAIRE

    Salim T. Yousif; Salwa M. Abdullah

    2013-01-01

      Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, estimating concrete strength at an early age is highly desirable. This study presents an effort to apply neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum...

  3. ARMA modelling of neutron stochastic processes with large measurement noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Kostic, Lj.; Pesic, M.

    1994-01-01

    An autoregressive moving average (ARMA) model of the neutron fluctuations with large measurement noise is derived from Langevin stochastic equations and validated using time series data obtained during prompt neutron decay constant measurements at the zero power reactor RB in Vinca. Model parameters are estimated using the maximum likelihood (ML) off-line algorithm and an adaptive pole estimation algorithm based on the recursive prediction error method (RPE). The results show that subcriticality can be determined from real data with high measurement noise using a much shorter statistical sample than standard methods require. (author)
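
    A minimal sketch of ARMA estimation on a noisy series, assuming statsmodels; the orders and noise level are illustrative, not the reactor-specific values. It exploits the fact that an AR(1) signal observed in additive white noise is exactly an ARMA(1,1) process, which is why an ARMA fit can recover the dynamics despite the noise.

      # Hedged sketch: ML fit of an ARMA model to a signal buried in noise.
      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(1)
      n = 2000
      x = np.zeros(n)
      for t in range(1, n):                 # AR(1) "neutron fluctuation" signal
          x[t] = 0.9 * x[t - 1] + rng.normal()
      y = x + 5.0 * rng.normal(size=n)      # large additive measurement noise

      res = ARIMA(y, order=(1, 0, 1)).fit() # AR(1) + white noise == ARMA(1,1)
      print(res.params)                     # ML estimates of the coefficients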

  4. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and selection of the model for the current time provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model framework for the prediction of solar radiation is proposed. The framework starts from the assumption that several patterns are embedded in the solar radiation series. To extract the underlying patterns, the solar radiation series is first segmented into smaller subsequences, and the subsequences are grouped into different clusters. For each cluster, an appropriate prediction model is trained. A pattern identification procedure is then used to identify the pattern that fits the current period, and based on this pattern the corresponding prediction model is applied to obtain the predicted value. The prediction results of the proposed framework are compared to those of other techniques, and the proposed framework is shown to provide superior performance.
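
    The framework lends itself to a compact sketch: segment the series into fixed-length windows, cluster the windows, train one predictor per cluster, and route the current window to its cluster's model. The sketch below assumes scikit-learn, a 12-step window, and four clusters; all are arbitrary choices, and the synthetic series merely mimics a diurnal cycle.

      # Hedged sketch of the cluster-then-predict multi-model idea.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      t = np.arange(5000)
      series = np.maximum(0, np.sin(2 * np.pi * t / 24)) + 0.1 * rng.normal(size=t.size)

      w = 12                                        # window length (assumed)
      X = np.array([series[i:i + w] for i in range(len(series) - w)])
      y = series[w:]                                # next value to predict

      km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
      models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
                for c in range(4)}

      window = series[-w:].reshape(1, -1)           # current period
      c = km.predict(window)[0]                     # pattern identification step
      print("forecast:", models[c].predict(window)[0])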

  5. Prediction models for transfer of arsenic from soil to corn grain (Zea mays L.).

    Science.gov (United States)

    Yang, Hua; Li, Zhaojun; Long, Jian; Liang, Yongchao; Xue, Jianming; Davis, Murray; He, Wenxiang

    2016-04-01

    In this study, the transfer of arsenic (As) from soil to corn grain was investigated in 18 soils collected from throughout China. The soils were treated with three concentrations of As, and the transfer characteristics were investigated for the corn cultivar Zhengdan 958 in a greenhouse experiment. Through stepwise multiple linear regression analysis, prediction models were developed relating the As bioconcentration factor (BCF) of Zhengdan 958 to soil pH, organic matter (OM) content, and cation exchange capacity (CEC). The possibility of applying the Zhengdan 958 model to other cultivars was tested through a cross-cultivar extrapolation approach. The results showed that the As concentration in corn grain was positively correlated with soil pH. When the prediction model was applied to non-model cultivars, the ratios between predicted and measured BCF values fell within a twofold interval and were close to a 1:1 relationship. It was also found that the prediction model (Log[BCF] = 0.064 pH - 2.297) could effectively reduce the measured BCF variability for all non-model corn cultivars. This is the first such model developed for predicting the As concentration in crop grain from soil properties, and it will be useful for understanding As risk in the soil environment.
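
    A worked example of applying the reported transfer model, assuming the logarithm is base 10 (the abstract does not say) and using hypothetical soil values:

      # Predicted grain As = BCF x soil As, with Log[BCF] = 0.064 pH - 2.297.
      def predict_grain_as(soil_as_mg_kg: float, ph: float) -> float:
          bcf = 10 ** (0.064 * ph - 2.297)   # bioconcentration factor
          return bcf * soil_as_mg_kg         # predicted grain As, mg/kg

      # Hypothetical soil with 20 mg/kg As at pH 6.5 -> about 0.26 mg/kg in grain.
      print(predict_grain_as(20.0, 6.5))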

  6. Predicting People's Environmental Behaviour: Theory of Planned Behaviour and Model of Responsible Environmental Behaviour

    Science.gov (United States)

    Chao, Yu-Long

    2012-01-01

    Using different measures of self-reported and other-reported environmental behaviour (EB), two important theoretical models explaining EB--Hines, Hungerford and Tomera's model of responsible environmental behaviour (REB) and Ajzen's theory of planned behaviour (TPB)--were compared regarding the fit between model and data, predictive ability,…

  7. Prediction of human core body temperature using non-invasive measurement methods

    Science.gov (United States)

    Niedermann, Reto; Wyss, Eva; Annaheim, Simon; Psikuta, Agnes; Davey, Sarah; Rossi, René Michel

    2014-01-01

    The measurement of core body temperature is an efficient method for monitoring heat stress amongst workers in hot conditions. However, invasive measurement of core body temperature (e.g. rectal, intestinal, oesophageal temperature) is impractical for such applications. Therefore, the aim of this study was to define relevant non-invasive measures to predict core body temperature under various conditions. We conducted two human subject studies with different experimental protocols, different environmental temperatures (10 °C, 30 °C) and different subjects. In both studies the same non-invasive measurement methods (skin temperature, skin heat flux, heart rate) were applied. A principal component analysis was conducted to extract independent factors, which were then used in a linear regression model. We identified six parameters (three skin temperatures, two skin heat fluxes and heart rate), which were included for the calculation of two factors. The predictive value of these factors for core body temperature was evaluated by a multiple regression analysis. The calculated root mean square deviation (rmsd) was in the range from 0.28 °C to 0.34 °C for all environmental conditions. These errors are similar to previous models using non-invasive measures to predict core body temperature. The results from this study illustrate that multiple physiological parameters (e.g. skin temperature and skin heat fluxes) are needed to predict core body temperature. In addition, the physiological measurements chosen in this study and the algorithm defined in this work are potentially applicable as real-time core body temperature monitoring to assess health risk in a broad range of working conditions.
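
    A minimal sketch of the pipeline described here, assuming scikit-learn: six non-invasive inputs (three skin temperatures, two skin heat fluxes, heart rate) are reduced to two principal components that feed a linear model for core temperature. The data are simulated stand-ins, so the printed error is not comparable to the study's 0.28-0.34 °C.

      # Hedged sketch: PCA factor extraction followed by linear regression.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(3)
      X = rng.normal(size=(300, 6))                 # 6 non-invasive measurements
      core = 37.0 + 0.3 * X[:, 0] - 0.2 * X[:, 3] + 0.1 * rng.normal(size=300)

      model = make_pipeline(PCA(n_components=2), LinearRegression())
      model.fit(X, core)
      pred = model.predict(X)
      rmsd = np.sqrt(np.mean((pred - core) ** 2))   # root mean square deviation
      print(f"RMSD: {rmsd:.2f} °C")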

  8. External validation of multivariable prediction models: a systematic review of methodological conduct and reporting

    Science.gov (United States)

    2014-01-01

    Background: Before considering whether to use a multivariable (diagnostic or prognostic) prediction model, it is essential that its performance be evaluated in data that were not used to develop the model (referred to as external validation). We critically appraised the methodological conduct and reporting of external validation studies of multivariable prediction models. Methods: We conducted a systematic review of articles describing some form of external validation of one or more multivariable prediction models indexed in PubMed core clinical journals published in 2010. Study data were extracted in duplicate on design, sample size, handling of missing data, reference to the original study developing the prediction models and predictive performance measures. Results: 11,826 articles were identified and 78 were included for full review, which described the evaluation of 120 prediction models in participant data that were not used to develop the model. Thirty-three articles described both the development of a prediction model and an evaluation of its performance on a separate dataset, and 45 articles described only the evaluation of an existing published prediction model on another dataset. Fifty-seven percent of the prediction models were presented and evaluated as simplified scoring systems. Sixteen percent of articles failed to report the number of outcome events in the validation datasets. Fifty-four percent of studies made no explicit mention of missing data. Sixty-seven percent did not report evaluating model calibration, whilst most studies evaluated model discrimination. It was often unclear whether the reported performance measures were for the full regression model or for the simplified models. Conclusions: The vast majority of studies describing some form of external validation of a multivariable prediction model were poorly reported, with key details frequently not presented. The validation studies were characterised by poor design, inappropriate handling

  9. Comparison of measured and predicted performance of a SIS waveguide mixer at 345 GHz

    Science.gov (United States)

    Honingh, C. E.; Delange, G.; Dierichs, M. M. T. M.; Schaeffer, H. H. A.; Wezelman, J.; Vandekuur, J.; Degraauw, T.; Klapwijk, T. M.

    1992-01-01

    The measured gain and noise of a SIS waveguide mixer at 345 GHz have been compared with theoretical values calculated from quantum mixer theory using a three-port model. As the mixing element, we use a series array of two Nb-Al2O3-Nb SIS junctions. The area of each junction is 0.8 μm² and the normal state resistance is 52 Ω. The embedding impedance of the mixer has been determined from the pumped DC I-V curves of the junction and compared to results from scale model measurements (105×); good agreement was obtained. The measured mixer gain, however, is a factor of 0.45 ± 0.5 lower than the theoretically predicted gain, and the measured mixer noise temperature is a factor of 4-5 higher than the calculated one. These discrepancies are independent of pump power and hold over a broad range of tuning conditions.

  10. Posterior Predictive Model Checking for Multidimensionality in Item Response Theory

    Science.gov (United States)

    Levy, Roy; Mislevy, Robert J.; Sinharay, Sandip

    2009-01-01

    If data exhibit multidimensionality, key conditional independence assumptions of unidimensional models do not hold. The current work pursues posterior predictive model checking, a flexible family of model-checking procedures, as a tool for criticizing models due to unaccounted for dimensions in the context of item response theory. Factors…

  11. Model predictive control of a crude oil distillation column

    Directory of Open Access Journals (Sweden)

    Morten Hovd

    1999-04-01

    This paper describes the project of designing and implementing model-based predictive control on the vacuum distillation column at the Nynäshamn Refinery of Nynäs AB. It details the modeling for the model-based control, covers the controller implementation, and documents the benefits gained from the model-based controller.

  12. Measurement control program at model facility

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    A measurement control program for the model plant is described. The discussion includes the technical basis for such a program, the application of measurement control principles to each measurement, and the use of special experiments to estimate measurement error parameters for difficult-to-measure materials. The discussion also describes the statistical aspects of the program, and the documentation procedures used to record, maintain, and process the basic data.

  13. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    Science.gov (United States)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on the prediction from a single model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri, combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which is in turn superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
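
    A deliberately simplified sketch of the averaging step, assuming Gaussian member errors with a fixed variance: ensemble members are weighted by their likelihood against training-period stage observations, and the forecast is the weighted mean. A full BMA treatment (e.g. Raftery-style) would also estimate per-member variances by EM; everything below is simulated.

      # Hedged sketch: likelihood-weighted model averaging of an ensemble.
      import numpy as np

      rng = np.random.default_rng(4)
      obs_train = rng.normal(10, 1, size=50)                 # observed stages (toy)
      ens_train = obs_train + rng.normal(0, [[0.3], [0.8], [1.5]], size=(3, 50))

      sigma = 1.0                                            # assumed error sd
      loglik = -0.5 * np.sum((ens_train - obs_train) ** 2, axis=1) / sigma**2
      w = np.exp(loglik - loglik.max())
      w /= w.sum()                                           # BMA weights

      ens_new = np.array([10.2, 10.6, 11.4])                 # members' new forecasts
      print("BMA stage forecast:", w @ ens_new)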

  14. Empirical modelling to predict the refractive index of human blood

    Science.gov (United States)

    Yahya, M.; Saghir, M. Z.

    2016-02-01

    Optical techniques used for the measurement of the optical properties of blood are of great interest in clinical diagnostics. Blood analysis is a routine procedure used in medical diagnostics to confirm a patient’s condition. Measuring the optical properties of blood is difficult due to the non-homogenous nature of the blood itself. In addition, there is considerable variation in the refractive indices reported in the literature. These are the reasons that motivated the researchers to develop a mathematical model that can be used to predict the refractive index of human blood as a function of concentration, temperature and wavelength. The experimental measurements were conducted on mimicking phantom hemoglobin samples using the Abbemat Refractometer. Analysis of the results revealed a linear relationship between the refractive index and concentration as well as temperature, and a non-linear relationship between refractive index and wavelength. These results are in agreement with those found in the literature. In addition, a new formula was developed based on empirical modelling, which suggests that temperature and wavelength coefficients be added to the Barer formula. Verification of this correlation confirmed its ability to determine refractive index and/or blood hematocrit values with appropriate clinical accuracy.
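
    A sketch of the kind of empirical fit described, assuming scipy and a Cauchy-style 1/λ² term for the wavelength dispersion: refractive index linear in concentration and temperature, nonlinear in wavelength. All coefficients and data below are invented placeholders; the paper's actual extended-Barer coefficients are not reproduced.

      # Hedged sketch: fitting n(C, T, lambda) = a + b*C + c*T + d/lambda^2.
      import numpy as np
      from scipy.optimize import curve_fit

      def n_model(X, a, b, c, d):
          C, T, lam = X                      # g/dL, deg C, nm
          return a + b * C + c * T + d / lam**2

      rng = np.random.default_rng(5)
      C = rng.uniform(0, 15, 200)
      T = rng.uniform(20, 40, 200)
      lam = rng.uniform(450, 700, 200)
      n_obs = (1.333 + 0.0019 * C - 1e-4 * T + 3000 / lam**2
               + 1e-4 * rng.normal(size=200))   # synthetic "measurements"

      popt, _ = curve_fit(n_model, (C, T, lam), n_obs)
      print(dict(zip("abcd", popt)))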

  15. Empirical modelling to predict the refractive index of human blood

    International Nuclear Information System (INIS)

    Yahya, M; Saghir, M Z

    2016-01-01

    Optical techniques used for the measurement of the optical properties of blood are of great interest in clinical diagnostics. Blood analysis is a routine procedure used in medical diagnostics to confirm a patient’s condition. Measuring the optical properties of blood is difficult due to the non-homogenous nature of the blood itself. In addition, there is considerable variation in the refractive indices reported in the literature. These are the reasons that motivated the researchers to develop a mathematical model that can be used to predict the refractive index of human blood as a function of concentration, temperature and wavelength. The experimental measurements were conducted on mimicking phantom hemoglobin samples using the Abbemat Refractometer. Analysis of the results revealed a linear relationship between the refractive index and concentration as well as temperature, and a non-linear relationship between refractive index and wavelength. These results are in agreement with those found in the literature. In addition, a new formula was developed based on empirical modelling, which suggests that temperature and wavelength coefficients be added to the Barer formula. Verification of this correlation confirmed its ability to determine refractive index and/or blood hematocrit values with appropriate clinical accuracy. (paper)

  16. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
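
    A compact sketch of the meta-model idea, assuming scikit-learn and fully simulated genotypes: a dense Ridge predictor, a sparse LASSO predictor, and an external (GWAMA-style) risk score are combined by regressing the phenotype on the three predictions in a training split. The risk-score construction and all sizes are invented stand-ins for the study's cohorts.

      # Hedged sketch: stacking whole-genome predictors with a risk score.
      import numpy as np
      from sklearn.linear_model import Ridge, Lasso, LinearRegression

      rng = np.random.default_rng(6)
      G = rng.binomial(2, 0.3, size=(500, 1000)).astype(float)  # genotypes 0/1/2
      beta = np.zeros(1000)
      beta[:50] = rng.normal(0, 0.1, 50)                        # 50 causal SNPs
      pheno = G @ beta + rng.normal(size=500)
      risk_score = G[:, :50] @ beta[:50] + 0.5 * rng.normal(size=500)  # "GWAMA" proxy

      train, test = slice(0, 400), slice(400, 500)
      preds = np.column_stack([
          Ridge(alpha=10).fit(G[train], pheno[train]).predict(G),   # dense model
          Lasso(alpha=0.05).fit(G[train], pheno[train]).predict(G), # sparse model
          risk_score,                                               # external score
      ])
      meta = LinearRegression().fit(preds[train], pheno[train])     # the meta-model
      r = np.corrcoef(meta.predict(preds[test]), pheno[test])[0, 1]
      print("meta-model test correlation:", round(r, 3))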

  17. An employer brand predictive model for talent attraction and retention

    Directory of Open Access Journals (Sweden)

    Annelize Botha

    2011-11-01

    Orientation: In an ever-shrinking global talent pool, organisations use employer brand to attract and retain talent; however, in the absence of theoretical pointers, many organisations are losing out on a powerful business tool by not developing or maintaining their employer brand correctly. Research purpose: This study explores the current state of knowledge about employer brand and identifies the various employer brand building blocks, which are conceptually integrated in a predictive model. Motivation for the study: The need for scientific progress through the accurate representation of a set of employer brand phenomena and propositions, which can be empirically tested, motivated this study. Research design, approach and method: This study was nonempirical in approach and searched for linkages between theoretical concepts by making use of relevant contextual data. Theoretical propositions which explain the identified linkages were developed for the purpose of further empirical research. Main findings: Key findings suggested that employer brand is influenced by target group needs, a differentiated Employer Value Proposition (EVP), the people strategy, brand consistency, communication of the employer brand and measurement of Human Resources (HR) employer branding efforts. Practical/managerial implications: The predictive model provides corporate leaders and their human resource functionaries a theoretical pointer relative to employer brand which could guide more effective talent attraction and retention decisions. Contribution/value add: This study adds to the small base of research available on employer brand and contributes to both scientific progress and an improved practical understanding of factors which influence employer brand.

  18. Hybrid CFD/CAA Modeling for Liftoff Acoustic Predictions

    Science.gov (United States)

    Strutzenberg, Louise L.; Liever, Peter A.

    2011-01-01

    This paper presents development efforts at the NASA Marshall Space Flight Center to establish a hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) simulation system for launch vehicle liftoff acoustics environment analysis. Acoustic prediction engineering tools based on empirical jet acoustic strength and directivity models or scaled historical measurements are of limited value in efforts to proactively design and optimize launch vehicles and launch facility configurations for liftoff acoustics. CFD-based modeling approaches are now able to capture the important details of the vehicle-specific plume flow environment, identify the noise generation sources, and allow assessment of the influence of launch pad geometric details and sound mitigation measures such as water injection. However, CFD methodologies are numerically too dissipative to accurately capture the propagation of the acoustic waves in the large CFD models. The hybrid CFD/CAA approach combines the high-fidelity CFD analysis, capable of identifying the acoustic sources, with a fast and efficient Boundary Element Method (BEM) that accurately propagates the acoustic field from the source locations. The BEM approach was chosen for its ability to properly account for reflections and scattering of acoustic waves from launch pad structures. The paper presents an overview of the technology components of the CFD/CAA framework and discusses plans for demonstration and validation against test data.

  1. Measurement of the Red Blood Cell Distribution Width Improves the Risk Prediction in Cardiac Resynchronization Therapy

    Directory of Open Access Journals (Sweden)

    András Mihály Boros

    2016-01-01

    Objectives. Increases in red blood cell distribution width (RDW) and NT-proBNP (N-terminal pro-B-type natriuretic peptide) predict the mortality of chronic heart failure patients undergoing cardiac resynchronization therapy (CRT). It was hypothesized that RDW is independent of, and possibly even superior to, NT-proBNP for long-term mortality prediction. Design. The blood counts and serum NT-proBNP levels of 134 patients undergoing CRT were measured. Multivariable Cox regression models were applied and reclassification analyses were performed. Results. After separate adjustment to the basic model of left bundle branch block, beta blocker therapy, and serum creatinine, both RDW > 13.35% and NT-proBNP > 1975 pg/mL predicted the 5-year mortality (n = 57). In the final model including all variables, RDW [HR = 2.49 (1.27–4.86); p = 0.008] remained a significant predictor, whereas NT-proBNP [HR = 1.18 (0.93–3.51); p = 0.07] lost its predictive value. On addition of the RDW measurement, a 64% net reclassification improvement and a 3% integrated discrimination improvement were achieved over the NT-proBNP-adjusted basic model. Conclusions. Increased RDW levels accurately predict the long-term mortality of CRT patients independently of NT-proBNP. Reclassification analysis revealed that the RDW improves risk stratification and could enhance optimal patient selection for CRT.

  2. Trend modelling of wave parameters and application in onboard prediction of ship responses

    DEFF Research Database (Denmark)

    Montazeri, Najmeh; Nielsen, Ulrik Dam; Jensen, J. Juncher

    2015-01-01

    This paper presents a trend analysis for prediction of sea state parameters onboard ships during voyages. Given these parameters, together with a JONSWAP wave spectrum model and the relevant transfer functions, predictions of wave-induced ship responses are made. The procedure is tested with full-scale data from an in-service container ship. Comparison between predictions and actual measurements implies good agreement in general. This method can be an efficient way to improve decision support on board ships.
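
    Given estimated sea state parameters, the response prediction needs a wave spectrum; the sketch below evaluates the standard JONSWAP form (peak enhancement γ = 3.3 by default), which could then be multiplied by squared transfer functions to obtain response spectra. The parameter values here are illustrative, not taken from the paper.

      # JONSWAP spectrum S(f) for a given peak frequency fp.
      import numpy as np

      def jonswap(f, fp, alpha=0.0081, gamma=3.3, g=9.81):
          sigma = np.where(f <= fp, 0.07, 0.09)       # standard width parameters
          r = np.exp(-((f - fp) ** 2) / (2 * sigma**2 * fp**2))
          return (alpha * g**2 * (2 * np.pi) ** -4 * f ** -5.0
                  * np.exp(-1.25 * (fp / f) ** 4) * gamma**r)

      f = np.linspace(0.03, 0.3, 100)                 # Hz
      S = jonswap(f, fp=0.1)                          # sea state with 10 s peak period
      print("spectral peak:", S.max())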

  3. Predictive models for acute kidney injury following cardiac surgery.

    Science.gov (United States)

    Demirjian, Sevag; Schold, Jesse D; Navia, Jose; Mastracci, Tara M; Paganini, Emil P; Yared, Jean-Pierre; Bashour, Charles A

    2012-03-01

    Accurate prediction of cardiac surgery-associated acute kidney injury (AKI) would improve clinical decision making and facilitate timely diagnosis and treatment. The aim of the study was to develop predictive models for cardiac surgery-associated AKI using presurgical and combined pre- and intrasurgical variables. Prospective observational cohort. 25,898 patients who underwent cardiac surgery at Cleveland Clinic in 2000-2008. Presurgical and combined pre- and intrasurgical variables were used to develop predictive models. Dialysis therapy and a composite of doubling of serum creatinine level or dialysis therapy within 2 weeks (or discharge if sooner) after cardiac surgery. Incidences of dialysis therapy and the composite of doubling of serum creatinine level or dialysis therapy were 1.7% and 4.3%, respectively. Kidney function parameters were strong independent predictors in all 4 models. Surgical complexity, reflected by the type of surgery and history of previous cardiac surgery, was a robust predictor in models based on presurgical variables. However, the inclusion of intrasurgical variables accounted for all of the variance explained by procedure-related information. Models predictive of dialysis therapy showed good calibration and superb discrimination; a combined (pre- and intrasurgical) model performed better than the presurgical model alone (C statistics, 0.910 and 0.875, respectively). Models predictive of the composite end point also had excellent discrimination with both presurgical and combined (pre- and intrasurgical) variables (C statistics, 0.797 and 0.825, respectively). However, the presurgical model predictive of the composite end point showed suboptimal calibration. Validation of the predictive models in other cohorts is required before wide-scale application. We developed and internally validated 4 new models that accurately predict cardiac surgery-associated AKI. These models are based on readily available clinical information and can be used for patient counseling, clinical

  4. Modeling number of claims and prediction of total claim amount

    Science.gov (United States)

    Acar, Aslıhan Şentürk; Karabey, Uǧur

    2017-07-01

    In this study we focus on the annual number of claims in a private health insurance data set belonging to a local insurance company in Turkey. In addition to the Poisson and negative binomial models, zero-inflated Poisson and zero-inflated negative binomial models are used to model the number of claims in order to account for excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of total claim amount, the predictive performances of the candidate models are compared using root mean square error (RMSE) and mean absolute error (MAE) criteria.
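
    A minimal sketch of the model comparison, assuming statsmodels and simulated claim counts with structural zeros; covariates and rates are invented, and the MAE criterion is omitted for brevity.

      # Hedged sketch: Poisson vs zero-inflated Poisson for claim counts.
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import ZeroInflatedPoisson

      rng = np.random.default_rng(7)
      n = 1000
      X = sm.add_constant(rng.normal(size=(n, 2)))      # stand-in covariates
      mu = np.exp(0.2 + 0.3 * X[:, 1])
      claims = np.where(rng.random(n) < 0.4, 0, rng.poisson(mu))  # excess zeros

      pois = sm.Poisson(claims, X).fit(disp=0)
      zipm = ZeroInflatedPoisson(claims, X).fit(disp=0)
      for name, m in [("Poisson", pois), ("ZIP", zipm)]:
          rmse = np.sqrt(np.mean((m.predict() - claims) ** 2))
          print(name, "RMSE:", round(rmse, 3))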

  5. Assessment of performance of survival prediction models for cancer prognosis

    Directory of Open Access Journals (Sweden)

    Chen Hung-Chia

    2012-07-01

    Abstract. Background: Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient’s class. Although this approach has several drawbacks, it does provide natural performance metrics, such as positive and negative predictive values, that enable unambiguous assessments. Methods: We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results: A public breast cancer dataset was used to compare several performance metrics using the five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers depended strongly on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions: (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in
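
    The two evaluation styles contrasted above can be sketched side by side, assuming lifelines and simulated data: a Cox model scored by the concordance index on continuous survival times, and a binary view obtained by thresholding survival at five years (the threshold and all data below are arbitrary stand-ins).

      # Hedged sketch: survival-time vs survival-threshold evaluation.
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter
      from lifelines.utils import concordance_index

      rng = np.random.default_rng(8)
      df = pd.DataFrame({"gene": rng.normal(size=300), "clin": rng.normal(size=300)})
      df["time"] = rng.exponential(5 * np.exp(-0.5 * df["gene"]))   # years
      df["event"] = (rng.random(300) < 0.7).astype(int)             # 1 = death observed

      cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
      risk = cph.predict_partial_hazard(df)
      # Higher risk should mean shorter survival, hence the sign flip.
      print("C-index:", concordance_index(df["time"], -risk, df["event"]))

      # Survival-time-threshold view: classes defined by surviving past 5 years.
      died_early = (df["time"] < 5) & (df["event"] == 1)
      pred_high_risk = risk > risk.median()
      ppv = (died_early & pred_high_risk).sum() / pred_high_risk.sum()
      print("PPV of high-risk class:", round(ppv, 2))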

  6. Numerical predictions of particle dispersed two-phase flows, using the LSD and SSF models

    International Nuclear Information System (INIS)

    Avila, R.; Cervantes de Gortari, J.; Universidad Nacional Autonoma de Mexico, Mexico City. Facultad de Ingenieria)

    1988-01-01

    A modified version of a numerical scheme suitable for predicting parabolic dispersed two-phase flow is presented. The original version of this scheme was used to predict the test cases discussed during the 3rd workshop on TPF predictions in Belgrade, 1986. In this paper, two particle dispersion models based on the Lagrangian approach are included, predicting test cases 1 and 3 of the 4th workshop. For the prediction of test case 1, the Lagrangian Stochastic Deterministic (LSD) model is used, providing acceptably good results for the mean and turbulent quantities of both the solid and gas phases; however, the computed void fraction distribution does not agree with the measurements at locations away from the inlet, especially near the walls. Test case 3 is predicted using both the LSD and the Stochastic Separated Flow (SSF) models. It was found that the effects of turbulence modulation are large when the LSD model is used, whereas the particles have a negligible influence on the continuous phase if the SSF model is utilized. Predictions of gas phase properties based on both models agree well with measurements; however, the agreement between calculated and measured solid phase properties is less satisfactory. (orig.)

  7. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim: Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions, identify key reasons why model output may differ and discuss the implications that model uncertainty has for policy-guiding applications. Location: The Western Cape of South Africa. Methods: We applied nine of the most widely used modelling techniques to model potential distributions under current ... algorithm when extrapolating beyond the range of data used to build the model. Main conclusions: We highlight an important source of uncertainty in assessments of the impacts of climate ... The effects of these factors should be carefully considered when using this modelling approach to predict species ranges.

  8. Prediction Model for Gastric Cancer Incidence in Korean Population.

    Science.gov (United States)

    Eom, Bang Wool; Joo, Jungnam; Kim, Sohee; Shin, Aesun; Yang, Hye-Ryung; Park, Junghyun; Choi, Il Ju; Kim, Young-Woo; Kim, Jeongseon; Nam, Byung-Ho

    2015-01-01

    Predicting high risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender-specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the development and validation cohorts (C-statistics: 0.764 for men, 0.706 for women). In this study, a prediction model for gastric cancer incidence was developed that displayed a good performance.

  9. AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Kottalanka Srikanth

    2017-07-01

    There is increasing demand for improved service provisioning in hospital resource management. Hospitals work under strict budget constraints while assuring quality care, and achieving quality care within a budget constraint requires an efficient prediction model. Recently, various time-series-based prediction models have been proposed for managing hospital resources such as ambulance monitoring and emergency care. These models are not efficient because they do not consider the nature of the scenario, such as climate conditions; to address this, artificial intelligence is adopted. The issue with existing prediction approaches is that training suffers from local optima error, which induces overhead and affects prediction accuracy. To overcome the local minima error, this work presents a patient inflow prediction model adopting a resilient backpropagation neural network. Experiments were conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcomes show that the proposed model reduces RMSE and MAPE over the existing backpropagation-based artificial neural network, improving the accuracy of prediction and thereby aiding the quality of health care management.

  10. Risk Prediction Model for Severe Postoperative Complication in Bariatric Surgery.

    Science.gov (United States)

    Stenberg, Erik; Cao, Yang; Szabo, Eva; Näslund, Erik; Näslund, Ingmar; Ottosson, Johan

    2018-01-12

    Factors associated with risk for adverse outcome are important considerations in the preoperative assessment of patients for bariatric surgery. As yet, prediction models based on preoperative risk factors have not been able to predict adverse outcome sufficiently well. This study aimed to identify preoperative risk factors and to construct a risk prediction model based on them. Patients who underwent a bariatric surgical procedure in Sweden between 2010 and 2014 were identified from the Scandinavian Obesity Surgery Registry (SOReg). Associations between potential preoperative risk factors and severe postoperative complications were analysed using a logistic regression model. A multivariate model for risk prediction was created and validated in the SOReg for patients who underwent bariatric surgery in Sweden in 2015. Revision surgery (standardized OR 1.19, 95% confidence interval (CI) 1.14-1.24), among other preoperative factors, entered the prediction model. Despite high specificity, the sensitivity of the model was low. Revision surgery, high age, low BMI, large waist circumference, and dyspepsia/GERD were associated with an increased risk for severe postoperative complication. The prediction model based on these factors, however, had a sensitivity that was too low to predict risk in the individual patient case.

  11. Prediction Model for Gastric Cancer Incidence in Korean Population.

    Directory of Open Access Journals (Sweden)

    Bang Wool Eom

    Predicting high risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender-specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the development and validation cohorts (C-statistics: 0.764 for men, 0.706 for women).

  12. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among the various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. To investigate whether there are differences in the performance of machine learning models trained and evaluated across different stages for predicting breast cancer survivability, we used three different machine learning methods to build models that predict breast cancer survivability separately for each stage, and compared them with the traditional joint models built for all stages. We also evaluated the models separately for each stage and together for all stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples from other stages during training did not help; in fact, it made performance worse in some cases. The most important features for predicting survivability were also found to differ between stages. By evaluating the models separately on different stages, we found that performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Evaluation of wave runup predictions from numerical and parametric models

    Science.gov (United States)

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
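
    The parameterized model referred to here follows the well-known Stockdon et al. (2006) formulation; as a hedged sketch, the function below computes the 2% exceedance runup from offshore wave height H0, peak period T, and foreshore beach slope β, with the setup and swash components separated as in the abstract. Treat the coefficients as quoted from that paper rather than from this record.

      # Stockdon et al. (2006) runup parameterization (intermediate beaches).
      import numpy as np

      def runup_r2(H0, T, beta, g=9.81):
          L0 = g * T**2 / (2 * np.pi)                 # deep-water wavelength
          setup = 0.35 * beta * np.sqrt(H0 * L0)      # wave setup component
          swash = np.sqrt(H0 * L0 * (0.563 * beta**2 + 0.004))  # incident + infragravity
          return 1.1 * (setup + swash / 2)            # 2% exceedance runup (m)

      print(runup_r2(H0=3.0, T=12.0, beta=0.08))      # storm-like example, about 2 m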

  14. Measurement Model Specification Error in LISREL Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  15. Predicting changes in volcanic activity through modelling magma ascent rate.

    Science.gov (United States)

    Thomas, Mark; Neuberg, Jurgen

    2013-04-01

    It is a simple fact that changes in volcanic activity happen, and in retrospect they are easy to spot; the dissimilar eruption dynamics of an effusive and an explosive event are hard to miss. Predicting such changes, however, is a much more complicated process. For the style of activity to alter, some part or combination of parts within the system must vary with time: if there is no physical change within the system, why would the change in eruptive activity occur? What is unknown is which parts must change, and by how much. We present the results of a suite of conduit flow models that aim to answer these questions by assessing the influence of individual model parameters, such as the dissolved water content or magma temperature. By altering these variables in a systematic manner, we measure the effect of the changes on the modelled ascent rate. We use the ascent rate because we believe it is a very important indicator that can control the style of eruptive activity. In particular, we found the sensitivity of the ascent rate to small changes in model parameters surprising. Linking these changes to observable monitoring data, in a way that these data could be used as a predictive tool, is the ultimate goal of this work. We will show that changes in ascent rate can be estimated from a particular type of seismicity: low-frequency seismicity, thought to be caused by the brittle failure of melt, is often linked with the movement of magma within a conduit. We show that an acceleration in the rate of low-frequency seismicity can correspond to an increase in the rate of magma movement and be used as an indicator of potential changes in eruptive activity.

  16. Decentralized robust nonlinear model predictive controller for unmanned aerial systems

    Science.gov (United States)

    Garcia Garreton, Gonzalo A.

    The nonlinear and unsteady nature of aircraft aerodynamics, together with the limited practical range of controls and state variables, makes the use of linear control theory inadequate, especially in the presence of external disturbances such as wind. In the classical approach, aircraft are controlled by multiple inner and outer loops, designed separately and sequentially. For unmanned aerial systems in particular, control technology must evolve to a point where autonomy is extended to the entire mission flight envelope. This requires advanced controllers that have sufficient robustness, track complex trajectories, and use all of the vehicle's control capabilities at higher levels of accuracy. In this work, a robust nonlinear model predictive controller is designed to command and control an unmanned aerial system to track complex tight trajectories in the presence of internal and external perturbations. The flight system developed in this work achieves the above performance by using: 1. A nonlinear guidance algorithm that enables the vehicle to follow an arbitrary trajectory shaped by moving points; 2. A formulation that embeds the guidance logic and trajectory information in the aircraft model, avoiding cross coupling and control degradation; 3. An artificial neural network, designed to adaptively estimate and provide aerodynamic and propulsive forces in real time; and 4. A mixed sensitivity approach that enhances the robustness of the nonlinear model predictive controller, overcoming the effects of un-modeled dynamics, external disturbances such as wind, and additive measurement perturbations such as noise and biases. These elements have been integrated and tested in simulation and with previously stored flight test data, and shown to be feasible.

  17. Stand diameter distribution modelling and prediction based on Richards function.

    Directory of Open Access Journals (Sweden)

    Ai-guo Duan

    The objective of this study was to introduce the application of the Richards equation to the modelling and prediction of stand diameter distributions. Long-term repeated measurement data sets, consisting of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata) plantations in southern China, were used: 150 stands served as fitting data, and the other 159 stands were used for testing. The nonlinear regression method (NRM) or maximum likelihood estimation method (MLEM) was applied to estimate the parameters of the models, and the parameter prediction method (PPM) and parameter recovery method (PRM) were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1) the R distribution gave a more accurate simulation than the three-parameter Weibull function; (2) the parameters p, q and r of the R distribution proved to be its scale, location and shape parameters, have a close relationship with stand characteristics, and thus have a good theoretical interpretation; (3) the ordinate of the inflection point of the R distribution is significantly related to its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4) the goodness-of-fit test showed that diameter distributions of unknown stands can be well estimated by applying the R distribution based on PRM, or the combination of PPM and PRM, when only the quadratic mean DBH, or the quadratic mean DBH plus stand age, is known; the non-rejection rates were near 80%, higher than the 72.33% non-rejection rate of the three-parameter Weibull function based on the combination of PPM and PRM.
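
    As a hedged sketch of fitting a Richards-type curve to a cumulative diameter distribution with nonlinear regression (the NRM route), the code below uses scipy's curve_fit on an invented stand table; the parameterization is a generic Richards form with scale p, location q, and shape r, not necessarily the exact form used in the paper.

      # Hedged sketch: nonlinear least-squares fit of a Richards-type CDF.
      import numpy as np
      from scipy.optimize import curve_fit

      def richards_cdf(d, q, r, p):
          # location q, shape r, scale p; approaches 1 for large diameters d
          return (1 + np.exp(-(d - q) / p)) ** (-1 / r)

      diam = np.linspace(6, 30, 13)                   # 2-cm DBH classes (invented)
      freq = np.array([2, 5, 11, 20, 34, 50, 65, 78, 88, 94, 97, 99, 100]) / 100.0

      popt, _ = curve_fit(richards_cdf, diam, freq, p0=[15, 1, 2])
      print(dict(zip(["q", "r", "p"], np.round(popt, 3))))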

  18. Femtocells Sharing Management using mobility prediction model

    OpenAIRE

    Barth, Dominique; Choutri, Amira; Kloul, Leila; Marcé, Olivier

    2013-01-01

    The bandwidth sharing paradigm constitutes an incentive-based solution to the serious capacity management problem faced by operators, as femtocell owners are able to offer QoS-guaranteed network access to mobile users within their femtocell coverage. In this paper, we consider a technico-economic bandwidth sharing model based on a reinforcement learning algorithm. Because such a model does not allow the convergence of the learning algorithm, due to the small size of the femtocells, the mobile users' velo...

  19. Validating predictions from climate envelope models

    Science.gov (United States)

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.

  20. Validating predictions from climate envelope models.

    Directory of Open Access Journals (Sweden)

    James I Watling

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species' distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967-1971 (t1) and evaluated using occurrence data from 1998-2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on