WorldWideScience

Sample records for model predictions measurements

  1. Space Weather: Measurements, Models and Predictions

    Science.gov (United States)

    2014-03-21

    and record high levels of cosmic ray flux. There were broad-ranging terrestrial responses to this inactivity of the Sun. BC was involved in the ... techniques for converting from one coordinate system (e.g., the invariant coordinate system used for the model) to another (e.g., the latitude-radius ...

  2. Unascertained measurement classifying model of goaf collapse prediction

    Institute of Scientific and Technical Information of China (English)

    DONG Long-jun; PENG Gang-jian; FU Yu-hua; BAI Yun-fei; LIU You-fang

    2008-01-01

    Based on an optimized unascertained-classification forecast method, an unascertained measurement classification (UMC) model was established to predict mining-induced goaf collapse. The model's discriminating factors are the influential factors: overburden layer type, overburden layer thickness, complexity of the geologic structure, inclination angle of the coal bed, volume rate of the cavity region, vertical goaf depth from the surface, and spatial superposition of goaf layers. The unascertained measurement (UM) function of each factor was calculated. Each sample awaiting forecast was assigned to the grade whose classification index lies closest, as measured by the UM distance between the sample's synthetic index and the index of each classification. The training samples were tested with the established model, and the classification rate was 100% correct. Furthermore, seven new samples were predicted with the UMC model; the forecast results were fully consistent with the actual situation.
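
The grade-assignment scheme described in the abstract can be sketched as follows. This is a hypothetical, simplified reconstruction: the per-factor membership functions, grade centers, and weights below are illustrative stand-ins, not values from the paper.

```python
import numpy as np

def um_membership(x, centers):
    """Linear unascertained-measurement membership of x over the grades
    whose typical (center) values are given in ascending order."""
    k = len(centers)
    mu = np.zeros(k)
    if x <= centers[0]:
        mu[0] = 1.0
    elif x >= centers[-1]:
        mu[-1] = 1.0
    else:
        i = int(np.searchsorted(centers, x)) - 1
        mu[i] = (centers[i + 1] - x) / (centers[i + 1] - centers[i])
        mu[i + 1] = 1.0 - mu[i]
    return mu

def classify(factors, centers_per_factor, weights, grade_values):
    # weighted synthesis of the per-factor membership vectors
    synth = sum(w * um_membership(x, c)
                for x, c, w in zip(factors, centers_per_factor, weights))
    # assign the grade whose index value is closest to the synthetic index
    score = float(synth @ grade_values)
    return int(np.argmin(np.abs(grade_values - score)))

# Illustrative setup: three factors, three grades (0 = stable ... 2 = collapse-prone)
centers = [[5.0, 20.0, 45.0],    # e.g. vertical goaf depth (m)
           [0.5, 1.5, 2.5],      # e.g. geologic-structure complexity score
           [0.1, 0.35, 0.75]]    # e.g. cavity volume rate
weights = np.array([0.5, 0.3, 0.2])
grade_values = np.array([0.0, 1.0, 2.0])

print(classify([45.0, 2.5, 0.8], centers, weights, grade_values))  # -> 2
```

A sample whose factors all sit at or beyond the highest-grade centers is assigned the collapse-prone grade.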

  3. Model Predictive Control of Wind Turbines using Uncertain LIDAR Measurements

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Soltani, Mohsen; Poulsen, Niels Kjølstad

    2013-01-01

    The problem of model predictive control (MPC) of wind turbines using uncertain LIDAR (LIght Detection And Ranging) measurements is considered. A nonlinear dynamical model of the wind turbine is obtained. We linearize the obtained nonlinear model for different operating points, which are determined...... by the effective wind speed on the rotor disc. We take the wind speed as a scheduling variable. The wind speed is measurable ahead of the turbine using LIDARs; therefore, the scheduling variable is known for the entire prediction horizon. By taking advantage of having future values of the scheduling variable...... on wind speed estimation and measurements from the LIDAR is devised to find an estimate of the delay and compensate for it before it is used in the controller. Comparisons between the MPC with error compensation, the MPC without error compensation and an MPC with re-linearization at each sample point...
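
The scheduling idea (a bank of linearized models indexed by the LIDAR-previewed wind speed, with the scheduling variable known over the whole prediction horizon) can be sketched minimally as below. The state-space matrices and operating wind speeds are illustrative assumptions, not the turbine model from the paper.

```python
import numpy as np

# Hypothetical bank of linearized models: operating wind speed -> (A, B)
models = {
    8.0:  (np.array([[0.95]]), np.array([[0.10]])),
    12.0: (np.array([[0.90]]), np.array([[0.15]])),
    16.0: (np.array([[0.85]]), np.array([[0.20]])),
}
speeds = np.array(sorted(models))

def nearest_model(v):
    """Pick the linearized model whose operating wind speed is closest to v."""
    return models[float(speeds[np.argmin(np.abs(speeds - v))])]

def predict(x0, u_seq, wind_preview):
    """Propagate the state over the horizon, switching the linear model
    according to the previewed wind speed at each step."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    traj = []
    for u, v in zip(u_seq, wind_preview):
        A, B = nearest_model(v)
        x = A @ x + B @ np.atleast_1d(float(u))
        traj.append(float(x[0]))
    return traj

print(predict([1.0], [0.0, 0.0, 0.0], [8.2, 11.7, 15.9]))
```

In a full MPC this scheduled model sequence would define the prediction equations of the quadratic program; here only the state propagation is shown.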

  4. Fuel Conditioning Facility Electrorefiner Model Predictions versus Measurements

    Energy Technology Data Exchange (ETDEWEB)

    D Vaden

    2007-10-01

    Electrometallurgical treatment of spent nuclear fuel is performed in the Fuel Conditioning Facility (FCF) at the Idaho National Laboratory (INL) by electrochemically separating uranium from the fission products and structural materials in a vessel called an electrorefiner (ER). To continue processing without waiting for sample analyses to assess process conditions, an ER process model predicts the composition of the ER inventory and effluent streams via multicomponent, multiphase chemical equilibrium for chemical reactions and a numerical solution to differential equations for electrochemical transport. The results of the process model were compared to the measured electrorefiner data.

  5. Cs-137 fallout in Iceland, model predictions and measurements

    Energy Technology Data Exchange (ETDEWEB)

    Palsson, S.E.; Sigurgeirsson, M.A.; Gudnason, K. [Icelandic Radiation Protection Inst. (Iceland); Arnalds, O.; Karlsdottir, I.A. [Agricultural Research Inst. (Iceland); Palsdottir, P. [Icelandic Meteorological Office (Iceland)

    2002-04-01

    Essentially all of the Cs-137 fallout in Iceland came from the atmospheric nuclear weapons tests of the late 1950s and early 1960s; the contribution from the Chernobyl Nuclear Power Plant accident was comparatively small. Measurements of fallout from nuclear weapons tests started in Iceland over 40 years ago, and samples of soil, vegetation and agricultural products have been collected from various places and measured during this period. Considerable variability has been seen in the results, even between places close to each other. This is understandable given the mountainous terrain, changing strong winds and high levels of precipitation. The variability has been especially noticeable in soil samples. The important role of uncultivated rangelands in Icelandic agriculture (e.g. for sheep farming) makes it necessary to estimate deposition for many remote areas. It has thus proven difficult to get a good overview of the distribution of the deposition and its subsequent transfer into agricultural products. Over a year ago an attempt was made to assess the distribution of Cs-137 fallout in Iceland. The approach is based on a model predicting deposition from precipitation data, in a similar manner to that used previously within the Arctic Monitoring and Assessment Programme (AMAP, 1999). One station close to Reykjavik has a time series of Cs-137 deposition data and precipitation data from 1960 onwards. The AMAP deposition model was calibrated for Iceland using deposition and precipitation data from this station. (au)

  6. Measures and Models for Estimating and Predicting Cognitive Fatigue

    Science.gov (United States)

    Trejo, Leonard J.; Kochavi, Rebekah; Kubitz, Karla; Montgomery, Leslie D.; Rosipal, Roman; Matthews, Bryan

    2004-01-01

    We analyzed EEG and ERPs in a fatiguing mental task and created statistical models for single subjects. Seventeen subjects (4 F, 18-38 y) viewed 4-digit problems (e.g., 3+5-2+7=15) on a computer, solved the problems, and pressed keys to respond (intertrial interval = 1 s). Subjects performed until either they felt exhausted or three hours had elapsed. Pre- and post-task measures of mood (Activation Deactivation Adjective Checklist, Visual Analogue Mood Scale) confirmed that fatigue increased and energy decreased over time. We tested response times (RT); amplitudes of ERP components N1, P2, P300, and readiness potentials; and amplitudes of frontal theta and parietal alpha rhythms for change as a function of time. For subjects who completed 3 h (n=9), we analyzed 12 15-min blocks. For subjects who completed at least 1.5 h (n=17), we analyzed the first, middle, and last 100 error-free trials. Mean RT rose from 6.7 s to 8.5 s over time. We found no changes in the amplitudes of ERP components. In both analyses, amplitudes of frontal theta and parietal alpha rose by 30% or more over time. We used 30-channel EEG frequency spectra to model the effects of time in single subjects using a kernel partial least squares classifier. We classified 3.5-s EEG segments as being from the first 100 or the last 100 trials, using random sub-samples of each class. Test-set accuracies ranged from 63.9% to 99.6% correct. Only 2 of 17 subjects had mean accuracies lower than 80%. The results suggest that EEG accurately classifies periods of cognitive fatigue in 90% of subjects.

  7. Model Predictive Control for Integrating Traffic Control Measures

    NARCIS (Netherlands)

    Hegyi, A.

    2004-01-01

    Dynamic traffic control measures, such as ramp metering and dynamic speed limits, can be used to better utilize the available road capacity. Due to the increasing traffic volumes and the increasing number of traffic jams the interaction between the control measures has increased such that local cont...

  9. THM Model Validation: Integrated Assessment of Measured and Predicted Behavior

    Energy Technology Data Exchange (ETDEWEB)

    Blair, S C; Carlson, S R; Wagoner, J; Wagner, R; Vogt, T

    2001-10-10

    This paper presents results of coupled thermal-hydrological-mechanical (THM) simulations of two field-scale tests that are part of the thermal testing program being conducted by the Yucca Mountain Site Characterization Project. The two tests analyzed are the Drift-Scale Test (DST) which is sited in an alcove of the Exploratory Studies Facility at Yucca Mountain, Nevada, and the Large Block Test (LBT) which is sited at Fran Ridge, near Yucca Mountain, Nevada. Both of these tests were designed to investigate coupled thermal-mechanical-hydrological-chemical (TMHC) behavior in a fractured, densely welded ash-flow tuff. The geomechanical response of the rock mass forming the DST and the LBT is analyzed using a coupled THM model. A coupled model for analysis of the DST and LBT has been formulated by linking the 3DEC distinct element code for thermal-mechanical analysis and the NUFT finite element code for thermal-hydrologic analysis. The TH model (NUFT) computes temperatures at preselected times using a model that extends from the surface to the water table. The temperatures computed by NUFT are input to 3DEC, which then computes stresses and deformations. The distinct element method was chosen to permit the inclusion of discrete fractures and explicit modeling of fracture deformations. Shear deformations and normal mode opening of fractures are expected to increase fracture permeability and thereby alter thermal hydrologic behavior in these tests. We have collected fracture data for both the DST and the LBT and have used these data in the formulation of the model of the test. This paper presents a brief discussion of the model formulation, along with comparison of simulated and observed deformations at selected locations within the tests.

  10. Measuring thrust and predicting trajectory in model rocketry

    CERN Document Server

    Courtney, Michael

    2009-01-01

    Methods are presented for measuring thrust using common force sensors and data acquisition to construct a dynamic force plate. A spreadsheet can be used to compute trajectory by integrating the equations of motion numerically. These techniques can be used in college physics courses, and have also been used with high school students concurrently enrolled in algebra 2.
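
The trajectory computation described above can be sketched directly in code instead of a spreadsheet: integrate the 1-D vertical equation of motion with a thrust curve. The thrust curve and rocket parameters below are idealized assumptions, not measured force-plate data, and propellant mass loss is ignored for brevity.

```python
def thrust(t):
    """Idealized thrust curve (N): sharp rise over 0.2 s, tail-off to 1.5 s."""
    if t < 0.2:
        return 30.0 * t / 0.2
    if t < 1.5:
        return 30.0 * (1.5 - t) / 1.3
    return 0.0

def simulate(m=0.1, g=9.81, rho=1.225, cd=0.75, area=0.0005, dt=0.001):
    """Euler-integrate altitude until apogee; returns apogee height (m)."""
    t = v = h = 0.0
    while t < 60.0:
        drag = 0.5 * rho * cd * area * v * abs(v)   # opposes motion
        a = (thrust(t) - m * g - drag) / m
        if h <= 0.0 and a < 0.0:
            a, v = 0.0, 0.0                          # still held on the launch pad
        v += a * dt
        h += v * dt
        t += dt
        if t > 1.5 and v <= 0.0:                     # apogee reached after burnout
            break
    return h

print(f"apogee: {simulate():.1f} m")
```

A finer time step or a higher-order integrator (e.g. RK4) tightens the result; the Euler step mirrors what a spreadsheet row-by-row integration does.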

  11. Acoustic Measurement and Model Predictions for the Aural Nondetectability of Two Night-Vision Goggles

    Science.gov (United States)

    2013-11-01

    Gaston, Jeremy; Mermagen, Tim; ...

    This study evaluates two different night-vision goggles (NVGs) to determine if the devices meet level II...

  12. A Comparison Between Measured and Predicted Hydrodynamic Damping for a Jack-Up Rig Model

    DEFF Research Database (Denmark)

    Laursen, Thomas; Rohbock, Lars; Jensen, Jørgen Juncher

    1996-01-01

    ...methods. In the comparison between the model test results and the theoretical predictions, the hydrodynamic damping proves to be the most important uncertain parameter. It is shown that a relatively large hydrodynamic damping must be assumed in the theoretical calculations in order to predict the measured...

  13. Hybrid predictions of railway induced ground vibration using a combination of experimental measurements and numerical modelling

    Science.gov (United States)

    Kuo, K. A.; Verbraken, H.; Degrande, G.; Lombaert, G.

    2016-07-01

    Along with the rapid expansion of urban rail networks comes the need for accurate predictions of railway induced vibration levels at grade and in buildings. Current computational methods for making predictions of railway induced ground vibration rely on simplifying modelling assumptions and require detailed parameter inputs, which lead to high levels of uncertainty. It is possible to mitigate against these issues using a combination of field measurements and state-of-the-art numerical methods, known as a hybrid model. In this paper, two hybrid models are developed, based on the use of separate source and propagation terms that are quantified using in situ measurements or modelling results. These models are implemented using term definitions proposed by the Federal Railroad Administration and assessed using the specific illustration of a surface railway. It is shown that the limitations of numerical and empirical methods can be addressed in a hybrid procedure without compromising prediction accuracy.
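
A hedged sketch of the hybrid decomposition: in FRA-style terms, the predicted vibration velocity level at a receiver is the decibel sum of a source term (force density, typically quantified in situ) and a propagation term (line-source transfer mobility, here imagined to come from a numerical model). All band values below are illustrative, not measured data.

```python
import numpy as np

bands = np.array([8, 16, 31.5, 63, 125])             # 1/3-octave centre freqs, Hz
force_density = np.array([34, 38, 42, 40, 35])       # dB, e.g. from in situ measurement
transfer_mobility = np.array([-6, -4, -2, -8, -15])  # dB, e.g. from a numerical model

# Adding levels in dB corresponds to multiplying the linear source and
# propagation terms, which is the hybrid model's core operation.
velocity_level = force_density + transfer_mobility
for f, lv in zip(bands, velocity_level):
    print(f"{f:>6} Hz : {lv:5.1f} dB")
```

Either term can be swapped between measured and modelled values, which is exactly the flexibility the hybrid procedure exploits.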

  14. Wideband impedance measurements and modeling of DC motors for EMI predictions

    NARCIS (Netherlands)

    Diouf, F.; Leferink, Frank Bernardus Johannes; Duval, Fabrice; Bensetti, Mohamed

    2015-01-01

    In electromagnetic interference prediction, dc motors are usually modeled as a source and a series impedance. Previous research includes only the impedance of the armature, while neglecting the effect of the motor's rotation. This paper aims at measuring and modeling the wideband impedance of a dc...

  15. Assessing the performance of prediction models: a framework for traditional and novel measures

    DEFF Research Database (Denmark)

    Steyerberg, Ewout W; Vickers, Andrew J; Cook, Nancy R;

    2010-01-01

    The performance of prediction models can be assessed using a variety of methods and metrics. Traditional measures for binary and survival outcomes include the Brier score to indicate overall model performance, the concordance (or c) statistic for discriminative ability (or area under the receiver...
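
The two traditional measures named in the abstract are easy to compute from scratch; the predicted risks and binary outcomes below are toy values for illustration only.

```python
def brier_score(p, y):
    """Mean squared difference between predicted risk and observed outcome."""
    return sum((pi - yi) ** 2 for pi, yi in zip(p, y)) / len(y)

def c_statistic(p, y):
    """Concordance: probability that a random event case receives a higher
    predicted risk than a random non-event case (ties count 1/2)."""
    events = [pi for pi, yi in zip(p, y) if yi == 1]
    nonevents = [pi for pi, yi in zip(p, y) if yi == 0]
    pairs = [(1.0 if a > b else 0.5 if a == b else 0.0)
             for a in events for b in nonevents]
    return sum(pairs) / len(pairs)

p = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]   # predicted risks
y = [1,   1,   0,   1,   0,   0]     # observed binary outcomes

print(round(brier_score(p, y), 3))   # -> 0.167
print(round(c_statistic(p, y), 3))   # -> 0.889
```

A Brier score of 0 and a c-statistic of 1 would indicate perfect overall performance and perfect discrimination, respectively.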

  17. Predictability of the geospace variations and measuring the capability to model the state of the system

    Science.gov (United States)

    Pulkkinen, A.

    2012-12-01

    Empirical modeling has been the workhorse of the past decades in predicting the state of the geospace. For example, numerous empirical studies have shown that global geoeffectiveness indices such as Kp and Dst are generally well predictable from the solar wind input. These successes have been facilitated partly by the strongly externally driven nature of the system. Although characterizing the general state of the system is valuable and empirical modeling will continue playing an important role, refined physics-based quantification of the state of the system has been the obvious next step in moving toward more mature science. Importantly, more refined and localized products are also needed for space weather purposes. Predictions of local physical quantities are necessary to make physics-based links to the impacts on specific systems. As we introduce more localized predictions of the geospace state, one central question is how predictable these local quantities are. This complex question can be addressed by rigorously measuring model performance against observed data. The space sciences community has made great advances on this topic over the past few years, and there are ongoing efforts in SHINE, CEDAR and GEM to carry out community-wide evaluations of the state-of-the-art solar and heliospheric, ionosphere-thermosphere and geospace models, respectively. These efforts will help establish benchmarks and thus provide a means to measure progress in the field, analogous to the monitoring of improvements in lower-atmospheric weather predictions carried out rigorously since the 1980s. In this paper we discuss some of the latest advancements in predicting local geospace parameters and give an overview of some of the community efforts to rigorously measure model performance. We also briefly discuss some future opportunities for advancing the geospace modeling capability. These will include further development in data assimilation and ensemble...

  18. MEASURED CONCENTRATIONS OF HERBICIDES AND MODEL PREDICTIONS OF ATRAZINE FATE IN THE PATUXENT RIVER ESTUARY

    Science.gov (United States)

    McConnell, Laura L., Jennifer A. Harman-Fetcho and James D. Hagy, III. 2004. Measured Concentrations of Herbicides and Model Predictions of Atrazine Fate in the Patuxent River Estuary. J. Environ. Qual. 33(2):594-604. (ERL,GB X1051). The environmental fate of herbicides i...

  19. Comparison of model predictions for coherence length to in-flight measurements at cruise conditions

    Science.gov (United States)

    Haxter, Stefan; Spehr, Carsten

    2017-03-01

    In this paper, we focus on coherence lengths of pressure fluctuations underneath a turbulent boundary layer on an actual aircraft, measured during a flight test. Coherence lengths of pressure fluctuations have been measured in the past, and various models have been set up to predict their values. However, most of the underlying data were measured at Mach numbers and pressures different from our region of interest, and it is not known whether the models are applicable. Some of these investigations also used unknown alignment procedures between the array and the flow, and it will be shown that this can have a considerable influence on the result. We have performed flight tests at cruising speed and altitude in which we took due account of this alignment by means of an array processing technique capable of determining the flow direction for each frequency bin under consideration. In this paper, one of the data points is evaluated and compared to the prediction models. From the differences, and from the run conditions under which the models' underlying data were measured, several conclusions are drawn concerning scaling effects and the importance of alignment. Two of the prediction models are also adjusted to our measurements.

  20. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    Genotyping-by-sequencing (GBSeq) is becoming a cost-effective genotyping platform for species without available SNP arrays. GBSeq sequences short reads from restriction sites covering a limited part of the genome (e.g., 5-10%) with low sequencing depth per individual (e.g., 5-10X per....... In the current work we show how the correction for measurement error in GBSeq can also be applied in whole-genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction...... for measurement error. We show correct retrieval of the genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data...

  1. Confronting model predictions of carbon fluxes with measurements of Amazon forests subjected to experimental drought.

    Science.gov (United States)

    Powell, Thomas L; Galbraith, David R; Christoffersen, Bradley O; Harper, Anna; Imbuzeiro, Hewlley M A; Rowland, Lucy; Almeida, Samuel; Brando, Paulo M; da Costa, Antonio Carlos Lola; Costa, Marcos Heil; Levine, Naomi M; Malhi, Yadvinder; Saleska, Scott R; Sotta, Eleneide; Williams, Mathew; Meir, Patrick; Moorcroft, Paul R

    2013-10-01

    Considerable uncertainty surrounds the fate of Amazon rainforests in response to climate change. Here, carbon (C) flux predictions of five terrestrial biosphere models (Community Land Model version 3.5 (CLM3.5), Ecosystem Demography model version 2.1 (ED2), Integrated BIosphere Simulator version 2.6.4 (IBIS), Joint UK Land Environment Simulator version 2.1 (JULES) and Simple Biosphere model version 3 (SiB3)) and a hydrodynamic terrestrial ecosystem model (the Soil-Plant-Atmosphere (SPA) model) were evaluated against measurements from two large-scale Amazon drought experiments. Model predictions agreed with the observed C fluxes in the control plots of both experiments, but poorly replicated the responses to the drought treatments. Most notably, with the exception of ED2, the models predicted negligible reductions in aboveground biomass in response to the drought treatments, which was in contrast to an observed c. 20% reduction at both sites. For ED2, the timing of the decline in aboveground biomass was accurate, but the magnitude was too high for one site and too low for the other. Three key findings indicate critical areas for future research and model development. First, the models predicted declines in autotrophic respiration under prolonged drought in contrast to measured increases at one of the sites. Secondly, models lacking a phenological response to drought introduced bias in the sensitivity of canopy productivity and respiration to drought. Thirdly, the phenomenological water-stress functions used by the terrestrial biosphere models to represent the effects of soil moisture on stomatal conductance yielded unrealistic diurnal and seasonal responses to drought.
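
The kind of phenomenological water-stress function criticized in the last finding is commonly a "beta" factor that scales stomatal conductance linearly with soil moisture between a wilting point and a critical threshold. A minimal sketch with illustrative parameter values (not from any of the models compared in the paper):

```python
def beta_factor(theta, theta_wilt=0.10, theta_crit=0.30):
    """Soil-moisture stress factor in [0, 1]: 0 at/below the wilting point,
    1 at/above the critical threshold, linear in between."""
    b = (theta - theta_wilt) / (theta_crit - theta_wilt)
    return min(1.0, max(0.0, b))

def stomatal_conductance(theta, g_max=0.4):
    """Down-regulated conductance (mol m-2 s-1) at volumetric moisture theta."""
    return g_max * beta_factor(theta)

for theta in (0.05, 0.15, 0.25, 0.35):
    print(theta, round(stomatal_conductance(theta), 3))
```

The paper's point is that such instantaneous, moisture-only functions produce unrealistic diurnal and seasonal drought responses, motivating more mechanistic (e.g. hydrodynamic) treatments.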

  2. Hydrological model parameter dimensionality is a weak measure of prediction uncertainty

    Directory of Open Access Journals (Sweden)

    S. Pande

    2015-04-01

    This paper shows that the instability of a hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (the number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.

  3. Comparisons between Hygroscopic Measurements and UNIFAC Model Predictions for Dicarboxylic Organic Aerosol Mixtures

    Directory of Open Access Journals (Sweden)

    Jae Young Lee

    2013-01-01

    Hygroscopic behavior was measured at 12°C over aqueous bulk solutions containing dicarboxylic acids, using a Baratron pressure transducer. Our experimental measurements of water activity for malonic acid solutions (0–10 mol/kg water) and glutaric acid solutions (0–5 mol/kg water) agreed to within 0.6% and 0.8%, respectively, of the predictions using Peng's modified UNIFAC model (except for the 10 mol/kg water value, which differed by 2%). However, for solutions containing mixtures of malonic/glutaric acids, malonic/succinic acids, and glutaric/succinic acids, the disagreements between the measurements and the predictions of the ZSR model or Peng's modified UNIFAC model are higher than for the single-component cases. Measurements of the overall water vapor pressure for 50:50 molar mixtures of malonic/glutaric acids closely followed those for malonic acid alone. For mixtures of malonic/succinic acids and glutaric/succinic acids, the influence of a constant concentration of succinic acid on water uptake became more significant as the concentration of malonic acid or glutaric acid was increased.
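
The ZSR model mentioned above predicts the water content of a mixed solution from single-solute (binary) data: at a given water activity a_w, the water mass is the sum over solutes of n_i / m_i0(a_w), where m_i0 is the binary molality of solute i alone at that activity. The binary-molality fits below are hypothetical placeholders, not the paper's measured curves.

```python
def m0_malonic(aw):   # hypothetical binary molality fit, mol per kg water
    return 25.0 * (1.0 - aw)

def m0_glutaric(aw):  # hypothetical binary molality fit, mol per kg water
    return 20.0 * (1.0 - aw)

def zsr_water_mass(aw, moles):
    """kg of water held by the mixture at water activity aw.
    moles: dict mapping a binary-molality function -> moles of that solute."""
    return sum(n / m0(aw) for m0, n in moles.items())

# 50:50 molar mixture, 0.5 mol of each acid, at aw = 0.90
w = zsr_water_mass(0.90, {m0_malonic: 0.5, m0_glutaric: 0.5})
print(round(w, 3))  # -> 0.45
```

The rule assumes the solutes take up water independently, which is exactly where the mixture discrepancies reported above can arise.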

  4. Comparisons Between Model Predictions and Spectral Measurements of Charged and Neutral Particles on the Martian Surface

    Science.gov (United States)

    Kim, Myung-Hee Y.; Cucinotta, Francis A.; Zeitlin, Cary; Hassler, Donald M.; Ehresmann, Bent; Rafkin, Scot C. R.; Wimmer-Schweingruber, Robert F.; Boettcher, Stephan; Boehm, Eckart; Guo, Jingnan; et al.

    2014-01-01

    Detailed measurements of the energetic particle radiation environment on the surface of Mars have been made by the Radiation Assessment Detector (RAD) on the Curiosity rover since August 2012. RAD is a particle detector that measures the energy spectra of charged particles (10 to approx. 200 MeV/u) and high-energy neutrons (approx. 8 to 200 MeV). The data obtained on the surface of Mars over 300 sols are compared to simulation results using the Badhwar-O'Neill galactic cosmic ray (GCR) environment model and the high-charge and energy transport (HZETRN) code. For the nuclear interactions of primary GCR through the Mars atmosphere and the Curiosity rover, the quantum multiple scattering theory of nuclear fragmentation (QMSFRG) is used. For describing the daily column depth of atmosphere, daily atmospheric pressure measurements at Gale Crater by the MSL Rover Environmental Monitoring Station (REMS) are implemented into the transport calculations. Particle flux at RAD after traversing varying depths of atmosphere depends on the slant angles, and the model accounts for shielding of the RAD "E" dosimetry detector by the rest of the instrument. Detailed comparisons between model predictions and spectral data of various particle types provide validation of the radiation transport models, and suggest that future radiation environments on Mars can be predicted accurately. These contributions lend support to the understanding of radiation health risks to astronauts for the planning of various mission scenarios.

  5. Pesticide volatilization from soil: lysimeter measurements versus predictions of European registration models.

    Science.gov (United States)

    Wolters, André; Linnemann, Volker; Herbst, Michael; Klein, Michael; Schäffer, Andreas; Vereecken, Harry

    2003-01-01

    A comparison was drawn between model predictions and experimentally determined volatilization rates to evaluate the volatilization approaches of European registration models. Volatilization rates of pesticides (14C-labeled parathion-methyl, fenpropimorph, and terbuthylazine, and nonlabeled chlorpyrifos) were determined in a wind-tunnel experiment after simultaneous soil-surface application on a Gleyic Cambisol. Both continuous air sampling, which quantifies volatile losses of 14C-organic compounds and 14CO2 separately, and the detection of soil residues allow for a mass balance of radioactivity of the 14C-labeled pesticides. Recoveries were found to be > 94% of the applied radioactivity. The following descending order of cumulative volatilization was observed: chlorpyrifos > parathion-methyl > terbuthylazine > fenpropimorph. Due to its high air-water partitioning coefficient, nonlabeled chlorpyrifos was found to have the highest cumulative volatilization (44.4%) over the course of the experiment. Volatilization flux rates of up to 993 microg m(-2) h(-1) were measured during the first hours after application. Parameterization of the Pesticide Emission Assessment at Regional and Local Scales (PEARL) model and the Pesticide Leaching Model (PELMO) was performed to mirror the experimental boundary conditions. In general, model predictions deviated markedly from measured volatilization rates and revealed limitations of current volatilization models; for example, the thickness of the uppermost soil compartment has an enormous influence on predicted volatilization losses. Experimental findings revealed soil moisture to be an important factor influencing volatilization from soil, yet its influence was not reflected in the model calculations. Future versions of PEARL and PELMO ought to include improved descriptions of aerodynamic resistances and soil-moisture-dependent soil-air partitioning coefficients.

  6. Prediction and measurement of low-frequency harmonic noise of a hovering model helicopter rotor

    Science.gov (United States)

    Aggarawal, H. R.; Schmitz, F. H.; Boxwell, D. A.

    Far-field acoustic data for a model helicopter rotor have been gathered in a large open-jet, acoustically treated wind tunnel with the rotor operating in hover and out of ground effect. The four-bladed Boeing 360 model rotor with advanced airfoils, planform, and tip shape was run over a range of conditions typical of today's modern helicopter main rotor. Near in-plane acoustic measurements were compared with two independent implementations of classical linear theory. Measured steady thrust and torque were used together with a free-wake analysis (to predict the thrust and drag distributions along the rotor radius) as input to this first-principles theoretical approach. Good agreement between theory and experiment was shown for both amplitude and phase for measurements made at those positions that minimized distortion of the radiated acoustic signature at low frequencies.

  7. Simulation of complex glazing products; from optical data measurements to model based predictive controls

    Energy Technology Data Exchange (ETDEWEB)

    Kohler, Christian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-04-01

    Complex glazing systems such as venetian blinds, fritted glass and woven shades require more detailed optical and thermal input data for their components than specular non light-redirecting glazing systems. Various methods for measuring these data sets are described in this paper. These data sets are used in multiple simulation tools to model the thermal and optical properties of complex glazing systems. The output from these tools can be used to generate simplified rating values or as an input to other simulation tools such as whole building annual energy programs, or lighting analysis tools. I also describe some of the challenges of creating a rating system for these products and which factors affect this rating. A potential future direction of simulation and building operations is model based predictive controls, where detailed computer models are run in real-time, receiving data for an actual building and providing control input to building elements such as shades.

  8. Comparison of model measured runner blade pressure fluctuations with unsteady flow analysis predictions

    Science.gov (United States)

    Magnoli, M. V.

    2016-11-01

    An accurate prediction of pressure fluctuations in Francis turbines has become more and more important over the last years, due to the continuously increasing requirements of wide operating range capability. Depending on the machine operator, Francis turbines are operated at full load, part load, deep part load and speed-no-load. Each of these operating conditions is associated with different flow phenomena and pressure fluctuation levels. The better understanding of the pressure fluctuation phenomena and the more accurate prediction of their amplitude along the hydraulic surfaces can significantly contribute to improve the hydraulic and mechanical design of Francis turbines, their hydraulic stability and their reliability. With the objective to acquire a deeper knowledge about the pressure fluctuation characteristics in Francis turbines and to improve the accuracy of numerical simulation methods used for the prediction of the dynamic fluid flow through the turbine, pressure fluctuations were experimentally measured in a mid specific speed model machine. The turbine runner of a model machine with specific speed around nq,opt = 60 min-1, was instrumented with dynamic pressure transducers at the runner blades. The model machine shaft was equipped with a telemetry system able to transmit the measured pressure values to the data acquisition system. The transient pressure signal was measured at multiple locations on the blade and at several operating conditions. The stored time signal was also evaluated in terms of characteristic amplitude and dominating frequency. The dynamic fluid flow through the hydraulic turbine was numerically simulated with computational fluid dynamics (CFD) for selected operating points. Among others, operating points at full load, part load and deep part load were calculated. For the fluid flow numerical simulations more advanced turbulence models were used, such as the detached eddy simulation (DES) and scale adaptive simulation (SAS). 
At the

  9. Prediction Accuracy in Multivariate Repeated-Measures Bayesian Forecasting Models with Examples Drawn from Research on Sleep and Circadian Rhythms

    Directory of Open Access Journals (Sweden)

    Clark Kogan

    2016-01-01

    In study designs with repeated measures for multiple subjects, population models capturing within- and between-subjects variances enable efficient individualized prediction of outcome measures (response variables) by incorporating individuals' response data through Bayesian forecasting. When measurement constraints preclude reasonable levels of prediction accuracy, additional (secondary) response variables measured alongside the primary response may help to increase prediction accuracy. We investigate this for the case of substantial between-subjects correlation between primary and secondary response variables, assuming negligible within-subjects correlation. We show how to determine the accuracy of primary response predictions as a function of secondary response observations. Given measurement costs for primary and secondary variables, we determine the number of observations that produces, with minimal cost, a fixed average prediction accuracy for a model of subject means. We illustrate this with estimation of subject-specific sleep parameters using polysomnography and wrist actigraphy. We also consider prediction accuracy in an example time-dependent, linear model and derive equations for the optimal timing of measurements to achieve, on average, the best prediction accuracy. Finally, we examine an example involving a circadian rhythm model and show numerically that secondary variables can improve individualized predictions in this time-dependent nonlinear model as well.

  10. Prediction Accuracy in Multivariate Repeated-Measures Bayesian Forecasting Models with Examples Drawn from Research on Sleep and Circadian Rhythms.

    Science.gov (United States)

    Kogan, Clark; Kalachev, Leonid; Van Dongen, Hans P A

    2016-01-01

    In study designs with repeated measures for multiple subjects, population models capturing within- and between-subjects variances enable efficient individualized prediction of outcome measures (response variables) by incorporating individuals' response data through Bayesian forecasting. When measurement constraints preclude reasonable levels of prediction accuracy, additional (secondary) response variables measured alongside the primary response may help to increase prediction accuracy. We investigate this for the case of substantial between-subjects correlation between primary and secondary response variables, assuming negligible within-subjects correlation. We show how to determine the accuracy of primary response predictions as a function of secondary response observations. Given measurement costs for primary and secondary variables, we determine the number of observations that produces, with minimal cost, a fixed average prediction accuracy for a model of subject means. We illustrate this with estimation of subject-specific sleep parameters using polysomnography and wrist actigraphy. We also consider prediction accuracy in an example time-dependent, linear model and derive equations for the optimal timing of measurements to achieve, on average, the best prediction accuracy. Finally, we examine an example involving a circadian rhythm model and show numerically that secondary variables can improve individualized predictions in this time-dependent nonlinear model as well.
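    The shrinkage logic behind Bayesian forecasting of a subject mean can be sketched with a simple normal-normal population model; this is an illustrative simplification, not the authors' full multivariate model, and the function name and all numerical values below are hypothetical:

```python
import numpy as np

def predict_subject_mean(obs, pop_mean, var_between, var_within):
    """Posterior (shrinkage) estimate of a subject's mean, assuming the
    subject mean is drawn from N(pop_mean, var_between) and each observation
    from N(subject_mean, var_within). More observations pull the posterior
    toward the subject's own data and shrink the posterior variance."""
    n = len(obs)
    precision = 1.0 / var_between + n / var_within
    post_var = 1.0 / precision
    post_mean = post_var * (pop_mean / var_between + np.sum(obs) / var_within)
    return post_mean, post_var
```

    With no observations the prediction falls back on the population mean; as n grows, the posterior variance (the prediction accuracy in this sketch) decreases, which is the quantity traded off against measurement cost in the paper.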

  11. A comparison of measurements and CFD model predictions for pollutant dispersion in cities.

    Science.gov (United States)

    Pospisil, J; Katolicky, J; Jicha, M

    2004-12-01

    An accurate description of car movements in an urban area is required for accurate prediction of the air pollution concentration field. A 3-D Eulerian-Lagrangian approach to moving vehicles that takes into account the traffic-induced flow field and turbulence is presented. The approach is based on Computational Fluid Dynamics (CFD) calculations using an Eulerian approach for the continuous phase and a Lagrangian approach for the discrete phase of moving objects, the vehicles. In the first part of the present contribution, the method is applied to pollutant dispersion at a city tunnel outlet in Brno and in a street structure in Hannover, Germany. In the second part, a model of traffic dynamics inside a street intersection in the centre of Brno is presented. This model accounts for the dynamics of traffic lights and the corresponding traffic-generated flow field and emissions in different time intervals during the traffic light sequence. All results of the numerical modelling are compared with field measurements, with very good agreement. The commercial CFD code StarCD was used, into which the Lagrangian model and the traffic dynamics model were integrated.

  12. Added value of serum hormone measurements in risk prediction models for breast cancer for women not using exogenous hormones

    DEFF Research Database (Denmark)

    Hüsing, Anika; Fortner, Renée T; Kühn, Tilman

    2017-01-01

    PURPOSE: Circulating hormone concentrations are associated with breast cancer risk, with well-established associations for postmenopausal women. Biomarkers may represent minimally invasive measures to improve risk prediction models. EXPERIMENTAL DESIGN: We evaluated improvements in discrimination...

  13. Development of Neural Network Model for Predicting Peak Ground Acceleration Based on Microtremor Measurement and Soil Boring Test Data

    National Research Council Canada - National Science Library

    Kerh, T; Lin, J. S; Gunaratnam, D

    2012-01-01

    .... This paper is therefore aimed at developing a neural network model, based on available microtremor measurement and on-site soil boring test data, for predicting peak ground acceleration at a site...

  14. Models for measuring and predicting shareholder value: A study of third party software service providers

    Indian Academy of Sciences (India)

    N Viswanadham; Poornima Luthra

    2005-04-01

    In this study, we use the strategic profit model (SPM) and economic value added (EVA) to measure shareholder value. SPM measures the return on net worth (RONW), which is defined as the return on assets (ROA) multiplied by the financial leverage. EVA is defined as the firm's net operating profit after taxes (NOPAT) minus the capital charge. Both RONW and EVA provide an indication of how much value a firm creates for its shareholders, year on year. With the increasing focus on the creation of shareholder value and core competencies, many companies are outsourcing their information technology (IT) related activities to third-party software companies. Indian software companies have become leaders in providing these services. Companies from several other countries are also competing for the top slot. We use the SPM and EVA models to analyse the four listed players of the software industry using publicly available published data. We compare the financial data obtained from the models, and use peer-average data to provide customized recommendations for each company to improve its shareholder value. Assuming that the companies follow these recommendations, we also predict future RONW and EVA for the companies for the financial year 2005. Finally, we make several recommendations to software providers for competing effectively in the global arena.
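    The two measures defined above reduce to simple arithmetic; a minimal sketch (function names and the figures in the comments are illustrative, not drawn from the four companies studied):

```python
def ronw(net_profit, total_assets, net_worth):
    """Return on net worth via the SPM decomposition:
    ROA (net profit / total assets) times financial leverage
    (total assets / net worth); algebraically, net profit / net worth."""
    roa = net_profit / total_assets
    leverage = total_assets / net_worth
    return roa * leverage

def eva(nopat, invested_capital, wacc):
    """Economic value added: NOPAT minus the capital charge,
    where the capital charge is invested capital times its cost (WACC)."""
    return nopat - invested_capital * wacc
```

    A positive EVA indicates the firm earned more than its cost of capital in that year, which is the sense in which both measures track year-on-year shareholder value creation.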

  15. PIV-measured versus CFD-predicted flow dynamics in anatomically realistic cerebral aneurysm models.

    Science.gov (United States)

    Ford, Matthew D; Nikolov, Hristo N; Milner, Jaques S; Lownie, Stephen P; Demont, Edwin M; Kalata, Wojciech; Loth, Francis; Holdsworth, David W; Steinman, David A

    2008-04-01

    Computational fluid dynamics (CFD) modeling of nominally patient-specific cerebral aneurysms is increasingly being used as a research tool to further understand the development, prognosis, and treatment of brain aneurysms. We have previously developed virtual angiography to indirectly validate CFD-predicted gross flow dynamics against routinely acquired digital subtraction angiograms. Toward a more direct validation, here we compare detailed, CFD-predicted velocity fields against those measured using particle imaging velocimetry (PIV). Two anatomically realistic flow-through phantoms, one a giant internal carotid artery (ICA) aneurysm and the other a basilar artery (BA) tip aneurysm, were constructed of a clear silicone elastomer. The phantoms were placed within a computer-controlled flow loop, programmed with representative flow rate waveforms. PIV images were collected on several anterior-posterior (AP) and lateral (LAT) planes. CFD simulations were then carried out using a well-validated, in-house solver, based on micro-CT reconstructions of the geometries of the flow-through phantoms and inlet/outlet boundary conditions derived from flow rates measured during the PIV experiments. PIV and CFD results from the central AP plane of the ICA aneurysm showed a large stable vortex throughout the cardiac cycle. Complex vortex dynamics, captured by PIV and CFD, persisted throughout the cardiac cycle on the central LAT plane. Velocity vector fields showed good overall agreement. For the BA aneurysm, agreement was more compelling, with both PIV and CFD similarly resolving the dynamics of counter-rotating vortices on both AP and LAT planes. Despite the imposition of periodic flow boundary conditions for the CFD simulations, cycle-to-cycle fluctuations were evident in the BA aneurysm simulations, which agreed well, in terms of both amplitudes and spatial distributions, with cycle-to-cycle fluctuations measured by PIV in the same geometry. The overall good agreement

  16. Air quality measurements versus model predictions: a case study for the Sugozu power plant

    Energy Technology Data Exchange (ETDEWEB)

    A. Korur; C. Derinoz; C. Yurteri [ENVY Energy and Environmental Investments Inc., Ankara (Turkey)

    2003-07-01

    Air quality modeling is one of the tools used in Environmental Impact Assessment (EIA) studies to predict the potential impacts of atmospheric emissions. The main advantage of air quality modeling is the simulation of ground-level concentrations under different conditions (i.e., meteorological variations and other pollutant sources in the vicinity). The accuracy of model predictions, on the other hand, depends mainly on the quality of the input data reflecting meteorological and topographical conditions as well as emission sources. In this regard, model predictions should be supported with monitoring data. In this paper, the predictions of a Gaussian air dispersion model (Industrial Source Complex, ISC) for SO₂ and NO₂, carried out in the vicinity of the Sugozu Power Plant on the coast of Turkey, are compared with the air quality monitoring results for the same region. 2 refs., 3 figs., 2 tabs.
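    The ISC model belongs to the Gaussian plume family; the sketch below shows the underlying steady-state dispersion formula with ground reflection, as a hedged illustration only (it is not the full ISC implementation, and the parameter values used in the test are hypothetical):

```python
import math

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration at crosswind offset y (m)
    and height z (m), for emission rate Q (g/s), wind speed u (m/s),
    effective stack height H (m), and dispersion parameters sigma_y,
    sigma_z (m). Ground reflection is included via the image-source term."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

    In a full model, sigma_y and sigma_z grow with downwind distance according to stability-class curves, which is where meteorological input quality enters the prediction accuracy discussed above.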

  17. Two-dimensional NMR measurement and point dipole model prediction of paramagnetic shift tensors in solids

    Energy Technology Data Exchange (ETDEWEB)

    Walder, Brennan J.; Davis, Michael C.; Grandinetti, Philip J. [Department of Chemistry, Ohio State University, 100 West 18th Avenue, Columbus, Ohio 43210 (United States); Dey, Krishna K. [Department of Physics, Dr. H. S. Gour University, Sagar, Madhya Pradesh 470003 (India); Baltisberger, Jay H. [Division of Natural Science, Mathematics, and Nursing, Berea College, Berea, Kentucky 40403 (United States)

    2015-01-07

    A new two-dimensional Nuclear Magnetic Resonance (NMR) experiment to separate and correlate the first-order quadrupolar and chemical/paramagnetic shift interactions is described. This experiment, which we call the shifting-d echo experiment, allows a more precise determination of tensor principal component values and their relative orientation. It is designed using the recently introduced symmetry pathway concept. A comparison of the shifting-d experiment with earlier proposed methods is presented and experimentally illustrated in the case of the ²H (I = 1) paramagnetic shift and quadrupolar tensors of CuCl₂·2D₂O. The benefits of the shifting-d echo experiment over other methods are a factor of two improvement in sensitivity and the suppression of major artifacts. From the 2D lineshape analysis of the shifting-d spectrum, the ²H quadrupolar coupling parameters are ⟨Cq⟩ = 118.1 kHz and ⟨ηq⟩ = 0.88, and the ²H paramagnetic shift tensor anisotropy parameters are ⟨ζP⟩ = −152.5 ppm and ⟨ηP⟩ = 0.91. The orientation of the quadrupolar coupling principal axis system (PAS) relative to the paramagnetic shift anisotropy principal axis system is given by (α, β, γ) = (π/2, π/2, 0). Using a simple ligand hopping model, the tensor parameters in the absence of exchange are estimated. On the basis of this analysis, the instantaneous principal components and orientation of the quadrupolar coupling are found to be in excellent agreement with previous measurements. A new point dipole model for predicting the paramagnetic shift tensor is proposed, yielding significantly better agreement than previously used models. In the new model, the dipoles are displaced from the nuclei at positions associated with high electron density in the singly occupied molecular orbital predicted from ligand field theory.

  18. The Predictive Value of Subjective Labour Supply Data: A Dynamic Panel Data Model with Measurement Error

    OpenAIRE

    Euwals, Rob

    2002-01-01

    This paper tests the predictive value of subjective labour supply data for adjustments in working hours over time. The idea is that if subjective labour supply data help to predict next year's working hours, such data must contain at least some information on individual labour supply preferences. This informational content can be crucial to identify models of labour supply. Furthermore, it can be crucial to investigate the need for, or, alternatively, the support for laws and collective agree...

  19. Predicting vehicular emissions in high spatial resolution using pervasively measured transportation data and microscopic emissions model

    Energy Technology Data Exchange (ETDEWEB)

    Nyhan, Marguerite; Sobolevsky, Stanislav; Kang, Chaogui; Robinson, Prudence; Corti, Andrea; Szell, Michael; Streets, David; Lu, Zifeng; Britter, Rex; Barrett, Steven R. H.; Ratti, Carlo

    2016-06-07

    Air pollution related to traffic emissions poses an especially significant problem in cities, due to its adverse impact on human health and well-being. Previous studies that have aimed to quantify emissions from the transportation sector have been limited by either simulated or coarsely resolved traffic volume data. Emissions inventories form the basis of urban pollution models; therefore, in this study, Global Positioning System (GPS) trajectory data from a taxi fleet of over 15,000 vehicles were analyzed with the aim of predicting air pollution emissions for Singapore. This novel approach enabled the quantification of instantaneous drive cycle parameters in high spatio-temporal resolution, which provided the basis for a microscopic emissions model. Carbon dioxide (CO2), nitrogen oxides (NOx), volatile organic compounds (VOCs) and particulate matter (PM) emissions were thus estimated. Highly localized areas of elevated emissions levels were identified, with a spatio-temporal precision not possible with previously used methods for estimating emissions. Relatively higher-emissions areas were mainly concentrated in a few districts: the Singapore Downtown Core area, the area to the north of the central urban region and the area to the east of it. Daily emissions quantified for the total motor vehicle population of Singapore were found to be comparable to another emissions dataset. Results demonstrated that high-resolution spatio-temporal vehicle traces detected using GPS in large taxi fleets could be used to infer highly localized areas of elevated acceleration and air pollution emissions in cities, and may become a complement to traditional emission estimates, especially in emerging cities and countries where reliable fine-grained urban air quality data are not easily available. This is the first study of its kind to investigate measured microscopic vehicle movement in tandem with microscopic emissions modeling for a substantial study domain.
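    The paper's microscopic emissions model is not reproduced here; one common way to map instantaneous drive cycle parameters (speed, acceleration) from GPS traces to emission rates is vehicle specific power (VSP) with a binned rate lookup, sketched below with commonly cited light-duty coefficients and a purely hypothetical rate table:

```python
def vehicle_specific_power(v, a, grade=0.0):
    """Vehicle specific power (kW/tonne) for a light-duty vehicle, using
    commonly cited coefficients for rolling resistance and aerodynamic drag;
    v in m/s, a in m/s^2, grade as a fraction (rise/run)."""
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v**3

def emission_rate(v, a, rate_table):
    """Look up an emission rate (g/s) from a VSP-binned table given as a
    list of (upper_bound, rate) pairs in ascending order (hypothetical bins)."""
    vsp = vehicle_specific_power(v, a)
    for upper, rate in rate_table:
        if vsp <= upper:
            return rate
    return rate_table[-1][1]  # fall back to the highest bin
```

    Summing such rates over every GPS-resolved second of every vehicle trace yields the kind of highly localized, high-resolution emissions map described above.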

  20. Predicting vehicular emissions in high spatial resolution using pervasively measured transportation data and microscopic emissions model

    Science.gov (United States)

    Nyhan, Marguerite; Sobolevsky, Stanislav; Kang, Chaogui; Robinson, Prudence; Corti, Andrea; Szell, Michael; Streets, David; Lu, Zifeng; Britter, Rex; Barrett, Steven R. H.; Ratti, Carlo

    2016-09-01

    Air pollution related to traffic emissions poses an especially significant problem in cities, due to its adverse impact on human health and well-being. Previous studies that have aimed to quantify emissions from the transportation sector have been limited by either simulated or coarsely resolved traffic volume data. Emissions inventories form the basis of urban pollution models; therefore, in this study, Global Positioning System (GPS) trajectory data from a taxi fleet of over 15,000 vehicles were analyzed with the aim of predicting air pollution emissions for Singapore. This novel approach enabled the quantification of instantaneous drive cycle parameters in high spatio-temporal resolution, which provided the basis for a microscopic emissions model. Carbon dioxide (CO2), nitrogen oxides (NOx), volatile organic compounds (VOCs) and particulate matter (PM) emissions were thus estimated. Highly localized areas of elevated emissions levels were identified, with a spatio-temporal precision not possible with previously used methods for estimating emissions. Relatively higher-emissions areas were mainly concentrated in a few districts: the Singapore Downtown Core area, the area to the north of the central urban region and the area to the east of it. Daily emissions quantified for the total motor vehicle population of Singapore were found to be comparable to another emissions dataset. Results demonstrated that high-resolution spatio-temporal vehicle traces detected using GPS in large taxi fleets could be used to infer highly localized areas of elevated acceleration and air pollution emissions in cities, and may become a complement to traditional emission estimates, especially in emerging cities and countries where reliable fine-grained urban air quality data are not easily available. This is the first study of its kind to investigate measured microscopic vehicle movement in tandem with microscopic emissions modeling for a substantial study domain.

  1. Predicting Group-Level Outcome Variables from Variables Measured at the Individual Level: A Latent Variable Multilevel Model

    Science.gov (United States)

    Croon, Marcel A.; van Veldhoven, Marc J. P. M.

    2007-01-01

    In multilevel modeling, one often distinguishes between macro-micro and micro-macro situations. In a macro-micro multilevel situation, a dependent variable measured at the lower level is predicted or explained by variables measured at that lower or a higher level. In a micro-macro multilevel situation, a dependent variable defined at the higher…

  2. Construction of a Quadratic Model for Predicted and Measured Global Solar Radiation in Chile

    Institute of Scientific and Technical Information of China (English)

    Ercan YILMAZ; Beatriz CANCINO; Edmundo LOPEZ

    2007-01-01

    Global solar radiation data for sites in Chile are analysed and presented in a form suitable for use in engineering. A new model for monthly average data is developed to predict monthly average global radiation with acceptable accuracy by using actinographic data, owing to the scarcity of pyranometer data. Use of the new quadratic model is proposed because of its relatively wider spectrum of values for the Ångström coefficients a0, a1 and a2.
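    Quadratic Ångström-type models have the form H/H0 = a0 + a1(S/S0) + a2(S/S0)², relating monthly average global radiation H to the extraterrestrial radiation H0 through the relative sunshine duration S/S0. A minimal sketch follows; the coefficient values used in the test are hypothetical, not the fitted Chilean values:

```python
def global_radiation(H0, S, S0, a0, a1, a2):
    """Monthly average global radiation from a quadratic Angstrom-type
    correlation: H = H0 * (a0 + a1*(S/S0) + a2*(S/S0)**2), where S/S0 is
    the ratio of measured to maximum possible sunshine duration."""
    s = S / S0
    return H0 * (a0 + a1 * s + a2 * s**2)
```

    Setting a2 = 0 recovers the classical linear Ångström-Prescott correlation, so the quadratic term is what gives the model its wider coefficient spectrum.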

  3. PREDICTIONS OF WAVE INDUCED SHIP MOTIONS AND LOADS BY LARGE-SCALE MODEL MEASUREMENT AT SEA AND NUMERICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Jialong Jiao

    2016-06-01

    In order to accurately predict the wave-induced motion and load responses of ships, a new experimental methodology is proposed. The new method involves conducting tests with large-scale models under natural environmental conditions. The testing technique proposed for large-scale model measurement is applicable to a wide range of standard hydrodynamics experiments in naval architecture. In this study, a large-scale segmented self-propelling model, which allowed the seakeeping performance and wave load behaviour to be investigated, was designed together with the testing systems, and experiments were performed. A 2-hour voyage trial of the large-scale model, aimed at performing a series of simulation exercises, was carried out at Huludao harbour in October 2014. During the voyage, onboard systems, operated by the crew, were used to measure and record the sea waves and the model responses. A post-voyage analysis of the measurements, both of the sea waves and of the model's responses, was made to obtain short-term predictions of the ship's motion and load responses under the corresponding sea state. Furthermore, a numerical short-term prediction was made with an in-house code, and the result was compared with the experimental data. A long-term extreme prediction of motions and loads was also carried out based on the numerical results of the short-term prediction.

  4. Surface tensions of multi-component mixed inorganic/organic aqueous systems of atmospheric significance: measurements, model predictions and importance for cloud activation predictions

    Directory of Open Access Journals (Sweden)

    D. O. Topping

    2006-11-01

    In order to predict the physical properties of aerosol particles, it is necessary to adequately capture the behaviour of the ubiquitous complex organic components. One of the key properties which may affect this behaviour is the contribution of the organic components to the surface tension of aqueous particles in the moist atmosphere. Whilst the qualitative effect of organic compounds on solution surface tensions has been widely reported, our quantitative understanding of mixed organic and mixed inorganic/organic systems is limited. Furthermore, it is unclear whether models that exist in the literature can reproduce the surface tension variability for binary and higher-order multi-component organic and mixed inorganic/organic systems of atmospheric significance. The current study aims to resolve both issues to some extent. Surface tensions of single and multiple solute aqueous solutions were measured and compared with predictions from a number of model treatments. On comparison with binary organic systems, two predictive models found in the literature provided a range of values resulting from sensitivity to calculations of pure component surface tensions. Results indicate that a fitted model can capture the variability of the measured data very well, producing the lowest average percentage deviation for all compounds studied. The performance of the other models varies with compound and choice of model parameters. The behaviour of ternary mixed inorganic/organic systems was unreliably captured by using a predictive scheme, and this was composition dependent. For more "realistic" higher-order systems, entirely predictive schemes performed poorly. It was found that use of the binary data in a relatively simple mixing rule, or modification of an existing thermodynamic model with parameters derived from binary data, was able to accurately capture the surface tension variation with concentration. Thus, it would appear that in order to model

  5. Model predictions of metal speciation in freshwaters compared to measurements by in situ techniques.

    NARCIS (Netherlands)

    Unsworth, Emily R; Warnken, Kent W; Zhang, Hao; Davison, William; Black, Frank; Buffle, Jacques; Cao, Jun; Cleven, Rob; Galceran, Josep; Gunkel, Peggy; Kalis, Erwin; Kistler, David; Leeuwen, Herman P van; Martin, Michel; Noël, Stéphane; Nur, Yusuf; Odzak, Niksa; Puy, Jaume; Riemsdijk, Willem van; Sigg, Laura; Temminghoff, Erwin; Tercier-Waeber, Mary-Lou; Toepperwien, Stefanie; Town, Raewyn M; Weng, Liping; Xue, Hanbin

    2006-01-01

    Measurements of trace metal species in situ in a softwater river, a hardwater lake and a hardwater stream were compared to the equilibrium distribution of species calculated using two models: WHAM 6, incorporating Humic Ion-Binding Model VI, and Visual MINTEQ, incorporating NICA-Donnan. Diffusive gra

  6. Development of a highway noise prediction model using an Leq20 s measure of basic vehicular noise

    Science.gov (United States)

    Pamanikabud, P.; Tansatcha, M.; Brown, A. L.

    2008-09-01

    The objective of the study reported here was to build a highway traffic noise simulation model for free-flow traffic conditions in Thailand, employing a technique based on individual vehicular noise modelling using the equivalent sound level over 20 s (Leq20s). This Leq20s technique provides a more accurate measurement of the noise energy from each type of vehicle under real running conditions. The coefficient of propagation and ground effect for this model was then estimated using a trial-and-error method and applied to the highway traffic noise simulation model. The newly developed highway traffic noise model was tested for its goodness of fit to field observations. The test shows that the new model provides good predictions for highway noise conditions in Thailand. The concepts and techniques modelled and tested in this study can also be applied to the prediction of traffic noise under local conditions in other countries.
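    An equivalent sound level such as Leq20s is an energy average of the sound pressure levels over the measurement period. The sketch below shows how such a value would be computed from a series of short-interval levels, assuming equal intervals; it is an illustration of the Leq definition, not the authors' measurement chain:

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level (dB) over equal-length intervals:
    convert each level to relative sound energy (10**(L/10)), average the
    energies, and convert back to decibels."""
    energies = [10.0 ** (L / 10.0) for L in levels_db]
    return 10.0 * math.log10(sum(energies) / len(energies))
```

    Because the average is taken in the energy domain, a single loud vehicle pass-by dominates the result, which is why per-vehicle Leq20s captures real running-condition noise energy better than a simple level average would.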

  7. Approach to first principles model prediction of measured WIPP (Waste Isolation Pilot Plant) in situ room closure in salt

    Energy Technology Data Exchange (ETDEWEB)

    Munson, D.E.; Fossum, A.F.; Senseny, P.E. (Sandia National Labs., Albuquerque, NM (USA); Southwest Research Inst., San Antonio, TX (USA); RE/SPEC, Inc., Rapid City, SD (USA))

    1989-08-01

    The discrepancies between predicted and measured WIPP in situ Room D closures are markedly reduced through the use of a Tresca flow potential, an improved small strain constitutive model, an improved set of material parameters, and a modified stratigraphy. 12 refs., 5 figs., 1 tab.

  8. Approach to first principles model prediction of measured WIPP (Waste Isolation Pilot Plant) in situ room closure in salt

    Energy Technology Data Exchange (ETDEWEB)

    Munson, D.E.; Fossum, A.F.; Senseny, P.E.

    1989-01-01

    The discrepancies between predicted and measured WIPP in situ Room D closures are markedly reduced through the use of a Tresca flow potential, an improved small strain constitutive model, an improved set of material parameters, and a modified stratigraphy. 17 refs., 8 figs., 1 tab.

  9. Approach to first principles model prediction of measured WIPP (Waste Isolation Pilot Plant) in-situ room closure in salt

    Energy Technology Data Exchange (ETDEWEB)

    Munson, D.E.; Fossum, A.F.; Senseny, P.E. (Sandia National Labs., Albuquerque, NM (USA))

    1990-01-01

    The discrepancies between predicted and measured Waste Isolation Pilot Plant (WIPP) in-situ Room D closures are markedly reduced through the use of a Tresca flow potential, an improved small strain constitutive model, an improved set of material parameters, and a modified stratigraphy. (author).

  10. Modification of ITU-R Rain Fade Slope Prediction Model Based on Satellite Data Measured at High Elevation Angle

    OpenAIRE

    2012-01-01

    Rain fade slope is one of the fade dynamics parameters used by system engineers to design fade mitigation techniques (FMT) for space-earth microwave links. Recent measurements found that the fade slope prediction model proposed by the ITU-R is unable to predict the fade slope distribution accurately in tropical regions. A rain fade measurement campaign was conducted in Kuala Lumpur (3.3° N, 101.7° E), which is located in a heavy-rain zone, by receiving a signal at 10.982 GHz (Ku-band) from MEASAT3 (91.5° E) on ...

  11. Model predictions of copper speciation in coastal water compared to measurements by analytical voltammetry.

    Science.gov (United States)

    Ndungu, Kuria

    2012-07-17

    Trace metal toxicity to aquatic biota is highly dependent on the metal's chemical speciation. Accordingly, metal speciation is being incorporated into water quality criteria and toxicity regulations using the Biotic Ligand Model (BLM), but there are currently no BLMs for biota in marine and estuarine waters. In this study, I compare copper speciation measurements in a typical coastal water made using competitive ligand exchange-adsorptive cathodic stripping voltammetry (CLE-ACSV) to model calculations using Visual MINTEQ. Both Visual MINTEQ and BLM use similar programs to model copper interactions with dissolved organic matter (DOM), i.e., the Stockholm Humic Model (SHM) and WHAM (Windermere Humic Aqueous Model), respectively. The total dissolved (14). The modeled [Cu2+] could be fitted to the experimental values better after the conditional stability constant for copper binding to fulvic acid (FA) complexes in DOM in the SHM was adjusted to account for a higher concentration of strong Cu-binding sites in FA.

  12. Measuring the impact of observations on the predictability of the Kuroshio Extension in a shallow-water model

    Science.gov (United States)

    Kramer, W.; van Leeuwen, P.; Pierini, S.; Dijkstra, H. A.

    2010-12-01

    The Kuroshio Extension—the eastward-flowing free jet formed when the warm waters of the Kuroshio separate from the Japanese coast—reveals bimodal behavior. It changes from an elongated, energetic meandering jet into a weaker, unstable jet with reduced zonal penetration. Prediction of the path of the Kuroshio is very important for local fisheries and hence local economies. Many of its characteristics, e.g. the decadal period and the more stable character of the elongated state, are also observed in a reduced-gravity ocean model of the northern Pacific basin driven by a constant double-gyre wind field. An ensemble model run with an additional stochastic wind forcing typically loses any predictive value after one decadal cycle. Hence, assimilating observations is required to keep tracking the Kuroshio Extension transitions. In our study we want to determine which observations are most successful in decreasing the uncertainty in an ensemble prediction for the Kuroshio Extension. This requires a method which can handle the nonlinear dynamics of the Kuroshio Extension and the related non-Gaussian probability distribution of the prediction. Firstly, we resort to entropy-based predictability measures, such as the predictive power or the predictive utility. Secondly, a particle filter technique is used to assimilate observations into the ensemble model. The ensemble is constructed such that at each time it samples the climatological variability. Hence, this unweighted ensemble has no predictive power. When an observation becomes available, the particle filter technique adjusts the weight of each ensemble member according to the observation value and the error distribution. The consequent increase in predictive power is a measure of the impact of the observation. As the ensemble itself is not altered by the filter, different sets of observations can be analyzed a posteriori. To test this methodology we have performed an identical-twin experiment. Here, one model
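    The weight-adjustment step described above can be sketched as follows, assuming a Gaussian observation error model and scalar observations; this is an illustrative simplification of the assimilation scheme, and the function names are hypothetical:

```python
import numpy as np

def update_weights(weights, particles, obs, obs_std):
    """Importance-weight update of a particle filter: multiply each
    ensemble member's weight by the likelihood of the observation given
    that member, then renormalize. The ensemble members themselves are
    left untouched, so different observation sets can be re-weighted
    a posteriori."""
    likelihood = np.exp(-0.5 * ((particles - obs) / obs_std) ** 2)
    new_w = weights * likelihood
    return new_w / new_w.sum()

def effective_sample_size(weights):
    """1 / sum(w_i^2): drops toward 1 as the weights collapse onto a few
    members, a common diagnostic of how informative an observation was."""
    return 1.0 / np.sum(weights ** 2)
```

    A sharp drop in effective sample size after an update signals a highly informative (weight-concentrating) observation, which is one simple proxy for the entropy-based impact measures used in the study.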

  13. Preliminary comparison of dose measurements on CRRES to NASA model predictions

    Energy Technology Data Exchange (ETDEWEB)

    Gussenhoven, M.S.; Mullen, E.G.; Brautigan, D.H. (Phillips Lab., Geophysics Directorate, Hanscom AFB, MA (US)); Holeman, E. (Boston Univ., MA (United States). Dept. of Physics); Jordan, C. (Radex Inc., Bedford, MA (US)); Hanser, F.; Dichter, B. (Panametrics, Inc., Waltham, MA (United States))

    1991-12-01

    In this paper, measurements of proton and electron dose from the space radiation dosimeter on the CRRES satellite, in an 18.1°, 350 km by 33,000 km orbit, are compared to the NASA models for solar maximum conditions. Up to the time of the large, solar-initiated particle events near the end of March 1991, the results are similar to those previously reported for solar minimum at low altitudes. That is, prior to the March event, there is excellent agreement between model and measured values for protons and poor agreement for electrons. During the event period a second proton belt was formed at higher altitudes which is not contained in the proton models, and the electrons increased over an order of magnitude for the CRRES orbit. This resulted in poorer agreement between model and measured values for protons during and after the solar proton event and better agreement for electrons during the electron enhancement period. What the data show is that, depending on orbit, both the existing proton and electron models can give large errors in dose that can compromise space system performance and lifetime.

  14. Preliminary comparison of dose measurements on CRRES to NASA model predictions. (Reannouncement with new availability information)

    Energy Technology Data Exchange (ETDEWEB)

    Gussenhoven, M.S.; Mullen, E.G.; Brautigam, D.H.; Holeman, E.; Jordon, C.

    1991-12-01

    Measurements of proton and electron dose from the space radiation dosimeter on the CRRES satellite, in an 18.1°, 350 km by 33,000 km orbit, are compared to the NASA models for solar maximum conditions. Up to the time of the large, solar-initiated particle events near the end of March 1991, the results are similar to those previously reported for solar minimum at low altitudes. That is, prior to the March event, there is excellent agreement between model and measured values for protons and poor agreement for electrons. During the event period a second proton belt was formed at higher altitudes which is not contained in the proton models, and the electrons increased over an order of magnitude for the CRRES orbit. This resulted in poorer agreement between model and measured values for protons during and after the solar proton event and better agreement for electrons during the electron enhancement period. What the data show is that, depending on orbit, both the existing proton and electron models can give large errors in dose that can compromise space system performance and lifetime.

  15. Joint model of multiple longitudinal measures and a binary outcome: An application to predict orthostatic hypertension for subacute stroke patients.

    Science.gov (United States)

    Hwang, Yi-Ting; Wang, Chun-Chao; Wang, Chiuan He; Tseng, Yi-Kuan; Chang, Yeu-Jhy

    2015-07-01

    Orthostatic hypertension (OH), a blood pressure regulation problem, puts stroke patients at risk of falls during rehabilitation, which can prolong hospitalization, delay treatment and recovery, and increase medical cost and burden. Developing a diagnostic test for OH is therefore clinically important for stroke patients. Clinically, determining whether a stroke patient has OH requires a tilt test, in which changes in blood pressure and heart rate are measured at a series of angles; the test takes considerable time and effort. Assuming measurement errors exist in the blood pressure and heart rate readings at each angle, this paper proposes multiple mixed-effect models to recover the true trajectories of these measurements, accounting for measurement error and possible correlation among the multiple measurements; a logistic regression then uses these true trajectories at a given time, together with other fixed-effect covariates, to predict OH status. The joint likelihood function is derived to estimate parameters, and the area under the receiver operating characteristic curve is used to estimate the predictive power of the model. Monte Carlo simulations are performed to evaluate the feasibility of the proposed methods. The proposed model is also applied to real data and provides acceptable predictive power. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Surface tensions of multi-component mixed inorganic/organic aqueous systems of atmospheric significance: measurements, model predictions and importance for cloud activation predictions

    Directory of Open Access Journals (Sweden)

    D. O. Topping

    2007-01-01

    In order to predict the physical properties of aerosol particles, it is necessary to adequately capture the behaviour of the ubiquitous complex organic components. One of the key properties which may affect this behaviour is the contribution of the organic components to the surface tension of aqueous particles in the moist atmosphere. Whilst the qualitative effect of organic compounds on solution surface tensions has been widely reported, our quantitative understanding of mixed organic and mixed inorganic/organic systems is limited. Furthermore, it is unclear whether models that exist in the literature can reproduce the surface tension variability for binary and higher-order multi-component organic and mixed inorganic/organic systems of atmospheric significance. The current study aims to resolve both issues to some extent. Surface tensions of single and multiple solute aqueous solutions were measured and compared with predictions from a number of model treatments. For binary organic systems, two predictive models found in the literature provided a range of values resulting from sensitivity to calculations of pure component surface tensions. Results indicate that a fitted model can capture the variability of the measured data very well, producing the lowest average percentage deviation for all compounds studied. The performance of the other models varies with compound and choice of model parameters. The behaviour of ternary mixed inorganic/organic systems was unreliably captured by a purely predictive scheme, and this was dependent on the composition of the solutes present. For more atmospherically representative higher-order systems, entirely predictive schemes performed poorly. It was found that use of the binary data in a relatively simple mixing rule, or modification of an existing thermodynamic model with parameters derived from binary data, was able to accurately capture the surface tension variation with concentration. Thus
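
    The kind of "relatively simple mixing rule" mentioned above can be sketched as a mole-fraction-weighted combination of binary-solution surface tensions (the solutes, values, and weighting scheme here are invented for illustration; the paper's actual rule and data differ):

```python
# Sketch of a simple mixing rule: the surface tension of a
# multi-solute aqueous solution is approximated from binary
# (single-solute) surface tensions, weighted by each solute's
# share of the total solute mole fraction. All numbers are
# hypothetical, not measured values.

def mixed_surface_tension(binary_sigma, mole_fractions, sigma_water=72.0):
    """Mole-fraction-weighted deviation from pure water (mN/m)."""
    total = sum(mole_fractions.values())
    sigma = sigma_water
    for solute, x in mole_fractions.items():
        # each solute shifts sigma in proportion to its solute share
        sigma += (x / total) * (binary_sigma[solute] - sigma_water)
    return sigma

# hypothetical binary-solution surface tensions at a fixed concentration
binary = {"succinic_acid": 65.0, "ammonium_sulfate": 74.5}
x = {"succinic_acid": 0.6, "ammonium_sulfate": 0.4}
print(f"{mixed_surface_tension(binary, x):.2f} mN/m")
```

    The appeal of such a rule, as the abstract notes, is that it needs only binary data, which are far easier to measure than full multi-component systems.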

  17. Modification of ITU-R Rain Fade Slope Prediction Model Based on Satellite Data Measured at High Elevation Angle

    Directory of Open Access Journals (Sweden)

    Hassan Dao

    2012-01-01

    Full Text Available Rain fade slope is one of fade dynamics behaviour used by system engineers to design fade mitigation techniques (FMT for space-earth microwave links. Recent measurements found that fade slope prediction model proposed by ITU-R is unable to predict fade slope distribution accurately in tropical regions. Rain fade measurement was conducted  in Kuala Lumpur (3.3° N, 101.7° E where located in heavy rain zone by receiving signal at 10.982 GHz (Ku-band from MEASAT3 (91.5° E on 77.4° elevation angle. The measurement has been carried out for one year period. Fade slope S parameter on ITU-R prediction model has been investigated. New parameter is proposed for the fade slope prediction modeling based on measured data at high elevation angle, Ku-band. ABSTRAK: Cerun hujan pudar adalah salah satu dinamik tingkah laku pudar yang digunakan oleh jurutera sistem untuk mereka bentuk teknik-teknik pengurangan pudar (FMT bagi link gelombang mikro ruang bumi. Ukuran baru-baru ini mendapati bahawa cerun pudar ramalan model yang dicadangkan oleh ITU-R tidak mampu untuk meramalkan pengagihan cerun pudar tepat di kawasan tropika. Pengukuran  hujan pudar telah dijalankan di Kuala Lumpur (3.3° N, 101.7° E yang terletak di kawasan hujan lebat dengan menerima isyarat pada 10,982 GHz (Ku-band dari MEASAT3 (91.5° E pada sudut ketinggian 77.4°. Pengukuran telah dijalankan untuk tempoh satu tahun. Parameter cerun pudar S pada model ramalan ITU-R telah disiasat. Parameter baru adalah dicadangkan untuk pemodelan cerun pudar ramalan berdasarkan data yang diukur pada sudut paras ketinggian, Ku-band.KEYWORDS: fade slope; ITU-R; fade mitigation techniques; sampling time interval

  18. Evaluating the road safety effects of a fuel cost increase measure by means of zonal crash prediction modeling.

    Science.gov (United States)

    Pirdavani, Ali; Brijs, Tom; Bellemans, Tom; Kochan, Bruno; Wets, Geert

    2013-01-01

    Travel demand management (TDM) consists of a variety of policy measures that affect the transportation system's effectiveness by changing travel behavior. The primary objective of implementing such TDM strategies is not to improve traffic safety, although their impact on it should not be neglected. The main purpose of this study is to evaluate the traffic safety impact of a fuel-cost increase scenario (i.e. increasing the fuel price by 20%) in Flanders, Belgium. Since TDM strategies are usually conducted at an aggregate level, crash prediction models (CPMs) should also be developed at a geographically aggregated level. Therefore zonal crash prediction models (ZCPMs) are considered to represent the association between observed crashes in each zone and a set of predictor variables. To this end, an activity-based transportation model framework is applied to produce exposure metrics which are used in the prediction models. This allows a more detailed and reliable assessment, because TDM strategies are inherently modeled in the activity-based framework, unlike traditional models in which their impact must be assumed. The crash data used in this study consist of fatal and injury crashes observed between 2004 and 2007. The network and socio-demographic variables are collected from other sources. Different ZCPMs are developed to predict the number of injury crashes (NOCs), disaggregated by severity level and crash type, for both the null and the fuel-cost increase scenario. The results show a considerable traffic safety benefit of the fuel-cost increase scenario, apart from its impact on the reduction of the total vehicle kilometers traveled (VKT). A 20% increase in fuel price is predicted to reduce annual VKT by 5.02 billion (11.57% of the total annual VKT in Flanders), which causes the total NOCs to decline by 2.83%.
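
    Zonal crash prediction models of this kind are typically count-regression models with an exposure term. A minimal sketch, using a Poisson GLM with a log link fitted by iteratively reweighted least squares on synthetic zone data (the study's actual predictors, model form, and coefficients differ):

```python
import numpy as np

# Sketch of a zonal crash prediction model (ZCPM): crash counts per
# zone regressed on log-exposure (VKT) and a land-use indicator.
# Zones, exposures and coefficients are synthetic.

rng = np.random.default_rng(1)
n = 200                                   # traffic analysis zones
vkt = rng.uniform(1.0, 10.0, n)           # vehicle-km traveled (exposure)
urban = rng.integers(0, 2, n).astype(float)
X = np.column_stack([np.ones(n), np.log(vkt), urban])
beta_true = np.array([0.5, 0.8, 0.3])
y = rng.poisson(np.exp(X @ beta_true))    # observed crash counts

beta = np.zeros(3)
for _ in range(25):                       # IRLS iterations
    eta = X @ beta
    mu = np.exp(eta)
    z = eta + (y - mu) / mu               # working response
    W = mu                                # Poisson working weights
    XtWX = X.T @ (X * W[:, None])
    beta = np.linalg.solve(XtWX, X.T @ (W * z))

# Scenario test: a VKT reduction of ~11.6% lowers predicted crashes
# through the exposure term alone.
drop = 1.0 - np.exp(beta[1] * np.log(1 - 0.116))
print(f"fitted beta: {np.round(beta, 2)}")
print(f"predicted crash reduction: {drop:.1%}")
```

    The activity-based framework in the study supplies the exposure metrics (here faked as `vkt`) so that a policy scenario changes the inputs rather than being imposed as an assumption.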

  19. Merging Field Measurements and High Resolution Modeling to Predict Possible Societal Impacts of Permafrost Degradation

    Science.gov (United States)

    Romanovsky, V. E.; Nicolsky, D.; Marchenko, S. S.; Cable, W.; Panda, S. K.

    2015-12-01

    A general warming trend in permafrost temperatures has triggered permafrost degradation in Alaska, especially at locations influenced by human activities. Various phenomena related to permafrost degradation are already commonly observed, including increased rates of coastal and riverbank erosion, increased occurrences of retrogressive thaw slumps and active layer detachment slides, and the disappearance of tundra lakes. The combination of thawing permafrost and erosion is damaging local community infrastructure such as buildings, roads, airports, pipelines, water and sanitation facilities, and communication systems. The potential scale of direct ecological and economic damage due to degrading permafrost has only begun to be recognized. While projected changes in permafrost are generally available on global and regional scales, these projections cannot be effectively employed to estimate societal impacts because of their coarse resolution. Intrinsic problems with the classical "spatial grid" approach in spatially distributed modeling applications preclude its use for this problem. Two types of models can be used to study permafrost dynamics in this case: one is a site-specific application of the GIPL2.0 permafrost model, and the other is a very high resolution (tens to hundreds of meters) spatially distributed version of the same model. The results of properly organized field measurements are also needed to calibrate and validate these models for specific locations and areas of interest. We are currently developing a "landscape unit" approach that allows practically unlimited spatial resolution of the modeling products. Classification of the study area into particular "landscape units" should be performed in accordance with the main factors controlling the expression of climate on permafrost in the study area, typically vegetation, hydrology, soil properties, topography, etc. In areas with little

  20. A Comparison of Model-Scale Experimental Measurements and Computational Predictions for a Large Transom-Stern Wave

    CERN Document Server

    Drazen, David A; Fu, Thomas C; Beale, Kristine L C; O'Shea, Thomas T; Brucker, Kyle A; Dommermuth, Douglas G; Wyatt, Donald C; Bhushan, Shanti; Carrica, Pablo M; Stern, Fred

    2014-01-01

    The flow field generated by a transom stern hull form is a complex, broad-banded, three-dimensional system marked by a large breaking wave. This unsteady multiphase turbulent flow feature is difficult to study experimentally and simulate numerically. Recent model-scale experimental measurements and numerical predictions of the wave-elevation topology behind a transom-sterned hull form, Model 5673, are compared and assessed in this paper. The mean height, surface roughness (RMS), and spectra of the breaking stern-waves were measured by Light Detection And Ranging (LiDAR) and Quantitative Visualization (QViz) sensors over a range of model speeds covering both wet- and dry-transom operating conditions. Numerical predictions for this data set from two Office of Naval Research (ONR) supported naval-design codes, Numerical Flow Analysis (NFA) and CFDship-Iowa-V.4, have been performed. Comparisons of experimental data, including LiDAR and QViz measurements, to the numerical predictions for wet-transom and dry transo...

  1. Development of Neural Network Model for Predicting Peak Ground Acceleration Based on Microtremor Measurement and Soil Boring Test Data

    Directory of Open Access Journals (Sweden)

    T. Kerh

    2012-01-01

    It may not be possible to collect adequate records of strong ground motions in a short period of time; hence microtremor surveys are frequently conducted to reveal the stratum structure and earthquake characteristics at a specified construction site. This paper is therefore aimed at developing a neural network model, based on available microtremor measurements and on-site soil boring test data, for predicting peak ground acceleration at a site in a science park in Taiwan. The four key parameters used as inputs for the model are the soil values of the standard penetration test, the medium grain size, the safety factor against liquefaction, and the distance between soil depth and measuring station. The results show that a neural network model with four neurons in the hidden layer can achieve better performance than other models presently available. Also, a weight-based neural network model is developed to provide reliable prediction of peak ground acceleration at an unmeasured site based on data at three nearby measuring stations. The method employed in this paper provides a new way to treat this type of seismic-related problem, and it may be applicable to other areas of interest around the world.
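
    The architecture described (four inputs, a small hidden layer, one output) can be sketched with plain NumPy. Everything here is synthetic: the inputs stand in for the four parameters named above, the target stands in for normalized peak ground acceleration, and nothing reproduces the paper's data or trained weights:

```python
import numpy as np

# Minimal sketch of a 4-input, 4-hidden-neuron feed-forward network
# trained by full-batch gradient descent. Inputs mimic (normalized)
# SPT value, grain size, liquefaction safety factor, and depth-station
# distance; the target is a synthetic stand-in for PGA.

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(100, 4))
y = (0.3 * X[:, 0] + 0.2 * X[:, 1] - 0.1 * X[:, 2] + 0.4 * X[:, 3])[:, None]

W1 = rng.normal(0, 0.5, (4, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)
lr = 0.3

for _ in range(3000):
    H = np.tanh(X @ W1 + b1)                  # hidden layer
    err = (H @ W2 + b2) - y                   # output error
    # backpropagation of the squared-error gradient
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(f"training MSE: {mse:.5f}")
```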

  2. Surrogate gas prediction model as a proxy for Δ14C-based measurements of fossil fuel CO2

    Science.gov (United States)

    Coakley, Kevin J.; Miller, John B.; Montzka, Stephen A.; Sweeney, Colm; Miller, Ben R.

    2016-06-01

    The measured 14C:12C isotopic ratio of atmospheric CO2 (and its associated derived Δ14C value) is an ideal tracer for determination of the fossil fuel derived CO2 enhancement contributing to any atmospheric CO2 measurement (Cff). Given enough such measurements, independent top-down estimation of U.S. fossil fuel CO2 emissions should be possible. However, the number of Δ14C measurements is presently constrained by cost, available sample volume, and availability of mass spectrometer measurement facilities. Δ14C is therefore measured in just a small fraction of samples obtained by flask air sampling networks around the world. Here we develop a projection pursuit regression (PPR) model to predict Cff as a function of multiple surrogate gases acquired within the NOAA/Earth System Research Laboratory (ESRL) Global Greenhouse Gas Reference Network (GGGRN). The surrogates consist of measured enhancements of various anthropogenic trace gases, including CO, SF6, and halocarbons and hydrocarbons acquired in vertical airborne sampling profiles near Cape May, NJ and Portsmouth, NH from 2005 to 2010. Model performance for these sites is quantified based on predicted values corresponding to test data excluded from the model building process. Chi-square hypothesis test analysis indicates that these predictions and corresponding observations are consistent given our uncertainty budget, which accounts for random effects and one particular systematic effect. However, quantification of the combined uncertainty of the prediction due to all relevant systematic effects is difficult because of the limited range of the observations and their relatively high fractional uncertainties at the sampling sites considered here. To account for the possibility of additional systematic effects, we incorporate another component of uncertainty into our budget. Expanding the number of Δ14C measurements in the NOAA GGGRN and building new PPR models at additional sites would improve our understanding of

  3. Surrogate gas prediction model as a proxy for Δ(14)C-based measurements of fossil fuel-CO2.

    Science.gov (United States)

    Coakley, Kevin J; Miller, John B; Montzka, Stephen A; Sweeney, Colm; Miller, Ben R

    2016-06-27

    The measured (14)C:(12)C isotopic ratio of atmospheric CO2 (and its associated derived Δ(14)C value) is an ideal tracer for determination of the fossil fuel derived CO2 enhancement contributing to any atmospheric CO2 measurement (Cff). Given enough such measurements, independent top-down estimation of US fossil fuel-CO2 emissions should be possible. However, the number of Δ(14)C measurements is presently constrained by cost, available sample volume, and availability of mass spectrometer measurement facilities. Δ(14)C is therefore measured in just a small fraction of samples obtained by flask air sampling networks around the world. Here, we develop a Projection Pursuit Regression (PPR) model to predict Cff as a function of multiple surrogate gases acquired within the NOAA/ESRL Global Greenhouse Gas Reference Network (GGGRN). The surrogates consist of measured enhancements of various anthropogenic trace gases, including CO, SF6, and halo- and hydrocarbons acquired in vertical airborne sampling profiles near Cape May, NJ and Portsmouth, NH from 2005 through 2010. Model performance for these sites is quantified based on predicted values corresponding to test data excluded from the model building process. Chi-square hypothesis test analysis indicates that these predictions and corresponding observations are consistent given our uncertainty budget, which accounts for random effects and one particular systematic effect. However, quantification of the combined uncertainty of the prediction due to all relevant systematic effects is difficult because of the limited range of the observations and their relatively high fractional uncertainties at the sampling sites considered here. To account for the possibility of additional systematic effects, we incorporate another component of uncertainty into our budget. Expanding the number of Δ(14)C measurements in the NOAA GGGRN and building new PPR models at additional sites would improve our understanding of uncertainties and

  4. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    Science.gov (United States)

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts.
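
    The SIMEX idea underlying the proposed spatial procedure can be sketched in its classic non-spatial form: deliberately add extra measurement error at increasing multiples of the error variance, watch the slope estimate attenuate, then extrapolate back to the no-error case (lambda = -1). The data and error variance below are synthetic; the paper's spatial SIMEX handles correlated exposure-model error, which this sketch does not:

```python
import numpy as np

# Classic (non-spatial) SIMEX for attenuation bias in simple linear
# regression with classical measurement error in the exposure.

rng = np.random.default_rng(3)
n, beta_true, sigma_u = 4000, 2.0, 0.8
x = rng.normal(0, 1, n)                    # true exposure
w = x + rng.normal(0, sigma_u, n)          # error-prone measurement
y = beta_true * x + rng.normal(0, 0.5, n)

def slope(u, v):
    return np.cov(u, v)[0, 1] / np.var(u, ddof=1)

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    # average over pseudo-datasets with extra noise of variance lam*sigma_u^2
    sims = [slope(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y)
            for _ in range(50)]
    slopes.append(np.mean(sims))

# quadratic extrapolation back to lambda = -1 (zero measurement error)
coef = np.polyfit(lambdas, slopes, 2)
beta_simex = float(np.polyval(coef, -1.0))
print(f"naive slope: {slopes[0]:.2f}")     # attenuated toward zero
print(f"SIMEX slope: {beta_simex:.2f}")    # moved back toward beta = 2
```

    The quadratic extrapolant typically undercorrects slightly, but it removes most of the attenuation, which is the behavior the spatial version generalizes.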

  5. Combined Microwave and Sferics Measurements as a Continuous Proxy for Latent Heating in Mesoscale Model Predictions

    Science.gov (United States)

    Chang, D. -E.; Morales, C. A.; Weinman, J. A.; Olson, W. S.

    1999-01-01

    Planar rainfall distributions were retrieved from data provided by the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) and Special Sensor Microwave Imager (SSM/I) radiometers. Lightning generates Very Low Frequency (VLF) radio noise pulses called sferics. Those pulses propagate over large distances so that they can be continuously monitored with a network of ground based radio receivers. An empirical relationship between the sferics rate and the convective rainfall permitted maps of convective latent heating profiles to be derived continuously from the sferics distributions. Those inferred latent heating rates were assimilated into the Penn State/NCAR Mesoscale Model (MM5) that depicted an intense winter cyclone that passed over Florida on 2 February 1998. When compared to a 14 hour MM5 rainfall forecast using conventional data, the use of lightning data improved the forecast.

  6. Transient PVT measurements and model predictions for vessel heat transfer. Part II.

    Energy Technology Data Exchange (ETDEWEB)

    Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.; Evans, Gregory Herbert; Rice, Steven F.

    2010-07-01

    Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models in which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.

  7. Sagittal range of motion of the thoracic spine using inertial tracking device and effect of measurement errors on model predictions.

    Science.gov (United States)

    Hajibozorgi, M; Arjmand, N

    2016-04-11

    Range of motion (ROM) of the thoracic spine has implications in patient discrimination for diagnostic purposes and in biomechanical models for predictions of spinal loads. Few previous studies have reported quite different thoracic ROMs. Total (T1-T12), lower (T5-T12) and upper (T1-T5) thoracic, lumbar (T12-S1), pelvis, and entire trunk (T1) ROMs were measured using an inertial tracking device as asymptomatic subjects flexed forward from their neutral upright position to full forward flexion. Correlations between body height and the ROMs were conducted. An effect of measurement errors of the trunk flexion (T1) on the model-predicted spinal loads was investigated. Mean of peak voluntary total flexion of trunk (T1) was 118.4 ± 13.9°, of which 20.5 ± 6.5° was generated by flexion of the T1 to T12 (thoracic ROM), and the remaining by flexion of the T12 to S1 (lumbar ROM) (50.2 ± 7.0°) and pelvis (47.8 ± 6.9°). Lower thoracic ROM was significantly larger than upper thoracic ROM (14.8 ± 5.4° versus 5.8 ± 3.1°). There were non-significant weak correlations between body height and the ROMs. Contribution of the pelvis to generate the total trunk flexion increased from ~20% to 40% and that of the lumbar decreased from ~60% to 42% as subjects flexed forward from upright to maximal flexion while that of the thoracic spine remained almost constant (~16% to 20%) during the entire movement. Small uncertainties (±5°) in the measurement of trunk flexion angle resulted in considerable errors (~27%) in the model-predicted spinal loads only in activities involving small trunk flexion.
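
    A back-of-the-envelope calculation shows why a ±5° flexion error translates into load errors of the reported order: in a crude single-joint model the flexion moment (and hence muscle force and spinal load) scales with sin(theta), so a fixed angle error matters most at small flexion angles. This is an illustration only, not the study's detailed musculoskeletal model:

```python
import numpy as np

# Relative change in a sin(theta)-proportional load when the measured
# trunk flexion angle is off by err_deg degrees.

def relative_load_error(theta_deg, err_deg=5.0):
    t = np.radians(theta_deg)
    e = np.radians(err_deg)
    return (np.sin(t + e) - np.sin(t)) / np.sin(t)

for theta in (10.0, 20.0, 60.0):
    print(f"theta = {theta:4.0f} deg -> "
          f"load error ~ {relative_load_error(theta):+.0%}")
```

    At around 20° of flexion the relative error is roughly a quarter of the load, consistent in magnitude with the ~27% figure quoted for activities involving small trunk flexion, while at large flexion angles the same ±5° barely matters.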

  8. Soil and Nitrogen redistribution in a small Mediterranean cereal field: modelling predictions and field measurements

    Science.gov (United States)

    López-Vicente, Manuel, , Dr.; Quijano, M. Sc. Laura; Gaspar, Leticia, , Dr.; Palazón, M. Sc. Leticia; Navas, Ana, , Dr.

    2015-04-01

    after oxygen combustion at 950 °C by a thermal conductivity detector. The average and maximum values of soil nitrogen were 0.11% and 0.37%, respectively. We ran the GIS-based SERT-2014 SAGA v1.0 model of soil erosion (more details in DOI:10.1002/hyp.10370). All input maps were generated at 1x1 m cell size, allowing sound parameterization. The simulation was run at a monthly scale with average climatic values. Results of simulated soil erosion, net soil loss, and deposition were used to generate the map of soil redistribution. The correlation between the values of soil redistribution and those of soil nitrogen was computed at each sampling point. The average annual sediment budget was calculated and the predicted value was analysed in the context of the total nitrogen budget.

  9. A mechanical model for predicting the probability of osteoporotic hip fractures based in DXA measurements and finite element simulation

    Directory of Open Access Journals (Sweden)

    López Enrique

    2012-11-01

    Abstract Background Osteoporotic hip fractures represent a major cause of disability, loss of quality of life and even mortality among the elderly population. Decisions on drug therapy are based on the assessment of risk factors for fracture from BMD measurements. The combination of biomechanical models with clinical studies could better estimate bone strength and support the specialists in their decisions. Methods A model to assess the probability of fracture, based on damage and fracture mechanics, has been developed, evaluating the mechanical magnitudes involved in the fracture process from clinical BMD measurements. The model is intended for simulating the degenerative process in the skeleton, with the consequent loss of bone mass and hence the decrease of its mechanical resistance, which enables fracture under different traumatisms. Clinical studies were chosen, both in non-treatment conditions and receiving drug therapy, and fitted to specific patients according to their actual BMD measurements. The predictive model is applied in a FE simulation of the proximal femur. The fracture zone is determined according to the loading scenario (sideways fall, impact, accidental loads, etc.), using the mechanical properties of bone obtained from the evolutionary model corresponding to the considered time. Results BMD evolution in untreated patients and in those under different treatments was analyzed. Evolutionary curves of fracture probability were obtained from the evolution of mechanical damage. The evolutionary curve of the untreated group of patients presented a marked increase of the fracture probability, while the curves of patients under drug treatment showed variably decreased risks, depending on the therapy type. Conclusion The FE model allowed detailed maps of damage and fracture probability to be obtained, identifying high-risk local zones at the femoral neck and the intertrochanteric and subtrochanteric areas, which are the typical locations of

  10. Model measurement based identification of Francis turbine vortex rope parameters for prototype part load pressure and power pulsation prediction

    Science.gov (United States)

    Manderla, M.; Weber, W.; Koutnik, J.

    2016-11-01

    Pressure and power fluctuations of hydro-electric power plants in part-load operation are an important measure for the quality of the power delivered to the electrical grid. It is well known that the unsteadiness is driven by the flow patterns in the draft tube, where a vortex rope is present. However, until today the equivalent vortex rope parameters for common numerical 1D models have been a major source of uncertainty. In this work, a new optimization-based grey-box method for experimental vortex rope modelling and parameter identification is presented. The combination of analytical vortex rope and test rig modelling and the use of dynamic measurements allow the identification of the unknown vortex rope parameters. Upscaling from model to prototype size is achieved via existing non-dimensional parameters. A new experimental setup and system identification method are proposed that are suitable for determining the full set of part-load vortex rope parameters in the lab. For the vortex rope, a symmetric model with cavity compliance, bulk viscosity and two pressure excitation sources is developed and implemented, which shows the best correspondence with available measurement data. Due to the non-dimensional parameter definition, scaling is possible. This finally provides a complete method for the prediction of prototype part-load pressure and power oscillations. Because the proposed method is based on a simple, limited control domain, it requires limited modelling effort and carries small modelling uncertainties. Due to the generality of the approach, a future application to other operating conditions such as full load will be straightforward.

  11. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
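
    The two model roles described (a statistical distribution for long-period power prediction, and a simulator that emits uncorrelated hourly speed samples) can be sketched together. A Weibull shape is a common choice for wind-speed distributions; the parameters, rotor area, and power coefficient below are invented, not the Goldstone fit:

```python
import numpy as np

# Sketch: draw uncorrelated hourly wind-speed samples from an assumed
# Weibull distribution, then estimate mean available wind power from
# the cubic law P = 0.5 * rho * A * Cp * v^3.

rng = np.random.default_rng(4)
k, c = 2.0, 6.0                         # Weibull shape / scale (m/s), assumed
rho, area, cp = 1.225, 100.0, 0.40      # air density, rotor area, power coeff.

v = c * rng.weibull(k, size=24 * 365)   # one year of hourly samples
power_w = 0.5 * rho * area * cp * v ** 3
mean_kw = power_w.mean() / 1e3
print(f"mean available power: {mean_kw:.1f} kW")
```

    Because power goes as v cubed, the mean power is dominated by the distribution's upper tail, which is why a fitted speed distribution rather than a mean speed is needed for prediction over long periods.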

  12. Molybdate transport in a chemically complex aquifer: Field measurements compared with solute-transport model predictions

    Science.gov (United States)

    Stollenwerk, K.G.

    1998-01-01

    A natural-gradient tracer test was conducted in an unconfined sand and gravel aquifer on Cape Cod, Massachusetts. Molybdate was included in the injectate to study the effects of variable groundwater chemistry on its aqueous distribution and to evaluate the reliability of laboratory experiments for identifying and quantifying reactions that control the transport of reactive solutes in groundwater. Transport of molybdate in this aquifer was controlled by adsorption. The amount adsorbed varied with aqueous chemistry that changed with depth as freshwater recharge mixed with a plume of sewage-contaminated groundwater. Molybdate adsorption was strongest near the water table where pH (5.7) and the concentration of the competing solutes phosphate (2.3 micromolar) and sulfate (86 micromolar) were low. Adsorption of molybdate decreased with depth as pH increased to 6.5, phosphate increased to 40 micromolar, and sulfate increased to 340 micromolar. A one-site diffuse-layer surface-complexation model and a two-site diffuse-layer surface-complexation model were used to simulate adsorption. Reactions and equilibrium constants for both models were determined in laboratory experiments and used in the reactive-transport model PHAST to simulate the two-dimensional transport of molybdate during the tracer test. No geochemical parameters were adjusted in the simulation to improve the fit between model and field data. Both models simulated the travel distance of the molybdate cloud to within 10% during the 2-year tracer test; however, the two-site diffuse-layer model more accurately simulated the molybdate concentration distribution within the cloud.

  13. Comparison of high pressure transient PVT measurements and model predictions. Part I.

    Energy Technology Data Exchange (ETDEWEB)

    Felver, Todd G.; Paradiso, Nicholas Joseph; Evans, Gregory Herbert; Rice, Steven F.; Winters, William Stanley, Jr.

    2010-07-01

    A series of experiments consisting of vessel-to-vessel transfers of pressurized gas using transient PVT methodology have been conducted to provide a data set for optimizing heat transfer correlations in high pressure flow systems. In rapid expansions such as these, the heat transfer conditions are neither adiabatic nor isothermal. Compressible flow tools exist, such as NETFLOW, that can accurately calculate the pressure and other dynamical mechanical properties of such a system as a function of time. However, to properly evaluate the mass transferred as a function of time, these computational tools rely on heat transfer correlations that must be confirmed experimentally. In this work, new data sets using helium gas are used to evaluate the accuracy of these correlations for receiver vessel sizes ranging from 0.090 L to 13 L and initial supply pressures ranging from 2 MPa to 40 MPa. The comparisons show that the correlations developed in the 1980s from sparse data sets perform well for the supply vessels but are not accurate for the receivers, particularly at early times during the transfers. This report focuses on the experiments used to obtain high quality data sets that can be used to validate computational models. Part II of this report discusses how these data were used to gain insight into the physics of gas transfer and to improve vessel heat transfer correlations. Network flow modeling and CFD modeling are also discussed.
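
Why the heat-transfer assumption matters for the inferred mass can be seen from simple ideal-gas bookkeeping; the numbers below are illustrative, not from the experiments. Filling a vessel adiabatically from a large reservoir at temperature T0 heats the gas to roughly gamma*T0, so the same measured pressure implies a factor-gamma difference in mass between the isothermal and adiabatic limits:

```python
# Sketch: bounds on the transferred helium mass inferred from receiver
# pressure under ideal-gas PVT accounting (illustrative numbers only).
R = 2077.0          # J/(kg K), specific gas constant of helium
V = 0.090e-3        # m^3, receiver volume (0.090 L, smallest vessel in study)
T0 = 293.0          # K, ambient/initial temperature
P = 10e6            # Pa, assumed measured receiver pressure after transfer

# Isothermal limit: gas fully equilibrated with the vessel wall
m_iso = P * V / (R * T0)

# Adiabatic-fill limit: fill heating raises the gas temperature to ~gamma*T0
gamma = 5.0 / 3.0   # monatomic helium
m_adi = P * V / (R * gamma * T0)

print(m_iso, m_adi)  # the two mass estimates differ by the factor gamma
```

Real transfers fall between these limits, which is exactly why validated heat transfer correlations are needed to pin down the mass-versus-time history.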

  14. Characterization of downwelling radiance measured from a ground-based microwave radiometer using numerical weather prediction model data

    Science.gov (United States)

    Ahn, M.-H.; Won, H. Y.; Han, D.; Kim, Y.-H.; Ha, J.-C.

    2016-01-01

    The ground-based microwave sounding radiometers installed alongside wind profilers at nine weather stations of the Korea Meteorological Administration have been operating for more than 4 years. Here we apply a process to assess the characteristics of the observation data by comparing the measured brightness temperature (Tb) with reference data. For the current study, the reference data are prepared by radiative transfer simulation with the temperature and humidity profiles from a numerical weather prediction model instead of conventional radiosonde data. Based on 3 years of data, from 2010 to 2012, we were able to characterize the effects of the absolute calibration on the quality of the measured Tb. We also showed that, when clouds are present, the comparison with the model has a high variability due to the presence of cloud liquid water, making cloudy data unsuitable for assessing the radiometer's performance. Finally, we showed that differences between modeled and measured brightness temperatures are unlikely to be due to a shift in the selection of the center frequency but more likely due to spectroscopy issues in the wings of the 60 GHz absorption band. With proper consideration of data affected by these two effects, there is excellent agreement between the measured and simulated Tb. The regression coefficients are better than 0.97, with biases better than 1.0 K, except for the 52.28 GHz channel, which shows a rather large bias and variability of -2.6 and 1.8 K, respectively.

  15. Predicting responses from Rasch measures.

    Science.gov (United States)

    Linacre, John M

    2010-01-01

    There is a growing family of Rasch models for polytomous observations. Selecting a suitable model for an existing dataset, estimating its parameters and evaluating its fit is now routine. Problems arise when the model parameters are to be estimated from the current data but used to predict future data. In particular, ambiguities in the nature of the current data, or overfit of the model to the current dataset, may mean that better fit to the current data leads to worse fit to future data. The predictive power of several Rasch and Rasch-related models is discussed in the context of the Netflix Prize. Rasch-related models are proposed based on Singular Value Decomposition (SVD) and Boltzmann Machines.
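
For readers unfamiliar with the family: the dichotomous Rasch model, which the polytomous models generalize, predicts a response probability from only the difference between person ability and item difficulty. A minimal sketch:

```python
import numpy as np

# Dichotomous Rasch model: P(correct) depends only on theta - b,
# where theta is person ability and b is item difficulty (both in logits).
def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

theta = 1.0                           # person ability
b = np.array([-1.0, 0.0, 1.0, 2.0])  # item difficulties
p = rasch_prob(theta, b)
print(p)          # decreases as items get harder
print(p.sum())    # expected raw score on the four items
```

Prediction of future data amounts to evaluating this curve at parameter values estimated from the current data, which is where the overfit issues discussed above enter.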

  16. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Full Text Available Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screenings of the population at risk. Identifying individuals at high risk should allow targeted screenings and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) who underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer-Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (from 1 to 10 dysplastic naevi, OR = 2.672; 95% CI 1.572-4.540; for more than 10 naevi, OR = 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi.
    Red hair, phototype I and large congenital naevi were
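
The reported odds ratios can be combined into a logistic risk score of the kind the abstract calls the MRS: the log-odds of the present risk factors add, and the logistic function converts the total to a probability. The intercept below is a hypothetical placeholder (the baseline log-odds is not given in the abstract), so this sketch only illustrates the mechanics, not the study's actual score:

```python
import math

# Log odds ratios taken from the reported LR results; intercept is assumed.
log_or = {
    "sunbeds_sometimes": math.log(4.018),
    "severe_solar_damage": math.log(8.274),
    "light_brown_blond_hair": math.log(3.222),
    "over_100_common_naevi": math.log(3.57),
}

def risk(present, intercept=-3.0):
    """Logistic risk from the factors present; intercept is a placeholder."""
    score = intercept + sum(log_or[f] for f in present)
    return 1.0 / (1.0 + math.exp(-score))

print(risk([]))                                            # baseline
print(risk(["sunbeds_sometimes", "severe_solar_damage"]))  # two factors present
```

Each additional factor multiplies the odds by its OR, which is why the probabilities rise monotonically as factors are added.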

  17. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence ... are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production ... and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system, e.g. caused by the annual variations...

  18. Predictive models in urology.

    Science.gov (United States)

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve and therefore how current relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity not only for academic and scientific purposes but also into the clinical practice with the introduction of several nomograms dealing with the main fields of onco-urology.

  19. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... paper, we will present an introduction to the theory and application of MPC with Matlab codes written to ... model predictive control, linear systems, discrete-time systems, ... and then compute very rapidly for this open-loop con-.
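
The receding-horizon idea behind MPC can be sketched in a few lines (Python here rather than the article's Matlab; this is a generic unconstrained linear-MPC toy, not the article's code): at each step an open-loop finite-horizon problem is solved, and only the first input is applied.

```python
import numpy as np

# Receding-horizon MPC for a discrete-time double integrator (assumed plant).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
N = 10                        # prediction horizon

def mpc_step(x):
    # Prediction matrices: x_k = A^k x0 + sum_j A^(k-1-j) B u_j, k = 1..N
    Phi = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    G = np.zeros((2 * N, N))
    for k in range(1, N + 1):
        for j in range(k):
            G[2*(k-1):2*k, j] = (np.linalg.matrix_power(A, k-1-j) @ B).ravel()
    # Least squares: drive predicted states to the origin with small inputs
    H = np.vstack([G, 0.1 * np.eye(N)])
    y = np.concatenate([-(Phi @ x), np.zeros(N)])
    u = np.linalg.lstsq(H, y, rcond=None)[0]
    return u[0]               # receding horizon: apply only the first input

x = np.array([5.0, 0.0])
for _ in range(30):
    x = A @ x + B.ravel() * mpc_step(x)
print(x)                      # state regulated toward the origin
```

The open-loop problem here is solved "very rapidly" because, without constraints, it reduces to a single least-squares solve; constrained MPC replaces this with a quadratic program.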

  20. Nominal model predictive control

    OpenAIRE

    Grüne, Lars

    2013-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems.We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  1. Nominal Model Predictive Control

    OpenAIRE

    Grüne, Lars

    2014-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems.We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  2. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  3. Comparison of linear and non-linear blade model predictions in Bladed to measurement data from GE 6MW wind turbine

    Science.gov (United States)

    Collier, W.; Milian Sanz, J.

    2016-09-01

    The length and flexibility of wind turbine blades are increasing over time. Typically, the dynamic response of the blades is analysed using linear models of blade deflection, enhanced by various ad hoc non-linear correction models. For blades undergoing large deflections, the small-deflection assumption inherent to linear models becomes less valid. It has previously been demonstrated that linear and non-linear blade models can show significantly different blade response, particularly for blade torsional deflection, leading to load prediction differences. There is a need to evaluate how load predictions from these two approaches compare to measurement data from the field. In this paper, time domain simulations in turbulent wind are carried out using the aero-elastic code Bladed with linear and non-linear blade deflection models. The simulated turbine blade loads and deflections are compared to measurement data from an onshore prototype of the GE 6MW Haliade turbine, which features 73.5 m long LM blades. Both linear and non-linear blade models show a good match to the measured turbine loads and blade deflections. Only the blade loads differ significantly between the two models; other turbine loads are not strongly affected. The non-linear blade model gives a better match to the measured blade root flapwise damage equivalent load, suggesting that the flapwise dynamic behaviour is better captured by the non-linear blade model. Conversely, the linear blade model shows a better match to measurements in some areas, such as the blade edgewise damage equivalent load.

  4. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  5. Predicting Student Grade Point Average at a Community College from Scholastic Aptitude Tests and from Measures Representing Three Constructs in Vroom's Expectancy Theory Model of Motivation.

    Science.gov (United States)

    Malloch, Douglas C.; Michael, William B.

    1981-01-01

    This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectancy theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…

  7. Validation of three new measure-correlate-predict models for the long-term prospection of the wind resource

    OpenAIRE

    Romo, Alejandro; Amezcua, Javier; Probst, Oliver

    2011-01-01

    The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective ...
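
The core of the simplest MCP variant is an ordinary linear regression fitted on the concurrent records and then applied to the full long-term reference series. A synthetic-data sketch (this illustrates only the classical linear MCP relation, not the paper's three new models):

```python
import numpy as np

# Linear MCP sketch: fit  site = a*ref + b  on a short concurrent period,
# then apply the relation to the long-term reference record.
rng = np.random.default_rng(1)
ref_long = rng.weibull(2.0, 20000) * 8.0     # long-term reference speeds (m/s)
site_true = 0.9 * ref_long + 0.5             # assumed true site relation
concurrent = slice(0, 2000)                  # short on-site campaign
site_meas = site_true[concurrent] + rng.normal(0.0, 0.3, 2000)

a, b = np.polyfit(ref_long[concurrent], site_meas, 1)
site_pred = a * ref_long + b                 # long-term site prediction
print(a, b, site_pred.mean())
```

The long-term mean speed (and hence the energy estimate) then comes from `site_pred` rather than from the short measured sample alone.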

  9. ISOL Yield Predictions from Holdup-Time Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Spejewski, Eugene H. [Oak Ridge Associated Universities (ORAU); Carter, H Kennon [Oak Ridge Associated Universities (ORAU); Mervin, Brenden T. [Oak Ridge Associated Universities (ORAU); Prettyman, Emily S. [Oak Ridge Associated Universities (ORAU); Kronenberg, Andreas [Oak Ridge Associated Universities (ORAU); Stracener, Daniel W [ORNL

    2008-01-01

    A formalism based on a simple model is derived to predict ISOL yields for all isotopes of a given element based on a holdup-time measurement of a single isotope of that element. Model predictions, based on parameters obtained from holdup-time measurements, are compared to independently measured experimental values.
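
The logic of such a model can be sketched with a simple assumed form (not necessarily the paper's exact formalism): for first-order release with mean holdup time tau, the fraction of nuclei that survive decay before release is 1/(1 + lambda*tau), so a tau fitted from one isotope fixes the decay-loss correction for every other isotope of the element.

```python
import math

# Illustrative holdup-time yield model: released (undecayed) fraction of a
# nuclide with decay constant lam, assuming first-order release with mean
# holdup time tau (tau itself would be fitted from one measured isotope).
def release_efficiency(half_life_s, tau_s):
    lam = math.log(2) / half_life_s
    return 1.0 / (1.0 + lam * tau_s)

tau = 5.0  # s, assumed holdup time from a single-isotope measurement
for t_half in (0.1, 1.0, 10.0, 100.0):
    print(t_half, release_efficiency(t_half, tau))
```

Short-lived isotopes are suppressed most, which is why a single holdup-time measurement constrains the whole isotopic yield pattern.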

  10. Development of a statistical model for predicting the ethanol content of blood from measurements on saliva or breath samples.

    Science.gov (United States)

    Ruz, J; Linares, P; Luque de Castro, M D; Caridad, J M; Valcarcel, M

    1989-01-01

    Blood, saliva and breath samples from a population of males and females subjected to the intake of preselected amounts of ethanol, whilst in different physical conditions (at rest, after physical exertion, on an empty stomach and after eating), were analysed by automatic methods employing immobilized (blood) or dissolved (saliva) enzymes and a breath analyser. Treatment of the results enabled the development of a statistical model for predicting the ethanol concentration in blood at a given time from the ethanol concentration in saliva or breath obtained at a later time.

  11. Monte carlo simulation of base and nucleotide excision repair of clustered DNA damage sites. II. Comparisons of model predictions to measured data.

    Science.gov (United States)

    Semenenko, V A; Stewart, R D

    2005-08-01

    Clustered damage sites other than double-strand breaks (DSBs) have the potential to contribute to deleterious effects of ionizing radiation, such as cell killing and mutagenesis. In the companion article (Semenenko et al., Radiat. Res. 164, 180-193, 2005), a general Monte Carlo framework to simulate key steps in the base and nucleotide excision repair of DNA damage other than DSBs is proposed. In this article, model predictions are compared to measured data for selected low- and high-LET radiations. The Monte Carlo model reproduces experimental observations for the formation of enzymatic DSBs in Escherichia coli and cells of two Chinese hamster cell lines (V79 and xrs5). Comparisons of model predictions with experimental values for low-LET radiation suggest that an inhibition of DNA backbone incision at the sites of base damage by opposing strand breaks is active over longer distances between the damaged base and the strand break in hamster cells (8 bp) compared to E. coli (3 bp). Model estimates for the induction of point mutations in the human hypoxanthine guanine phosphoribosyl transferase (HPRT) gene by ionizing radiation are of the same order of magnitude as the measured mutation frequencies. Trends in the mutation frequency for low- and high-LET radiation are predicted correctly by the model. The agreement between selected experimental data sets and simulation results provides some confidence in postulated mechanisms for excision repair of DNA damage other than DSBs and suggests that the proposed Monte Carlo scheme is useful for predicting repair outcomes.

  12. MRI blood-brain barrier permeability measurements to predict hemorrhagic transformation in a rat model of ischemic stroke.

    Science.gov (United States)

    Hoffmann, Angelika; Bredno, Jörg; Wendland, Michael F; Derugin, Nikita; Hom, Jason; Schuster, Tibor; Zimmer, Claus; Su, Hua; Ohara, Peter T; Young, William L; Wintermark, Max

    2012-12-01

    Permeability imaging might add valuable information in the risk assessment of hemorrhagic transformation. This study evaluates the predictive value of blood-brain barrier permeability (BBBP) measurements extracted from dynamic contrast-enhanced MRI for hemorrhagic transformation in ischemic stroke. Spontaneously hypertensive and Wistar rats with 2 h filament occlusion of the right MCA underwent MRI during occlusion and at 4 and 24 h post reperfusion. BBBP was imaged by DCE imaging and quantified by Patlak analysis. Cresyl violet staining was used to characterize hemorrhage in rats sacrificed at 24 h, immediately following the last imaging study. BBBP changes were evaluated at baseline and at 4 and 24 h after reperfusion. Receiver-operating characteristic (ROC) analysis was performed to determine the most accurate BBBP threshold to predict hemorrhagic transformation. In animals showing macroscopic hemorrhage at 24 h, 95th-percentile ipsilateral BBBP values were 0.323 [0.260, 0.387], 0.685 [0.385, 0.985], and 0.412 [0.210, 0.613] ml/min·100 g (marginal mean [95% CI]) during occlusion and at 4 and 24 h post reperfusion, respectively. The BBBP values on the infarcted and contralateral sides were significantly different at 4 h (p = 0.034) and 24 h post reperfusion (p = 0.031). The predictive value of BBBP for macroscopic hemorrhage was highest 4 h after reperfusion (ROC area under the curve = 84%), with a high negative predictive value (98.3%) and limited positive predictive value (14.9%) for a threshold of 0.35 ml/min·100 g. Altered BBBP is a necessary but not sufficient condition to cause hemorrhagic transformation in rats with an infarct. Further research is needed to identify the additional risk factors required for hemorrhagic transformation to develop in the setting of ischemic stroke.
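
Patlak analysis, used above to quantify BBBP, linearizes the tissue uptake curve: for an irreversible-uptake model, plotting C_tissue/C_plasma against the "Patlak time" (the integral of C_plasma divided by C_plasma) gives a straight line whose slope is the transfer constant. A synthetic sketch with assumed curves:

```python
import numpy as np

# Patlak plot sketch: recover the transfer constant K and blood volume v0
# from synthetic DCE curves  C_t(t) = K * int(C_p) + v0 * C_p(t).
dt = 1.0                                 # s
t = np.arange(1, 301, dt)
Cp = 5.0 * np.exp(-t / 120.0)            # assumed plasma input function
K_true, v0 = 0.002, 0.05                 # transfer constant (1/s), blood volume
Ct = K_true * np.cumsum(Cp) * dt + v0 * Cp

x = np.cumsum(Cp) * dt / Cp              # "Patlak time"
y = Ct / Cp
K_est, v0_est = np.polyfit(x, y, 1)      # slope = K, intercept = v0
print(K_est, v0_est)
```

In the study, voxelwise slopes of this kind (in ml/min·100 g) are what the ROC thresholds are applied to.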

  13. Comparison of Measured Rain Attenuation in the 12.25 GHz Band with Predictions by the ITU-R Model

    Directory of Open Access Journals (Sweden)

    Dong You Choi

    2012-01-01

    Full Text Available Quantitative analysis and prediction of radio attenuation is necessary in order to improve the reliability of satellite-earth communication links and for economically efficient design. For this reason, many countries have made efforts to develop their own rain attenuation prediction models suited to their rain environment. In this paper, we present the results of measurements of rain-induced attenuation in vertically polarized signals propagating at 12.25 GHz during rain events that occurred in the rainy seasons of 2001 and 2007 at Yong-in, Korea. The rain attenuation over the link path was measured experimentally and compared with the attenuation obtained using the ITU-R model.
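
The ITU-R style prediction rests on a power-law relation: the specific attenuation is gamma = k·R^alpha (dB/km) for rain rate R (mm/h), multiplied by an effective path length. The k and alpha below are placeholders for illustration, not the official frequency- and polarization-dependent ITU-R P.838 coefficients for 12.25 GHz:

```python
# Power-law rain attenuation sketch (coefficients are illustrative only).
def rain_attenuation_db(R_mm_h, d_eff_km, k=0.02, alpha=1.2):
    gamma = k * R_mm_h ** alpha      # specific attenuation, dB/km
    return gamma * d_eff_km          # total path attenuation, dB

for R in (10, 50, 100):              # light to very heavy tropical rain
    print(R, rain_attenuation_db(R, d_eff_km=4.0))
```

The super-linear exponent is why attenuation statistics are dominated by the rare intense-rain events of the wet season.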

  14. SUPPORT VECTOR MACHINE METHOD FOR PREDICTING INVESTMENT MEASURES

    Directory of Open Access Journals (Sweden)

    Olga V. Kitova

    2016-01-01

    Full Text Available Possibilities of applying an intelligent machine learning technique based on support vectors for predicting investment measures are considered in the article. The basic advantages of the support vector method over traditional econometric techniques for improving forecast quality are described. Computer modeling results from tuning support vector machine models, developed in the Python programming language, for predicting several investment measures are shown.
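
A minimal sketch of the approach (using scikit-learn's SVR rather than the authors' own code, on synthetic data): an investment-style indicator is forecast one step ahead from its lagged values.

```python
import numpy as np
from sklearn.svm import SVR

# Support vector regression sketch: one-step-ahead forecast of a synthetic
# indicator from its last 4 values. Hyperparameters are illustrative.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.1, 1.0, 300))   # synthetic investment index

lags = 4
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X[:-50], y[:-50])                     # train on all but the last 50
pred = model.predict(X[-50:])                   # out-of-sample forecasts
print(np.mean(np.abs(pred - y[-50:])))          # mean absolute error
```

In practice the kernel, C and epsilon would be tuned by cross-validation, which is the "tuning" step the abstract refers to.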

  15. Tropospheric scintillation prediction models for a high elevation angle based on measured data from a tropical region

    Science.gov (United States)

    Abdul Rahim, Nadirah Binti; Islam, Md. Rafiqul; J. S., Mandeep; Dao, Hassan; Bashir, Saad Osman

    2013-12-01

    The recent rapid evolution of new satellite services, including VSAT for internet access, LAN interconnection and multimedia applications, has triggered an increasing demand for bandwidth in satellite communications. However, these systems are susceptible to propagation effects that become significant as the frequency increases. Scintillation is the rapid fluctuation of the amplitude and phase of a radio wave, which is significant in tropical climates. This paper presents an analysis of tropospheric scintillation data for satellite-to-Earth links in the Ku-band. Twelve months of data (January-December 2011) were collected and analyzed to evaluate the effect of tropospheric scintillation. The statistics were then further analyzed to inspect seasonal, worst-month, diurnal and rain-induced scintillation effects. Based on the scintillation data measured in Malaysia, a modification of the Karasawa model for scintillation fades and enhancements is proposed.

  16. Measuring the impact of observations on the predictability of the Kuroshio Extension in a shallow-water model

    NARCIS (Netherlands)

    Kramer, W.; Dijkstra, H.A.; Pierini, S.; van Leeuwen, P.J.

    2012-01-01

    In this paper, sequential importance sampling is used to assess the impact of observations on an ensemble prediction for the decadal path transitions of the Kuroshio Extension. This particle-filtering approach gives access to the probability density of the state vector, which allows the predictive p
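
The sequential-importance-sampling step behind this approach can be sketched in a few lines: each ensemble member (particle) is reweighted by the likelihood of a new observation, and the effective sample size measures how much the observation constrained the ensemble. This is a generic scalar toy, not the Kuroshio shallow-water setup:

```python
import numpy as np

# Sequential importance sampling: reweight a prior particle ensemble by the
# likelihood of an observation with Gaussian error.
rng = np.random.default_rng(0)
n = 500
particles = rng.normal(0.0, 1.0, n)     # prior ensemble of a scalar state
w = np.full(n, 1.0 / n)                 # uniform initial weights

obs, obs_std = 0.8, 0.5                 # assumed observation and its error
lik = np.exp(-0.5 * ((obs - particles) / obs_std) ** 2)
w = w * lik
w /= w.sum()                            # normalized importance weights

ess = 1.0 / np.sum(w ** 2)              # effective sample size
post_mean = np.sum(w * particles)       # weighted (posterior) estimate
print(post_mean, ess)
```

A small effective sample size signals that the observation strongly reshaped the predictive density, which is exactly the "impact of observations" the paper quantifies.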

  17. Predictive Models for Music

    OpenAIRE

    Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy

    2008-01-01

    Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...

  18. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the Department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani...

  19. A jet engine noise measurement and prediction tool.

    Science.gov (United States)

    Frendi, Abdelkader; Dorland, Wade D; Maung, Thein; Nesman, Tom; Wang, Ten-See

    2002-11-01

    In this paper, the authors describe an innovative jet engine noise measurement and prediction tool. The tool measures sound-pressure levels and frequency spectra in the far field. In addition, the tool provides predicted results while the measurements are being made. The predictions are based on an existing computational fluid dynamics database coupled to an empirical acoustic radiation model based on the far-field approximation to the Lighthill acoustic analogy. Preliminary tests of this acoustic measurement and prediction tool produced very encouraging results.

  20. Measuring and predicting heterogeneous recessions

    NARCIS (Netherlands)

    C. Cakmakli; R. Paap; D. van Dijk

    2011-01-01

    This paper conducts an empirical analysis of the heterogeneity of recessions in monthly U.S. coincident and leading indicator variables. Univariate Markov-switching models indicate that it is appropriate to allow for two distinct recession regimes, corresponding with ‘mild’ and ‘severe’ recessions. A

  1. Micrometeorological measurement of hexachlorobenzene and polychlorinated biphenyl compound air-water gas exchange in Lake Superior and comparison to model predictions

    Directory of Open Access Journals (Sweden)

    M. D. Rowe

    2012-05-01

    Full Text Available Air-water exchange fluxes of persistent, bioaccumulative and toxic (PBT) substances are frequently estimated using the Whitman two-film (W2F) method, but micrometeorological flux measurements of these compounds over water are rarely attempted. We measured air-water exchange fluxes of hexachlorobenzene (HCB) and polychlorinated biphenyls (PCBs) on 14 July 2006 in Lake Superior using the modified Bowen ratio (MBR) method. Measured fluxes were compared to estimates using the W2F method, and to estimates from an Internal Boundary Layer Transport and Exchange (IBLTE) model that implements the NOAA COARE bulk flux algorithm and gas transfer model. We reveal an inaccuracy in the estimate of water vapor transfer velocity that is commonly used with the W2F method for PBT flux estimation, and demonstrate the effect of using an improved estimation method. Flux measurements were conducted at three stations with increasing fetch in offshore flow (15, 30, and 60 km) in southeastern Lake Superior. This sampling strategy enabled comparison of measured and predicted flux, as well as modification in near-surface atmospheric concentration with fetch, using the IBLTE model. Fluxes estimated using the W2F model were compared to fluxes measured by MBR. In five of seven cases in which the MBR flux was significantly greater than zero, concentration increased with fetch at 1 m height, which is qualitatively consistent with the measured volatilization flux. As far as we are aware, these are the first reported ship-based micrometeorological air-water exchange flux measurements of PCBs.
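
The MBR method itself reduces to a ratio: assuming equal turbulent diffusivities for both scalars, the trace-gas flux equals a measured reference flux (e.g. water vapor) scaled by the ratio of vertical concentration differences at two heights. All numbers below are illustrative, not values from the study:

```python
# Modified Bowen ratio (MBR) sketch: infer a trace-gas flux from a reference
# scalar flux and concentration differences measured at two heights.
def mbr_flux(flux_ref, dC_trace, dC_ref):
    """Trace-gas flux assuming equal turbulent diffusivities for both scalars."""
    return flux_ref * (dC_trace / dC_ref)

flux_h2o = 2.0e-3        # reference water-vapor flux (arbitrary units)
dC_pcb = -4.0e-12        # PCB concentration difference between two heights
dC_h2o = -1.0e-4         # water-vapor difference over the same heights

result = mbr_flux(flux_h2o, dC_pcb, dC_h2o)
print(result)            # positive sign -> volatilization (water to air)
```

The practical difficulty, as the abstract notes, is resolving the tiny PCB concentration difference between heights, which is why such measurements are rarely attempted.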

  2. Micrometeorological measurement of hexachlorobenzene and polychlorinated biphenyl compound air-water gas exchange in Lake Superior and comparison to model predictions

    Directory of Open Access Journals (Sweden)

    M. D. Rowe

    2012-01-01

    Full Text Available Air-water exchange fluxes of persistent, bioaccumulative and toxic (PBT) substances are frequently estimated using the Whitman two-film (W2F) method, but micrometeorological flux measurements of these compounds over water are rarely attempted. We measured air-water exchange fluxes of hexachlorobenzene (HCB) and polychlorinated biphenyls (PCBs) on 14 July 2006 in Lake Superior using the modified Bowen ratio (MBR) method. Measured fluxes were compared to estimates using the W2F method, and to estimates from an Internal Boundary Layer Transport and Exchange (IBLTE) model that implements the NOAA COARE bulk flux algorithm and gas transfer model. We reveal an inaccuracy in the estimate of water vapor transfer velocity that is commonly used with the W2F method for PBT flux estimation, and demonstrate the effect of using an improved estimation method. Flux measurements were conducted at three stations with increasing fetch in offshore flow (15, 30, and 60 km) in southeastern Lake Superior. This sampling strategy enabled comparison of measured and predicted flux, as well as modification in near-surface atmospheric concentration with fetch, using the IBLTE model. Fluxes estimated using the W2F model were compared to fluxes measured by MBR. In five of seven cases in which the MBR flux was significantly greater than zero, concentration increased with fetch at 1 m height, which is qualitatively consistent with the measured volatilization flux. As far as we are aware, these are the first reported micrometeorological air-water exchange flux measurements of PCBs.

  3. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
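The PPMC idea above compares a discrepancy measure computed on the observed data with its distribution over data sets replicated from the posterior. A toy sketch (a normal-mean model with a variance discrepancy, not the multidimensional Bayesian network setting of the study):

```python
# Toy posterior predictive check: does a N(mu, 1) model reproduce the
# spread of the observed data? Discrepancy = sample variance.
import random
import statistics

random.seed(1)
observed = [random.gauss(0.0, 2.0) for _ in range(50)]  # true sd = 2, model assumes sd = 1
n = len(observed)

# Conjugate posterior for mu under a N(mu, 1) likelihood and flat prior: N(xbar, 1/n)
xbar = statistics.fmean(observed)
d_obs = statistics.variance(observed)

exceed = 0
draws = 500
for _ in range(draws):
    mu = random.gauss(xbar, (1.0 / n) ** 0.5)        # posterior draw
    rep = [random.gauss(mu, 1.0) for _ in range(n)]  # replicated data set
    if statistics.variance(rep) >= d_obs:
        exceed += 1

ppp = exceed / draws  # extreme ppp (near 0 or 1) flags data-model misfit
print(round(ppp, 3))
```

Here the model's variance is too small, so replicated data sets almost never match the observed spread and the posterior predictive p-value sits near zero.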

  4. A comparison of forest height prediction from FIA field measurement and LiDAR data via spatial models

    Science.gov (United States)

    Yuzhen Li

    2009-01-01

    Previous studies have shown a high correspondence between tree height measurements acquired from airborne LiDAR and those measured using conventional field techniques. Though these results are very promising, most of the studies were conducted over small experimental areas, and tree height was measured carefully or using expensive instruments in the field, which is...

  5. Prediction of harmful water quality parameters combining weather, air quality and ecosystem models with in situ measurement

    Science.gov (United States)

    The ability to predict water quality in lakes is important since lakes are sources of water for agriculture, drinking, and recreational uses. Lakes are also home to a dynamic ecosystem of lacustrine wetlands and deep waters. They are sensitive to pH changes and are dependent on d...

  6. 14NH3 Line Positions and Intensities in the Far-Infrared: Comparison of FT-IR Measurements to Empirical Hamiltonian Model Predictions

    Science.gov (United States)

    Sung, Keeyoon; Yu, Shanshan; Pearson, John; Pirali, Olivier; Kwabia Tchana, F.; Manceron, Laurent

    2016-06-01

    We have analyzed multiple spectra of a high-purity (99.5%) normal ammonia sample recorded at room temperature using the FT-IR and AILES beamline at Synchrotron SOLEIL, France. More than 2830 line positions and intensities were measured for inversion-rotation and rovibrational transitions in the 50-660 cm-1 region. Quantum assignments were made for 2047 transitions from eight bands, including four inversion-rotation bands (gs(a-s), ν2(a-s), 2ν2(a-s), and ν4(a-s)) and four rovibrational bands (ν2 - gs, 2ν2 - gs, ν4 - ν2, and 2ν2 - ν4), covering more than 300 lines of ΔK = 3 forbidden transitions. Of the eight bands, we note that 2ν2 - ν4 is not listed in the HITRAN 2012 database. The measured line positions for the assigned transitions are in excellent agreement (typically better than 0.001 cm-1) with predictions from the empirical Hamiltonian model [S. Yu, J.C. Pearson, B.J. Drouin, et al. (2010)] over a wide range of J and K for all eight bands. The comparison with the HITRAN 2012 database is also satisfactory, although systematic offsets are seen for transitions with high J and K and for those from weak bands. However, differences of 20% or so between the measurements and the model predictions are seen in line intensities for allowed transitions, depending on the band. We have also noticed that most of the intensity outliers in the Hamiltonian model predictions belong to transitions from the gs(a-s) band. We present the final results of the FT-IR measurements of line positions and intensities, and their comparisons to the model predictions and the HITRAN 2012 database. Research described in this paper was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contracts and cooperative agreements with the National Aeronautics and Space Administration.

  7. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as the estimates of population average confidence scores. The latter can be used...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...
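The Brier score on which these confidence scores build is the mean squared difference between predicted risk and the observed binary outcome (lower is better). A minimal sketch with made-up predictions:

```python
# Brier score sketch: mean squared difference between predicted risk and
# the 0/1 outcome. Predictions and outcomes below are made up.

def brier_score(predicted_risks, outcomes):
    return sum((p - y) ** 2 for p, y in zip(predicted_risks, outcomes)) / len(outcomes)

# A well-calibrated model (left) vs a confidently wrong one (right)
good = brier_score([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
bad = brier_score([0.1, 0.2, 0.9, 0.8], [1, 1, 0, 0])
print(good, bad)  # good < bad
```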

  8. Modelling, controlling, predicting blackouts

    CERN Document Server

    Wang, Chengwei; Baptista, Murilo S

    2016-01-01

    The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, playing havoc to human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another one for smart grids. The control strategie...

  9. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  10. Predicted serum folate concentrations based on in vitro studies and kinetic modeling are consistent with measured folate concentrations in humans

    NARCIS (Netherlands)

    Verwei, M.; Freidig, A.P.; Havenaar, R.; Groten, J.P.

    2006-01-01

    The nutritional quality of new functional or fortified food products depends on the bioavailability of the nutrient(s) in the human body. Bioavailability is often determined in human intervention studies by measurements of plasma or serum profiles over a certain time period. These studies are time a

  11. Multiple sclerosis: integration of modeling with biology, clinical and imaging measures to provide better monitoring of disease progression and prediction of outcome.

    Science.gov (United States)

    Goodwin, Shikha Jain

    2016-12-01

    Multiple Sclerosis (MS) is a major cause of neurological disability in adults and has an annual cost of approximately $28 billion in the United States. MS is a very complex disorder, as demyelination can happen in a variety of locations throughout the brain; therefore, the disease is never the same in two patients, making it very hard to predict disease progression. A modeling approach that combines clinical, biological and imaging measures to help treat and fight this disorder is needed. In this paper, I outline MS as a very heterogeneous disorder, review some potential solutions from the literature, demonstrate the need for a biomarker, and discuss how computational modeling combined with biological, clinical and imaging data can help link disparate observations and decipher complex mechanisms whose solutions are not amenable to simple reductionism.
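-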

  12. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    Science.gov (United States)

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
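The recommended logit-scale meta-analysis of C-statistics can be sketched as follows. The per-study C-statistics and logit-scale standard errors are illustrative, and the pooling uses the standard DerSimonian-Laird random-effects estimator:

```python
# Sketch: pool C-statistics on the logit scale (as recommended above) with
# a DerSimonian-Laird random-effects estimate. All inputs are illustrative.
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def dl_random_effects(effects, ses):
    """DerSimonian-Laird random-effects pooled estimate."""
    w = [1 / s ** 2 for s in ses]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = [1 / (s ** 2 + tau2) for s in ses]
    return sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)

c_stats = [0.72, 0.78, 0.69, 0.75, 0.81]   # per-study C-statistics (illustrative)
ses_logit = [0.10, 0.12, 0.09, 0.11, 0.15] # SEs on the logit scale (illustrative)

pooled_logit = dl_random_effects([logit(c) for c in c_stats], ses_logit)
pooled_c = inv_logit(pooled_logit)
print(round(pooled_c, 3))  # pooled C back on the 0-1 scale
```

Back-transforming the pooled logit keeps the summary C-statistic inside (0, 1), which is the point of working on the transformed scale.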

  13. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...

  14. Measurement of Setschenow constants for six hydrophobic compounds in simulated brines and use in predictive modeling for oil and gas systems.

    Science.gov (United States)

    Burant, Aniela; Lowry, Gregory V; Karamalidis, Athanasios K

    2016-02-01

    Treatment and reuse of brines, produced from energy extraction activities, requires aqueous solubility data for organic compounds in saline solutions. The presence of salts decreases the aqueous solubility of organic compounds (i.e. salting-out effect) and can be modeled using the Setschenow Equation, the validity of which has not been assessed in high salt concentrations. In this study, we used solid-phase microextraction to determine Setschenow constants for selected organic compounds in aqueous solutions up to 2-5 M NaCl, 1.5-2 M CaCl2, and in Na-Ca binary electrolyte solutions to assess additivity of the constants. These compounds exhibited log-linear behavior up to these high NaCl concentrations. Log-linear decreases in solubility with increasing salt concentration were observed up to 1.5-2 M CaCl2 for all compounds, and added to a sparse database of CaCl2 Setschenow constants. Setschenow constants were additive in binary electrolyte mixtures. New models to predict CaCl2 and KCl Setschenow constants from NaCl Setschenow constants were developed, which successfully predicted the solubility of the compounds measured in this study. Overall, data show that the Setschenow Equation is valid for a wide range of salinity conditions typically found in energy-related technologies.
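The Setschenow Equation used throughout the study, log10(S0/S) = Ks·Cs, with constants assumed additive across electrolytes as the study found, can be sketched as follows. The Ks values and the freshwater solubility here are illustrative, not the measured ones:

```python
# Setschenow equation sketch: salting-out of aqueous solubility, with
# constants assumed additive in mixed electrolytes. K_s values and the
# freshwater solubility are illustrative, not measured values.

def salted_solubility(s0, salts):
    """s0: solubility in pure water; salts: list of (K_s [L/mol], conc [mol/L])."""
    log10_ratio = sum(ks * c for ks, c in salts)  # log10(S0/S) = sum K_s,i * C_i
    return s0 / 10 ** log10_ratio

s0 = 1.0e-3                                                   # mol/L, illustrative
s_brine = salted_solubility(s0, [(0.25, 2.0), (0.30, 1.0)])   # NaCl + CaCl2 terms
print(f"{s_brine:.3e}")                                       # lower than s0
```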

  15. A novel modelling and experimental technique to predict and measure tissue temperature during CO2 laser stimuli for human pain studies.

    Science.gov (United States)

    Al-Saadi, Mohammed Hamed; Nadeau, V; Dickinson, M R

    2006-07-01

    Laser nerve stimulation is now accepted as one of the preferred methods for applying painful stimuli to human skin during pain studies. One of the main concerns, however, is thermal damage to the skin. We present recent work based on using a CO2 laser with a remote infrared (IR) temperature sensor as a feedback system. A model for predicting the subcutaneous skin temperature derived from the signal from the IR detector allows us to accurately predict the laser parameters, thus maintaining an optimum pain stimulus whilst avoiding dangerous temperature levels, which could result in thermal damage. Another aim is to relate the modelling of the CO2 fibre laser interaction to the pain response and compare these results with practical measurements of the pain threshold for various stimulus parameters. The system will also allow us to maintain a constant skin temperature during the stimulus. Another aim of the experiments underway is to review the psychophysics for pain in human subjects, permitting an investigation of the relationship between temperature and perceived pain.

  16. The prediction of zenith range refraction from surface measurements of meteorological parameters. [mathematical models of atmospheric refraction used to improve spacecraft tracking space navigation

    Science.gov (United States)

    Berman, A. L.

    1976-01-01

    In the last two decades, increasingly sophisticated deep space missions have placed correspondingly stringent requirements on navigational accuracy. As part of the effort to increase navigational accuracy, and hence the quality of radiometric data, much effort has been expended in an attempt to understand and compute the tropospheric effect on range (and hence range rate) data. The general approach adopted has been that of computing a zenith range refraction, and then mapping this refraction to any arbitrary elevation angle via an empirically derived function of elevation. The prediction of zenith range refraction derived from surface measurements of meteorological parameters is presented. Refractivity is separated into wet (water vapor pressure) and dry (atmospheric pressure) components. The integration of dry refractivity is shown to be exact. Attempts to integrate wet refractivity directly prove ineffective; however, several empirical models developed by the author and other researchers at JPL are discussed. The best current wet refraction model is here considered to be a separate day/night model, which is proportional to surface water vapor pressure and inversely proportional to surface temperature. Methods are suggested that might improve the accuracy of the wet range refraction model.
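The wet/dry split of surface refractivity described above can be sketched with the widely used Smith-Weintraub constants, which may differ from the exact values adopted at JPL:

```python
# Surface refractivity split into dry (pressure) and wet (water vapor)
# components, as in the paper's approach. Constants are the commonly used
# Smith-Weintraub values, an assumption, not necessarily the JPL ones.

def refractivity(p_hpa, t_kelvin, e_hpa):
    """Return (N_dry, N_wet) in N-units from surface pressure, temperature,
    and water vapor partial pressure."""
    n_dry = 77.6 * p_hpa / t_kelvin
    n_wet = 3.73e5 * e_hpa / t_kelvin ** 2
    return n_dry, n_wet

n_dry, n_wet = refractivity(p_hpa=1013.25, t_kelvin=290.0, e_hpa=12.0)
print(round(n_dry, 1), round(n_wet, 1))
```

The dry term integrates exactly along the zenith path (it depends only on surface pressure via hydrostatic balance), while the wet term is the one the abstract says must be modeled empirically.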

  17. Examination for Predicting Consolidation Settlement by Measurement Records

    Science.gov (United States)

    Kanayama, Motohei; Yamashita, Hiroki; Higashi, Takahiro; Ohtsubo, Masami

    Earthfill structures such as embankments, constructed to preserve agricultural land, have shown large settlement both during and after construction in the lowland area on the coast of the Ariake Sea, and the long-term settlement of these structures is being measured. The hyperbolic method, one of the best known methods for predicting settlement from measurement records, is used extensively both domestically and internationally. In this paper, a neural network model for predicting settlement from early-stage measurement records is examined. Using a learning pattern that focuses on the convergence of the settlement rate, the predicted values are in good agreement with the measured values. As a result, by having the model learn data with a suitable regularity, the proposed method can provide an early prediction of settlement with high accuracy.
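The hyperbolic method mentioned above models settlement as S(t) = t/(a + b·t), so t/S is linear in t and the ultimate settlement is 1/b. A sketch on synthetic data:

```python
# The hyperbolic method: settlement S(t) = t / (a + b*t), so t/S is linear
# in t and the ultimate settlement is 1/b. Data below are synthetic.

def fit_hyperbolic(times, settlements):
    """Least-squares fit of t/S = a + b*t; returns (a, b)."""
    ys = [t / s for t, s in zip(times, settlements)]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(ys) / n
    b = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
         / sum((t - tbar) ** 2 for t in times))
    a = ybar - b * tbar
    return a, b

# Synthetic settlement record generated from a = 20, b = 0.02 (so 1/b = 50)
times = [100.0, 200.0, 400.0, 800.0, 1200.0]
settlements = [t / (20.0 + 0.02 * t) for t in times]

a, b = fit_hyperbolic(times, settlements)
print(round(a, 3), round(1.0 / b, 1))  # recovers a = 20.0, ultimate settlement 50.0
```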

  18. Predictive models of forest dynamics.

    Science.gov (United States)

    Purves, Drew; Pacala, Stephen

    2008-06-13

    Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.

  19. Modeling typical performance measures

    NARCIS (Netherlands)

    Weekers, Anke Martine

    2009-01-01

    In the educational, employment, and clinical context, attitude and personality inventories are used to measure typical performance traits. Statistical models are applied to obtain latent trait estimates. Often the same statistical models as the models used in maximum performance measurement are appl

  20. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.

  1. Elastic anisotropy of layered rocks: Ultrasonic measurements of plagioclase-biotite-muscovite (sillimanite) gneiss versus texture-based theoretical predictions (effective media modeling)

    Science.gov (United States)

    Ivankina, T. I.; Zel, I. Yu.; Lokajicek, T.; Kern, H.; Lobanov, K. V.; Zharikov, A. V.

    2017-08-01

    In this paper we present experimental and theoretical studies on a highly anisotropic layered rock sample characterized by alternating layers of biotite and muscovite (retrogressed from sillimanite) and plagioclase and quartz, respectively. We applied two different experimental methods to determine seismic anisotropy at pressures up to 400 MPa: (1) measurement of P- and S-wave phase velocities on a cube in three foliation-related orthogonal directions and (2) measurement of P-wave group velocities on a sphere in 132 directions. The combination of the spatial distribution of P-wave velocities on the sphere (converted to phase velocities) with S-wave velocities of three orthogonal structural directions on the cube made it possible to calculate the bulk elastic moduli of the anisotropic rock sample. On the basis of the crystallographic preferred orientations (CPOs) of major minerals obtained by time-of-flight neutron diffraction, effective media modeling was performed using different inclusion methods and averaging procedures. The implementation of a nonlinear approximation of the P-wave velocity-pressure relation was applied to estimate the mineral matrix properties and the orientation distribution of microcracks. Comparison of theoretical calculations of elastic properties of the mineral matrix with those derived from the nonlinear approximation showed discrepancies in elastic moduli and P-wave velocities of about 10%. The observed discrepancies between the effective media modeling and ultrasonic velocity data are a consequence of the inhomogeneous structure of the sample and the inability to perform a long-wave approximation. Furthermore, small differences between elastic moduli predicted by the different theoretical models, including specific fabric characteristics such as crystallographic texture, grain shape and layering, were observed. It is shown that the bulk elastic anisotropy of the sample is basically controlled by the CPO of biotite and muscovite and their volume

  2. Transient, three-dimensional heat transfer model for the laser assisted machining of silicon nitride: 1. Comparison of predictions with measured surface temperature histories

    Energy Technology Data Exchange (ETDEWEB)

    Rozzi, J.C.; Pfefferkorn, F.E.; Shin, Y.C. [Purdue University, (United States). Laser Assisted Materials Processing Laboratory, School of Mechanical Engineering; Incropera, F.P. [University of Notre Dame, (United States). Aerospace and Mechanical Engineering Department

    2000-04-01

    Laser assisted machining (LAM), in which the material is locally heated by an intense laser source prior to material removal, provides an alternative machining process with the potential to yield higher material removal rates, as well as improved control of workpiece properties and geometry, for difficult-to-machine materials such as structural ceramics. To assess the feasibility of the LAM process and to obtain an improved understanding of governing physical phenomena, experiments have been performed to determine the thermal response of a rotating silicon nitride workpiece undergoing heating by a translating CO2 laser and material removal by a cutting tool. Using a focused laser pyrometer, surface temperature histories were measured to determine the effect of the rotational and translational speeds, the depth of cut, the laser-tool lead distance, and the laser beam diameter and power on thermal conditions. The measurements are in excellent agreement with predictions based on a transient, three-dimensional numerical solution of the heating and material removal processes. The temperature distribution within the unmachined workpiece is most strongly influenced by the laser power and laser-tool lead distance, as well as by the laser/tool translational velocity. A minimum allowable operating temperature in the material removal region corresponds to the YSiAlON glass transition temperature, below which tool fracture may occur. In a companion paper, the numerical model is used to further elucidate thermal conditions associated with laser assisted machining. (author)

  3. Linear and non-linear bias: predictions vs. measurements

    CERN Document Server

    Hoffmann, Kai; Gaztanaga, Enrique

    2016-01-01

    We study the linear and non-linear bias parameters which determine the mapping between the distributions of galaxies and the full matter density fields, comparing different measurements and predictions. Associating galaxies with dark matter haloes in the MICE Grand Challenge N-body simulation, we directly measure the bias parameters by comparing the smoothed density fluctuations of halos and matter in the same region at different positions as a function of smoothing scale. Alternatively, we measure the bias parameters by matching the probability distributions of halo and matter density fluctuations, which can be applied to observations. These direct bias measurements are compared to corresponding measurements from two-point and different third-order correlations, as well as predictions from the peak-background model, which we presented in previous articles using the same data. We find an overall variation of the linear bias measurements and predictions of $\sim 5 \%$ with respect to results from two-point corr...
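The direct bias measurement described above can be illustrated with a toy estimator: for a Gaussian matter field with halo fluctuations δ_h = b1·δ + (b2/2)·δ² + noise, the cross-moment ⟨δ_h δ⟩/⟨δ²⟩ recovers the linear bias to leading order. All values below are synthetic:

```python
# Toy direct linear-bias measurement on synthetic smoothed density fields.
# delta_h = b1*delta + (b2/2)*delta^2 + noise; for a Gaussian delta the
# estimator <delta_h * delta> / <delta^2> recovers b1 to leading order.
import random

random.seed(7)
b1_true, b2_true = 1.8, -0.4

delta = [random.gauss(0.0, 0.1) for _ in range(20000)]  # smoothed matter field
delta_h = [b1_true * d + 0.5 * b2_true * d * d + random.gauss(0.0, 0.02)
           for d in delta]

b1_est = (sum(dh * d for dh, d in zip(delta_h, delta))
          / sum(d * d for d in delta))
print(round(b1_est, 2))  # close to b1_true = 1.8
```

The quadratic term drops out of this estimator for a Gaussian field because ⟨δ³⟩ = 0, which is why it isolates the linear bias.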

  4. A prediction model for assessing residential radon concentration in Switzerland

    NARCIS (Netherlands)

    Hauri, D.D.; Huss, A.; Zimmermann, F.; Kuehni, C.E.; Roosli, M.

    2012-01-01

    Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the

  5. Feature Selection, Flaring Size and Time-to-Flare Prediction Using Support Vector Regression, and Automated Prediction of Flaring Behavior Based on Spatio-Temporal Measures Using Hidden Markov Models

    Science.gov (United States)

    Al-Ghraibah, Amani

    Solar flares release stored magnetic energy in the form of radiation and can have significant detrimental effects on earth, including damage to technological infrastructure. Recent work has considered methods to predict future flare activity on the basis of quantitative measures of the solar magnetic field. Accurate advanced warning of solar flare occurrence is an area of increasing concern and much research is ongoing in this area. Our previous work [111] utilized standard pattern recognition and classification techniques to determine (classify) whether a region is expected to flare within a predictive time window, using a Relevance Vector Machine (RVM) classification method. We extracted 38 features describing the complexity of the photospheric magnetic field; the resulting classification metrics provide the baseline against which we compare our new work. We find a true positive rate (TPR) of 0.8, a true negative rate (TNR) of 0.7, and a true skill score (TSS) of 0.49. This dissertation addresses three basic topics. The first topic is an extension of our previous work [111], where we consider a feature selection method to determine an appropriate feature subset, with cross-validation classification based on a histogram analysis of selected features. Classification using the top five features resulting from this analysis yields better classification accuracies across a large unbalanced dataset. In particular, the feature subsets provide better discrimination of the many regions that flare: we find a TPR of 0.85, a TNR of 0.65 (slightly lower than our previous work), and a TSS of 0.5, an improvement over our previous work. In the second topic, we study the prediction of solar flare size and time-to-flare using support vector regression (SVR). When we consider flaring regions only, we find an average error in estimating flare size of approximately half a GOES class. When we additionally consider non-flaring regions, we find an increased average
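The skill scores quoted above follow directly from the confusion matrix: TPR = TP/(TP+FN), TNR = TN/(TN+FP), and TSS = TPR + TNR - 1. A sketch with counts chosen to mirror the reported rates:

```python
# Skill scores from a confusion matrix. The counts are hypothetical,
# chosen only to reproduce the TPR/TNR/TSS values quoted above.

def skill_scores(tp, fn, tn, fp):
    tpr = tp / (tp + fn)            # true positive rate (sensitivity)
    tnr = tn / (tn + fp)            # true negative rate (specificity)
    return tpr, tnr, tpr + tnr - 1.0  # true skill score

tpr, tnr, tss = skill_scores(tp=85, fn=15, tn=65, fp=35)
print(tpr, tnr, round(tss, 2))  # 0.85 0.65 0.5
```

Unlike accuracy, the TSS is insensitive to the flare/no-flare class imbalance, which is why it is the headline metric for this kind of unbalanced dataset.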

  6. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  7. Measurements and predictions for nonevaporating sprays in a quiescent environment

    Science.gov (United States)

    Solomon, A. S. P.; Shuen, J.-S.; Faeth, G. M.; Zhang, Q.-F.

    1983-01-01

    Yule et al. (1982) have conducted a study of vaporizing sprays with the aid of laser techniques. The present investigation has the objective to supplement the measurements performed by Yule et al., by considering the limiting case of a spray in a stagnant environment. Mean and fluctuating velocities of the continuous phase are measured by means of laser Doppler anemometry (LDA) techniques, while Fraunhofer diffraction and slide impaction methods are employed to determine drop sizes. Liquid fluxes in the spray are found by making use of an isokinetic sampling probe. The obtained data are used as a basis for the evaluation of three models of the process, including a locally homogeneous flow (LHF) model, a deterministic separated flow (DSF) model, and a stochastic separated flow (SSF) model. It is found that the LHF and DSF models do not provide very satisfactory predictions for the test sprays, while the SSF model does provide reasonably good predictions of the observed structure.

  8. Predicting and measuring fluid responsiveness with echocardiography

    Directory of Open Access Journals (Sweden)

    Ashley Miller

    2016-06-01

    Full Text Available Echocardiography is ideally suited to guide fluid resuscitation in critically ill patients. It can be used to assess fluid responsiveness by looking at the left ventricle, aortic outflow, inferior vena cava and right ventricle. Static measurements and dynamic variables based on heart–lung interactions all combine to predict and measure fluid responsiveness and assess response to intravenous fluid resuscitation. Thorough knowledge of these variables, the physiology behind them and the pitfalls in their use allows the echocardiographer to confidently assess these patients and, in combination with clinical judgement, manage them appropriately.

  9. Integration of the predictions of two models with dose measurements in a case study of children exposed to the emissions of a lead smelter

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, R.; McKone, T.E.

    2009-03-01

    The predictions of two source-to-dose models are systematically evaluated against observed data collected in a village polluted by a currently operating secondary lead smelter. Both models were built up from several sub-models linked together and run using Monte-Carlo simulation, to calculate the distribution of children's blood lead levels attributable to the emissions from the facility. The first model system is composed of the CalTOX model linked to a recoded version of the IEUBK model. This system provides the distribution of the media-specific lead concentrations (air, soil, fruit, vegetables and blood) in the whole area investigated. The second model consists of a statistical model to estimate the lead deposition on the ground, a modified version of the HHRAP model and the same recoded version of the IEUBK model. This system provides an estimate of the exposure concentration for specific individuals living in the study area. The predictions of the first model system were improved in terms of accuracy and precision by performing a sensitivity analysis and using field data to correct the default value provided for the leaf wet density. However, in this case study, the first model system tends to overestimate the exposure due to exposed vegetables. The second model was tested for nine children with contrasting exposure conditions. It managed to capture the blood levels of eight of them. In the last case, exposure of the child through pathways not considered in the model may explain its failure. The advantage of this integrated model is that it provides outputs with lower variance than the first model system, but at the moment further tests are necessary before conclusions can be drawn about its accuracy.

  10. Prediction with measurement errors in finite populations.

    Science.gov (United States)

    Singer, Julio M; Stanek, Edward J; Lencina, Viviana B; González, Luz Mery; Li, Wenjun; Martino, Silvina San

    2012-02-01

    We address the problem of selecting the best linear unbiased predictor (BLUP) of the latent value (e.g., serum glucose fasting level) of sample subjects with heteroskedastic measurement errors. Using a simple example, we compare the usual mixed model BLUP to a similar predictor based on a mixed model framed in a finite population (FPMM) setup with two sources of variability, the first corresponding to simple random sampling and the second to heteroskedastic measurement errors. Under this last approach, we show that when measurement errors are subject-specific, the BLUP shrinkage constants are based on a pooled measurement error variance, as opposed to the individual variances generally considered for the usual mixed model BLUP. In contrast, when the heteroskedastic measurement errors are measurement condition-specific, the FPMM BLUP involves different shrinkage constants. We also show that in this setup, when measurement errors are subject-specific, the usual mixed model predictor is biased but has a smaller mean squared error than the FPMM BLUP, which points to some difficulties in the interpretation of such predictors.
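
    The shrinkage behaviour at issue here is easy to illustrate. In a one-way model, the BLUP of a subject's latent value shrinks the observed mean toward the population mean by λ = σ²_b / (σ²_b + σ²_e/n). The sketch below (names and numbers are ours, not the paper's) shows how using one pooled error variance rather than each subject's own variance changes the predictor:

```python
import numpy as np

def blup(y_bar, mu, var_between, var_error, n):
    """Best linear unbiased predictor of a subject's latent value.

    Shrinks the subject's sample mean y_bar toward the population mean mu
    with factor lam = var_between / (var_between + var_error / n).
    """
    lam = var_between / (var_between + var_error / n)
    return mu + lam * (y_bar - mu)

mu, var_b = 100.0, 25.0           # population mean, between-subject variance
y_bar = np.array([110.0, 110.0])  # two subjects with identical observed means
var_e = np.array([5.0, 45.0])     # but heteroskedastic measurement-error variances

# Usual mixed-model BLUP: each subject's own error variance
individual = blup(y_bar, mu, var_b, var_e, n=1)

# Pooled-variance predictor: one common error variance for both subjects
pooled = blup(y_bar, mu, var_b, var_e.mean(), n=1)

print(individual, pooled)  # pooling gives both subjects the same prediction
```

    With individual variances the noisier subject is shrunk harder toward the mean; with the pooled variance both subjects receive the same shrinkage despite unequal measurement precision, which is the contrast the abstract discusses.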

  11. Prediction models and development of an easy to use open-access tool for measuring lung function of individuals with motor complete spinal cord injury

    NARCIS (Netherlands)

    Mueller, Gabi; de Groot, Sonja; van der Woude, Lucas H.; Perret, Claudio; Michel, Franz; Hopman, Maria T. E.

    2012-01-01

    Objective: To develop statistical models to predict lung function and respiratory muscle strength from personal and lesion characteristics of individuals with motor complete spinal cord injury. Design: Cross-sectional, multi-centre cohort study. Subjects: A total of 440 individuals with traumatic, m

  12. Objectively-Measured Impulsivity and Attention-Deficit/Hyperactivity Disorder (ADHD): Testing Competing Predictions from the Working Memory and Behavioral Inhibition Models of ADHD

    Science.gov (United States)

    Raiker, Joseph S.; Rapport, Mark D.; Kofler, Michael J.; Sarver, Dustin E.

    2012-01-01

    Impulsivity is a hallmark of two of the three DSM-IV ADHD subtypes and is associated with myriad adverse outcomes. Limited research, however, is available concerning the mechanisms and processes that contribute to impulsive responding by children with ADHD. The current study tested predictions from two competing models of ADHD--working memory (WM)…

  13. How many longitudinal covariate measurements are needed for risk prediction?

    Science.gov (United States)

    Reinikainen, Jaakko; Karvanen, Juha; Tolonen, Hanna

    2016-01-01

    In epidemiologic follow-up studies, many key covariates, such as smoking, use of medication, blood pressure, and cholesterol, are time varying. Because of practical and financial limitations, time-varying covariates cannot be measured continuously, but only at certain prespecified time points. We study how the number of these longitudinal measurements can be chosen cost-efficiently by evaluating the usefulness of the measurements for risk prediction. The usefulness is addressed by measuring the improvement in model discrimination between models using different amounts of longitudinal information. We use simulated follow-up data and data from the Finnish East-West study, a follow-up study with eight longitudinal covariate measurements carried out between 1959 and 1999. In a simulation study, we show how the variability and the hazard ratio of a time-varying covariate are connected to the importance of remeasurements. In the East-West study, it is seen that for older people, the risk predictions obtained using only every other measurement are almost equivalent to the predictions obtained using all eight measurements. Decisions about the study design have significant effects on the costs, and cost-efficiency can be improved by applying the measures of model discrimination to data from previous studies and simulations. Copyright © 2016 Elsevier Inc. All rights reserved.
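
    The "improvement in model discrimination" used to compare measurement schedules can be quantified with a concordance index (for a binary outcome this reduces to the AUC). A minimal, self-contained version, not the authors' code, with made-up risk scores:

```python
import numpy as np

def c_index(risk, event):
    """Probability that a randomly chosen case (event = 1) receives a higher
    predicted risk than a randomly chosen non-case (event = 0); ties count 0.5."""
    risk, event = np.asarray(risk, float), np.asarray(event, int)
    cases, controls = risk[event == 1], risk[event == 0]
    diff = cases[:, None] - controls[None, :]           # all case/control pairs
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

event = [1, 1, 1, 0, 0]
full = c_index([0.9, 0.8, 0.7, 0.2, 0.1], event)    # model using all measurements
sparse = c_index([0.9, 0.3, 0.7, 0.2, 0.4], event)  # model using fewer measurements
print(full, sparse)  # the gap between the two is the discrimination loss
```

    Computing this index for models fitted with every measurement versus every other measurement gives exactly the kind of cost-efficiency comparison the abstract describes.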

  14. Measures to summarize and compare the predictive capacity of markers.

    Science.gov (United States)

    Gu, Wen; Pepe, Margaret

    2009-10-01

    The predictive capacity of a marker in a population can be described using the population distribution of risk (Huang et al. 2007; Pepe et al. 2008a; Stern 2008). Virtually all standard statistical summaries of predictability and discrimination can be derived from it (Gail and Pfeiffer 2005). The goal of this paper is to develop methods for making inference about risk prediction markers using summary measures derived from the risk distribution. We describe some new clinically motivated summary measures and give new interpretations to some existing statistical measures. Methods for estimating these summary measures are described along with distribution theory that facilitates construction of confidence intervals from data. We show how markers and, more generally, how risk prediction models, can be compared using clinically relevant measures of predictability. The methods are illustrated by application to markers of lung function and nutritional status for predicting subsequent onset of major pulmonary infection in children suffering from cystic fibrosis. Simulation studies show that methods for inference are valid for use in practice.

  15. USING MCSST METHOD FOR MEASURING SEA SURFACE TEMPERATURE WITH MODIS IMAGERY AND MODELING AND PREDICTION OF REGIONAL VARIATIONS WITH LEAST SQUARES METHOD (CASE STUDY: PERSIAN GULF, IRAN)

    Directory of Open Access Journals (Sweden)

    M. S. Pakdaman

    2013-10-01

    Full Text Available Nowadays, many researchers in the area of thermal remote sensing applications believe in the necessity of modeling in environmental studies. Modeling of remotely sensed data, and the ability to precisely predict the variation of various phenomena, has persuaded experts to use this knowledge increasingly. Selection of a suitable model is the basis of modeling and a defining parameter, so the model must first be well identified. The least squares method is used for data fitting; under least squares, the best-fit model is the one that minimizes the sum of squared residuals. In this research, carried out to model variations of the Persian Gulf surface temperature, sea surface temperature data were gathered with the multi-channel method using MODIS Terra satellite imagery. All temperature data were recorded over a ten-year period of winters, from December 2003 to January 2013, on cells with dimensions of 20 × 20 km covering an area of 400 km2. Subsequently, 12,400 temperature samples were obtained and their variation trends examined over time. Sixteen mathematical models were then created. After model creation, the variance of each model against ground truth was calculated for model testing; the lowest variance was found for the combined models of degree 1 to degree 4. The results show that the outputs of the combined models of degree 1 to degree 3 and degree 1 to degree 4 do not differ significantly, so implementation of degree 4 does not seem necessary. Applying trigonometric functions to the variables increased the variance of the output data. Comparison of the most suitable model with the ground truth showed a variance of just 1°. After elimination of blunders, the number of samples was reduced to 11,600, and all of the created models were run again on the variables. Also in this case, the highest variance was obtained for the models
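
    The model-selection procedure described above, fitting polynomials of increasing degree by least squares and comparing residual variance against ground truth, can be sketched with synthetic data (the series, noise level, and degrees below are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 200)  # time axis (arbitrary units)
# Synthetic SST-like series: slow quadratic trend plus observation noise
sst = 22.0 + 0.3 * t - 0.02 * t**2 + rng.normal(0.0, 0.2, t.size)

# Fit candidate polynomial models of degree 1..4 by least squares and
# score each by the variance of its residuals against the "ground truth"
resid_vars = []
for deg in range(1, 5):
    coeffs = np.polyfit(t, sst, deg)
    resid = sst - np.polyval(coeffs, t)
    resid_vars.append(resid.var())
    print(deg, round(resid_vars[-1], 4))
```

    As in the study, once the model family matches the underlying trend (here, degree 2), higher degrees reduce the residual variance only marginally, so the extra terms are unnecessary.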

  16. Measurement and prediction of enteric methane emission

    Science.gov (United States)

    Sejian, Veerasamy; Lal, Rattan; Lakritz, Jeffrey; Ezeji, Thaddeus

    2011-01-01

    The greenhouse gas (GHG) emissions from the agricultural sector account for about 25.5% of total global anthropogenic emissions. While CO2 receives the most attention as a factor relative to global warming, CH4, N2O and chlorofluorocarbons (CFCs) also cause significant radiative forcing. With a global warming potential of 25 relative to CO2, CH4 is one of the most important GHGs. This article reviews the prediction models, estimation methodology and strategies for reducing enteric CH4 emissions. Emission of CH4 in ruminants differs between developed and developing countries, depending on factors like animal species, breed, pH of rumen fluid, ratio of acetate:propionate, methanogen population, composition of diet and amount of concentrate fed. Among the ruminant animals, cattle contribute the most towards the greenhouse effect through methane emission, followed by sheep, goats and buffaloes, respectively. The estimated CH4 emission rates for cattle, buffalo, sheep and goats in developed countries are 150.7, 137, 21.9 and 13.7 g/animal/day, respectively. However, the estimated rates in developing countries are significantly lower, at 95.9 and 13.7 g/animal/day for cattle and sheep, respectively. There is strong interest in developing new CH4 prediction models and improving existing ones to identify mitigation strategies for reducing overall CH4 emissions. A synthesis of the available literature suggests that mechanistic models are superior to empirical models in accurately predicting CH4 emission from dairy farms. The latest development in prediction models is the integrated farm system model, which is a process-based whole-farm simulation technique. Several techniques are used to quantify enteric CH4 emissions, ranging from whole-animal chambers to sulfur hexafluoride (SF6) tracer techniques. The latest technology developed to estimate CH4 more accurately is the micrometeorological mass difference technique. 
Because the conditions under which

  17. Strontium-90 Biokinetics from Simulated Wound Intakes in Non-human Primates Compared with Combined Model Predictions from National Council on Radiation Protection and Measurements Report 156 and International Commission on Radiological Protection Publication 67.

    Science.gov (United States)

    Allen, Mark B; Brey, Richard R; Gesell, Thomas; Derryberry, Dewayne; Poudel, Deepesh

    2016-01-01

    The goal of this study was to evaluate the predictive capabilities of the National Council on Radiation Protection and Measurements (NCRP) wound model coupled to the International Commission on Radiological Protection (ICRP) systemic model for 90Sr-contaminated wounds using non-human primate data. Studies were conducted on 13 macaque (Macaca mulatta) monkeys, each receiving a one-time intramuscular injection of 90Sr solution. Urine and feces samples were collected up to 28 d post-injection and analyzed for 90Sr activity. Integrated Modules for Bioassay Analysis (IMBA) software was configured with default NCRP and ICRP model transfer coefficients to calculate the predicted 90Sr intake via the wound based on the radioactivity measured in bioassay samples. The default parameters of the combined models produced adequate fits of the bioassay data, but maximum likelihood predictions of intake were overestimated by a factor of 1.0 to 2.9 when bioassay data were used as predictors. Skeletal retention was also over-predicted, suggesting an underestimation of the excretion fraction. Bayesian statistics and Monte Carlo sampling were applied using IMBA to vary the default parameters, producing updated transfer coefficients for individual monkeys that improved the model fit and the predicted intake and skeletal retention. The geometric means of the optimized transfer rates for the 11 cases were computed, and these optimized sample population parameters were tested on two independent monkey cases and on the 11 monkeys from which the optimized parameters were derived. The optimized model parameters did not improve the model fit in most cases, and the predicted skeletal activity produced improvements in three of the 11 cases. The optimized parameters improved the predicted intake in all cases but still over-predicted the intake by an average of 50%. The results suggest that the modified transfer rates were not always an improvement over the default NCRP and ICRP model values.

  18. PREDICT : model for prediction of survival in localized prostate cancer

    NARCIS (Netherlands)

    Kerkmeijer, Linda G W; Monninkhof, Evelyn M.; van Oort, Inge M.; van der Poel, Henk G.; de Meerleer, Gert; van Vulpen, Marco

    2016-01-01

    Purpose: Current models for prediction of prostate cancer-specific survival do not incorporate all present-day interventions. In the present study, a pre-treatment prediction model for patients with localized prostate cancer was developed.Methods: From 1989 to 2008, 3383 patients were treated with I

  19. Genetic models of homosexuality: generating testable predictions.

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-12-22

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.

  20. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Novel modeling of task versus rest brain state predictability using a dynamic time warping spectrum: comparisons and contrasts with other standard measures of brain dynamics

    Directory of Open Access Journals (Sweden)

    Martin eDinov

    2016-05-01

    Full Text Available Dynamic time warping, or DTW, is a powerful and domain-general sequence alignment method for computing a similarity measure. Dynamic programming-based techniques such as DTW are now the backbone and driver of most bioinformatics methods and discoveries. In neuroscience it has had far less use, though this has begun to change. We wanted to explore new ways of applying DTW, not simply as a measure with which to cluster or compare similarity between features, but in a conceptually different way. We have used DTW to provide a more interpretable spectral description of the data, compared to standard approaches such as the Fourier and related transforms. The DTW approach and the standard discrete Fourier transform (DFT) are assessed against benchmark measures of neural dynamics. These include EEG microstates, EEG avalanches, the sum squared error (SSE) from a multilayer perceptron (MLP) prediction of the EEG timeseries, and the simultaneously acquired fMRI BOLD signal. We explored the relationships between these variables of interest in an EEG-fMRI dataset acquired during a standard cognitive task, which allowed us to explore how DTW performs differentially in different task settings. We found that despite strong correlations between the DTW and DFT spectra, DTW was a better predictor for almost every measure of brain dynamics. Using these DTW measures, we show that predictability is almost always higher in task than in rest states, which is consistent with other theoretical and empirical findings, providing additional evidence for the utility of the DTW approach.
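
    For readers unfamiliar with DTW, the core dynamic-programming recurrence is compact. Below is a minimal distance implementation (the standard textbook algorithm, not the authors' spectral variant):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between 1-D sequences a and b,
    using absolute difference as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of match, insertion, or deletion
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

a = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
b = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0]  # same shape, shifted by one sample
print(dtw_distance(a, b))           # 0.0 -- the warping absorbs the shift
```

    The example illustrates why DTW is attractive for neural timeseries: the two sequences have a positive Euclidean distance, but DTW recognizes them as the same shape under a temporal shift.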

  2. Predictive Modeling of Cardiac Ischemia

    Science.gov (United States)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  3. Numerical weather prediction model tuning via ensemble prediction system

    Science.gov (United States)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and appears very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
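
    The EPPES idea, perturb a tunable parameter across ensemble members, score each member against verifying observations, and feed the scores back into the proposal distribution, can be illustrated with a toy model. The linear "forecast model", noise level, and moment-matching update below are simplifications we chose for illustration, not the exact EPPES algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast(theta, x):
    """Toy 'forecast model' with one tunable parameter theta."""
    return theta * x

true_theta = 2.0
x_obs = np.linspace(1.0, 5.0, 20)
y_obs = forecast(true_theta, x_obs) + rng.normal(0.0, 0.1, x_obs.size)

mu, sigma = 0.0, 2.0                    # proposal distribution for theta
for cycle in range(10):                 # forecast / verification cycles
    thetas = rng.normal(mu, sigma, 50)  # one parameter draw per ensemble member
    sse = ((forecast(thetas[:, None], x_obs) - y_obs) ** 2).sum(axis=1)
    logw = -0.5 * sse / 0.1**2          # Gaussian log-likelihood of each member
    w = np.exp(logw - logw.max())       # stabilized importance weights
    w /= w.sum()
    mu = float(np.sum(w * thetas))      # re-fit the proposal to the good members
    sigma = max(float(np.sqrt(np.sum(w * (thetas - mu) ** 2))), 0.1)

print(round(mu, 2))  # the proposal mean converges toward the true value 2.0
```

    The floor on sigma keeps the proposal from collapsing prematurely, mirroring the need to retain parameter spread across ensemble cycles.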

  4. Predicted and measured total electron content over Havana

    Science.gov (United States)

    Ezquer, R. G.; Jakowski, N.; Jadur, C. A.

    1997-03-01

    The total electron content (TEC) of the ionosphere is a parameter often used to check the validity of ionospheric models. Moreover, it is very important for systems which use transionospheric radio waves. In this paper, TEC measurements obtained with the Faraday technique at Havana (23.1°N, 277.5°E, geomagnetic latitude: 34.2°), Cuba, during the 1982-1983 period are used to check the validity of two ionospheric models for predicting TEC over this station. Havana is a low-latitude station, but it is not affected by the equatorial anomaly (EA). The models considered are the International Reference Ionosphere (IRI) and a Chapman layer with scale height equal to the atomic oxygen scale height (CHOEA). Measured values of the critical frequency of the F2 ionospheric region obtained at San José de Las Lajas (23.0°N, 277.9°E, geomagnetic latitude: 34.1°), Cuba, are used as inputs to the models. The results show that the models perform differently as predictors of TEC over Havana. In general, the IRI gives very good predictions for the period around the daily minimum, while for the hours of maximum TEC, CHOEA is a very good predictor. Taking into account the simplicity of the calculations with CHOEA, this model would be an adequate alternative for predicting TEC over stations which, like Havana, are not affected by the EA, at least for the solar activity conditions considered.
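
    Part of CHOEA's appeal is that a Chapman layer has a closed-form TEC: integrating N(h) = Nmax exp(0.5 (1 - z - e^(-z))), with z = (h - hmax)/H, over height gives TEC = Nmax H sqrt(2 pi e) ≈ 4.13 Nmax H. A quick numerical check (the peak parameters below are illustrative, not Havana values):

```python
import numpy as np

def chapman(h, n_max, h_max, H):
    """Chapman-layer electron density with peak n_max at h_max, scale height H."""
    z = (h - h_max) / H
    return n_max * np.exp(0.5 * (1.0 - z - np.exp(-z)))

n_max, h_max, H = 1.0e12, 300e3, 60e3   # m^-3, m, m (illustrative values)
h = np.linspace(100e3, 2000e3, 200_001)
dh = h[1] - h[0]
tec = np.sum(chapman(h, n_max, h_max, H)) * dh   # vertical TEC, electrons/m^2

# Analytic result: TEC / (n_max * H) = sqrt(2*pi*e) ~ 4.13
print(round(tec / (n_max * H), 3))
```

    So once Nmax (from the measured foF2) and the scale height are fixed, the CHOEA TEC prediction is a single multiplication, which is the simplicity the abstract highlights.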

  5. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...

  6. Property predictions using microstructural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.G. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]. E-mail: wangk2@rpi.edu; Guo, Z. [Sente Software Ltd., Surrey Technology Centre, 40 Occam Road, Guildford GU2 7YG (United Kingdom); Sha, W. [Metals Research Group, School of Civil Engineering, Architecture and Planning, The Queen' s University of Belfast, Belfast BT7 1NN (United Kingdom); Glicksman, M.E. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States); Rajan, K. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)

    2005-07-15

    Precipitation hardening in an Fe-12Ni-6Mn maraging steel during overaging is quantified. First, applying our recent kinetic model of coarsening [Phys. Rev. E, 69 (2004) 061507] and incorporating the Ashby-Orowan relationship, we link quantifiable aspects of the microstructures of these steels to their mechanical properties, including especially the hardness. Specifically, hardness measurements allow calculation of the precipitate size as a function of time and temperature through the Ashby-Orowan relationship. Second, calculated precipitate sizes and thermodynamic data determined with Thermo-Calc© are used with our recent kinetic coarsening model to extract diffusion coefficients during overaging from hardness measurements. Finally, employing more accurate diffusion parameters, we determined the hardness of these alloys independently from theory, and found agreement with experimental hardness data. Diffusion coefficients determined during overaging of these steels are notably higher than those found during aging, an observation suggesting that precipitate growth during aging and precipitate coarsening during overaging are not controlled by the same diffusion mechanism.

  7. Predictability of the Indian Ocean Dipole in the coupled models

    Science.gov (United States)

    Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao

    2017-03-01

    In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, normally defined as an anomaly correlation coefficient greater than 0.5, only at around 2-3 month leads. This is mainly because there are more false alarms in predictions as lead time increases. The DMI predictability has significant seasonal variation, and the predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with model development and initialization improvement, the prediction of IOD onset is likely to be improved, but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s, and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.

  8. Comparison of Northern Ireland radon maps based on indoor radon measurements and geology with maps derived by predictive modelling of airborne radiometric and ground permeability data.

    Science.gov (United States)

    Appleton, J D; Miles, J C H; Young, M

    2011-03-15

    Publicly available information about radon potential in Northern Ireland is currently based on indoor radon results averaged over 1-km grid squares, an approach that does not take into account the geological origin of the radon. This study describes a spatially more accurate estimate of the radon potential of Northern Ireland using an integrated radon potential mapping method based on indoor radon measurements and geology that was originally developed for mapping radon potential in England and Wales. A refinement of this method was also investigated using linear regression analysis of a selection of relevant airborne and soil geochemical parameters from the Tellus Project. The most significant independent variables were found to be eU, a parameter derived from airborne gamma spectrometry measurements of radon decay products in the top layer of soil and exposed bedrock, and the permeability of the ground. The radon potential map generated from the Tellus data agrees in many respects with the map based on indoor radon data and geology but there are several areas where radon potential predicted from the airborne radiometric and permeability data is substantially lower. This under-prediction could be caused by the radon concentration being lower in the top 30 cm of the soil than at greater depth, because of the loss of radon from the surface rocks and soils to air. Copyright © 2011. Published by Elsevier B.V.

  9. Predictive Model Assessment for Count Data

    Science.gov (United States)

    2007-09-05

    critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany, 1998–2002. We consider a recent suggestion by Baker and…

  10. Testing and analysis of internal hardwood log defect prediction models

    Science.gov (United States)

    R. Edward. Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  11. Measurement and prediction of voice support and room gain

    DEFF Research Database (Denmark)

    Pelegrin Garcia, David; Brunskog, Jonas; Lyberg-Åhlander, Viveka;

    2012-01-01

    properties for a speaker: Voice support and room gain. This paper describes the measurement method for these two parameters and presents a prediction model for voice support and room gain derived from the diffuse field theory. The voice support for medium-sized classrooms with volumes between 100 and 250 m3...... and good acoustical quality lies in the range between 14 and 9 dB, whereas the room gain is in the range between 0.2 and 0.5 dB. The prediction model for voice support describes the measurements in the classrooms with a coefficient of determination of 0.84 and a standard deviation of 1.2 dB....

  12. Predictive modeling of dental pain using neural network.

    Science.gov (United States)

    Kim, Eun Yeob; Lim, Kun Ok; Rhee, Hyun Sill

    2009-01-01

    The mouth, used for ingesting food, is among the most basic and important parts of the body. In this study, dental pain was predicted with a neural network model. The resulting predictive model of dental pain factors had a fitness of 80.0%. For people identified by the neural network model as likely to experience dental pain, preventive measures, including proper eating habits, education on oral hygiene, and stress relief, should precede any dental treatment.

  13. A new measure-correlate-predict approach for resource assessment

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Landberg, L. [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark); Madsen, H. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    In order to find reasonable candidate sites for wind farms, it is of great importance to be able to calculate the wind resource at potential sites. One way to solve this problem is to measure wind speed and direction at the site, and use these measurements to predict the resource. If the measurements at the potential site cover less than, e.g., one year, which will most likely be the case, it is not possible to get a reliable estimate of the long-term resource using this approach. If long-term measurements from, e.g., some nearby meteorological station are available, however, then statistical methods can be used to find a relation between the measurements at the site and at the meteorological station. This relation can then be used to transform the long-term measurements to the potential site, and the resource can be calculated using the transformed measurements. Here, a varying-coefficient model, estimated using local regression, is applied in order to establish a relation between the measurements. The approach is evaluated using measurements from two sites, located approximately two kilometres apart, and the results show that the resource in this case can be predicted accurately, although this approach has serious shortcomings. (au)
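
    A varying-coefficient relation estimated by local regression can be sketched as kernel-weighted linear least squares: at each query point, the site measurement is regressed on the concurrent reference-station measurement, with weights that decay with distance from the query. The synthetic data and Gaussian kernel below are our illustration, not the paper's estimator:

```python
import numpy as np

def local_linear(x_ref, y_site, x0, bandwidth):
    """Fit y ~ a + b*x by weighted least squares with weights centred at x0,
    and return the fitted value at x0 (a varying-coefficient prediction)."""
    w = np.exp(-0.5 * ((x_ref - x0) / bandwidth) ** 2)   # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x_ref), x_ref])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y_site))
    return beta[0] + beta[1] * x0

rng = np.random.default_rng(3)
# Concurrent measurement period: reference mast vs. candidate site (synthetic)
x_ref = rng.uniform(0.0, 15.0, 500)                              # reference wind speed, m/s
y_site = 1.2 * x_ref + 0.5 + rng.normal(0.0, 0.3, x_ref.size)    # site wind speed, m/s

# Transform a "long-term" reference observation to the candidate site
pred = local_linear(x_ref, y_site, 8.0, bandwidth=1.5)
print(round(pred, 2))  # close to the underlying relation 1.2 * 8 + 0.5 = 10.1
```

    Applying this transformation to every long-term reference observation yields a synthetic long-term site series, from which the resource can be calculated.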

  14. Predictive modeling and reducing cyclic variability in autoignition engines

    Energy Technology Data Exchange (ETDEWEB)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  15. Examining the integrity of measurement of cognitive abilities in the prediction of achievement: Comparisons and contrasts across variables from higher-order and bifactor models.

    Science.gov (United States)

    Benson, Nicholas F; Kranzler, John H; Floyd, Randy G

    2016-10-01

    Prior research examining cognitive ability and academic achievement relations has been based on different theoretical models, has employed both latent and observed variables, and has used a variety of analytic methods. Not surprisingly, results have been inconsistent across studies. The aims of this study were to (a) examine how relations between psychometric g, Cattell-Horn-Carroll (CHC) broad abilities, and academic achievement differ across higher-order and bifactor models; (b) examine how well various types of observed scores corresponded with latent variables; and (c) compare two types of observed scores (i.e., refined and non-refined factor scores) as predictors of academic achievement. Results suggest that cognitive-achievement relations vary across theoretical models and that both types of factor scores tend to correspond well with the models on which they are based. However, orthogonal refined factor scores (derived from a bifactor model) have the advantage of controlling for multicollinearity arising from the measurement of psychometric g across all measures of cognitive abilities. Results indicate that the refined factor scores provide more precise representations of their targeted constructs than non-refined factor scores and maintain close correspondence with the cognitive-achievement relations observed for latent variables. Thus, we argue that orthogonal refined factor scores provide more accurate representations of the relations between CHC broad abilities and achievement outcomes than non-refined scores do. Further, the use of refined factor scores addresses calls for the application of scores based on latent variable models.

  16. Automatic measurement of vowel duration via structured prediction

    Science.gov (United States)

    Adi, Yossi; Keshet, Joseph; Cibelli, Emily; Gustafson, Erin; Clopper, Cynthia; Goldrick, Matthew

    2016-12-01

    A key barrier to making phonetic studies scalable and replicable is the need to rely on subjective, manual annotation. To help meet this challenge, a machine learning algorithm was developed for automatic measurement of a widely used phonetic measure: vowel duration. Manually-annotated data were used to train a model that takes as input an arbitrary length segment of the acoustic signal containing a single vowel that is preceded and followed by consonants and outputs the duration of the vowel. The model is based on the structured prediction framework. The input signal and a hypothesized set of a vowel's onset and offset are mapped to an abstract vector space by a set of acoustic feature functions. The learning algorithm is trained in this space to minimize the difference in expectations between predicted and manually-measured vowel durations. The trained model can then automatically estimate vowel durations without phonetic or orthographic transcription. Results comparing the model to three sets of manually annotated data suggest it out-performed the current gold standard for duration measurement, an HMM-based forced aligner (which requires orthographic or phonetic transcription as an input).

  17. The regional prediction model of PM10 concentrations for Turkey

    Science.gov (United States)

    Güler, Nevin; Güneri İşçi, Öznur

    2016-11-01

    This study aims to develop a regional model for weekly PM10 concentrations measured at air pollution monitoring stations in Turkey. There are seven geographical regions in Turkey and numerous monitoring stations in each region. Fitting a model conventionally for each monitoring station requires much labor and time, and prediction quality may degrade when the number of measurements obtained from a given monitoring station is small. Besides, prediction models obtained this way only reflect the air pollutant behavior of a small area. This study uses the Fuzzy C-Auto Regressive Model (FCARM) to find a prediction model that reflects the regional behavior of weekly PM10 concentrations. The superiority of FCARM is its ability to simultaneously consider PM10 concentrations measured at all monitoring stations in the specified region. Besides, it works even if the numbers of measurements obtained from the monitoring stations are different or small. In order to evaluate the performance of FCARM, it is executed for all regions in Turkey and the prediction results are compared to statistical autoregressive (AR) models fitted for each station separately. According to the Mean Absolute Percentage Error (MAPE) criterion, FCARM provides better predictions with fewer models.
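The MAPE criterion used to compare the regional and per-station models is straightforward to compute; a minimal sketch with invented PM10 numbers:

```python
# Illustrative comparison of two prediction series by Mean Absolute
# Percentage Error (MAPE), the criterion used in the study. All values
# below are made up for demonstration.

def mape(actual, predicted):
    """MAPE in percent; assumes no zero actual values."""
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted))

pm10_obs      = [55.0, 62.0, 48.0, 70.0, 66.0]   # weekly PM10, ug/m3
pm10_regional = [52.0, 65.0, 50.0, 68.0, 63.0]   # regional (FCARM-style) model
pm10_station  = [49.0, 70.0, 44.0, 75.0, 60.0]   # per-station AR-style model

# A lower MAPE means the model tracks the observations more closely.
regional_better = mape(pm10_obs, pm10_regional) < mape(pm10_obs, pm10_station)
```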

  18. Prediction Model of Sewing Technical Condition by Grey Neural Network

    Institute of Scientific and Technical Information of China (English)

    DONG Ying; FANG Fang; ZHANG Wei-yuan

    2007-01-01

    The grey system theory and artificial neural network technology were applied to predict the sewing technical condition. Representative parameters, such as needle and stitch, were selected. A prediction model was established based on the different fabrics' mechanical properties, measured by the KES instrument. Grey relational degree analysis was applied to choose the input parameters of the neural network. The results showed that the prediction model has good precision: the average relative error was 4.08% for needle and 4.25% for stitch.
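The grey relational degree used here to rank candidate inputs can be sketched as below. The series values are invented, and mean-based normalization with the customary distinguishing coefficient rho = 0.5 is assumed; the paper's exact normalization may differ.

```python
# Sketch of grey relational degree analysis: rank candidate input series by
# how closely their (normalized) shape follows a reference series. A larger
# degree indicates a stronger relation. Data and names are illustrative.

def grey_relational_degrees(reference, candidates, rho=0.5):
    def norm(series):
        m = sum(series) / len(series)
        return [v / m for v in series]            # mean-based scaling
    ref = norm(reference)
    deltas = [[abs(r - x) for r, x in zip(ref, norm(c))] for c in candidates]
    flat = [d for row in deltas for d in row]
    dmin, dmax = min(flat), max(flat)             # global extremes
    return [sum((dmin + rho * dmax) / (d + rho * dmax) for d in row) / len(row)
            for row in deltas]

stitch_density = [10, 12, 14, 16, 18]          # reference series
fabric_shear   = [0.8, 0.9, 1.1, 1.2, 1.4]     # candidate input (tracks ref)
fabric_weight  = [120, 80, 150, 90, 200]       # candidate input (noisier)

degrees = grey_relational_degrees(stitch_density, [fabric_shear, fabric_weight])
# The candidate with the larger degree would be preferred as a network input.
```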

  19. Evaluation of Fast-Time Wake Vortex Prediction Models

    Science.gov (United States)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.

  20. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics under different stormy conditions in the North Sea, and were compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
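The core mechanism — phase-space reconstruction by time-delay embedding, then prediction from dynamical neighbors — can be sketched with a single nearest neighbor. The embedding dimension, delay, and the noiseless periodic surrogate signal are illustrative choices, far simpler than the paper's optimized adaptive local models.

```python
# Minimal sketch of the chaotic-model idea: reconstruct a phase space by
# time-delay embedding, find the nearest dynamical neighbor of the current
# state, and predict the next value from that neighbor's successor.
import math

def embed(series, dim, tau):
    """Time-delay embedding: vectors (x[t], x[t-tau], ..., x[t-(dim-1)tau])."""
    start = (dim - 1) * tau
    return [[series[t - k * tau] for k in range(dim)]
            for t in range(start, len(series))]

def predict_next(series, dim=3, tau=1):
    vecs = embed(series, dim, tau)
    query, history = vecs[-1], vecs[:-1]
    # Nearest neighbor of the current state in the reconstructed space...
    dists = [math.dist(query, v) for v in history]
    i = dists.index(min(dists))
    # ...and that neighbor's successor observation is the local prediction.
    offset = (dim - 1) * tau
    return series[offset + i + 1]

# A noiseless periodic surrogate standing in for an observed surge signal.
surge = [math.sin(0.4 * t) for t in range(60)]
pred = predict_next(surge)
```

In practice the local model would average or regress over several neighbors rather than copy a single successor, which is where the "adaptive" part of the paper's method comes in.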

  1. Nonlinear chaotic model for predicting storm surges

    NARCIS (Netherlands)

    Siek, M.; Solomatine, D.P.

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables.

  2. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain,...

  3. Measuring and modelling concurrency

    Directory of Open Access Journals (Sweden)

    Larry Sawers

    2013-02-01

    This article explores three critical topics discussed in the recent debate over concurrency (overlapping sexual partnerships): measurement of the prevalence of concurrency, mathematical modelling of concurrency and HIV epidemic dynamics, and measuring the correlation between HIV and concurrency. The focus of the article is the concurrency hypothesis – the proposition that a presumed high prevalence of concurrency explains sub-Saharan Africa's exceptionally high HIV prevalence. Recent surveys using improved questionnaire design show reported concurrency ranging from 0.8% to 7.6% in the region. Even after adjusting for plausible levels of reporting errors, appropriately parameterized sexual network models of HIV epidemics do not generate sustainable epidemic trajectories (i.e., avoid epidemic extinction) at the levels of concurrency found in recent surveys in sub-Saharan Africa. Efforts to support the concurrency hypothesis with a statistical correlation between HIV incidence and concurrency prevalence have not yet been successful. Two decades of efforts to find evidence in support of the concurrency hypothesis have failed to build a convincing case.

  4. How to Establish Clinical Prediction Models

    Directory of Open Access Journals (Sweden)

    Yong-ho Lee

    2016-03-01

    A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models, with comparable examples from real practice. After model development and vigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice.

  5. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters for attenuation relations are peak ground acceleration and spectral acceleration, because these parameters give useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and also from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.

  6. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which are used to predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Information Surveillance and Reconnaissance (ISR) since they allow an estimation of regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  7. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  8. Linear and non-linear bias: predictions versus measurements

    Science.gov (United States)

    Hoffmann, K.; Bel, J.; Gaztañaga, E.

    2017-02-01

    We study the linear and non-linear bias parameters which determine the mapping between the distributions of galaxies and the full matter density fields, comparing different measurements and predictions. Associating galaxies with dark matter haloes in the Marenostrum Institut de Ciències de l'Espai (MICE) Grand Challenge N-body simulation, we directly measure the bias parameters by comparing the smoothed density fluctuations of haloes and matter in the same region at different positions as a function of smoothing scale. Alternatively, we measure the bias parameters by matching the probability distributions of halo and matter density fluctuations, which can be applied to observations. These direct bias measurements are compared to corresponding measurements from two-point and different third-order correlations, as well as predictions from the peak-background model, which we presented in previous papers using the same data. We find an overall variation of the linear bias measurements and predictions of ~5 per cent with respect to results from two-point correlations for different halo samples with masses between ~10^12 and 10^15 h^-1 M⊙ at the redshifts z = 0.0 and 0.5. Variations between the second- and third-order bias parameters from the different methods show larger variations, but with consistent trends in mass and redshift. The various bias measurements reveal a tight relation between the linear and the quadratic bias parameters, which is consistent with results from the literature based on simulations with different cosmologies. Such a universal relation might improve constraints on cosmological models, derived from second-order clustering statistics at small scales or higher order clustering statistics.
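The direct bias measurement amounts to fitting the local relation delta_h = b1*delta_m + (b2/2)*delta_m^2 between smoothed halo and matter fluctuations. A least-squares sketch with synthetic, noiseless fluctuation values generated from known b1 and b2:

```python
# Sketch of the direct bias measurement: given smoothed matter (delta_m) and
# halo (delta_h) density fluctuations in the same cells, fit the quadratic
# bias relation delta_h = b1*delta_m + (b2/2)*delta_m**2 by least squares.
# The fluctuation values are synthetic, generated with known b1, b2.

def fit_bias(delta_m, delta_h):
    # Normal equations for the two-parameter model y = b1*x + (b2/2)*x^2.
    s11 = sum(x * x for x in delta_m)
    s12 = sum(x ** 3 for x in delta_m)
    s22 = sum(x ** 4 for x in delta_m)
    t1 = sum(x * y for x, y in zip(delta_m, delta_h))
    t2 = sum(x * x * y for x, y in zip(delta_m, delta_h))
    det = s11 * s22 - s12 * s12
    b1 = (t1 * s22 - t2 * s12) / det
    half_b2 = (s11 * t2 - s12 * t1) / det
    return b1, 2.0 * half_b2

true_b1, true_b2 = 1.4, -0.3
delta_m = [-0.6, -0.4, -0.2, -0.1, 0.1, 0.2, 0.4, 0.6, 0.8]
delta_h = [true_b1 * d + 0.5 * true_b2 * d * d for d in delta_m]
b1, b2 = fit_bias(delta_m, delta_h)
```

With real simulation data the fit is repeated per smoothing scale and halo mass bin, which is where the scale and mass dependence discussed above comes from.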

  9. A Prediction Model of the Capillary Pressure J-Function

    Science.gov (United States)

    Xu, W. S.; Luo, P. Y.; Sun, L.; Lin, N.

    2016-01-01

    The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model is presented based on a capillary pressure model, in which the J-function is a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and more representative results. PMID:27603701
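A power-law form J = a*Sw^b is linear in log-log space, so its parameters can be recovered by ordinary least squares on logs. The (Sw, J) pairs below are synthetic, generated from known coefficients purely to illustrate the fit:

```python
# Fit a power-function J-function, J = a * Sw**b, via least squares on
# log J = log a + b * log Sw. Sample data are synthetic.
import math

def fit_power_law(sw, j):
    lx = [math.log(s) for s in sw]
    ly = [math.log(v) for v in j]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
    a = math.exp(my - b * mx)
    return a, b

sw = [0.2, 0.3, 0.4, 0.6, 0.8, 1.0]            # water saturation
j_obs = [0.5 * s ** -1.2 for s in sw]          # generated with a=0.5, b=-1.2
a, b = fit_power_law(sw, j_obs)
```

The negative exponent reflects the physical expectation that capillary pressure (and hence J) rises as water saturation falls.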

  10. Comparison between genetic parameters of cheese yield and nutrient recovery or whey loss traits measured from individual model cheese-making methods or predicted from unprocessed bovine milk samples using Fourier-transform infrared spectroscopy.

    Science.gov (United States)

    Bittante, G; Ferragina, A; Cipolat-Gotet, C; Cecchinato, A

    2014-10-01

    Cheese yield is an important technological trait in the dairy industry. The aim of this study was to infer the genetic parameters of some cheese yield-related traits predicted using Fourier-transform infrared (FTIR) spectral analysis and compare the results with those obtained using an individual model cheese-producing procedure. A total of 1,264 model cheeses were produced using 1,500-mL milk samples collected from individual Brown Swiss cows, and individual measurements were taken for 10 traits: 3 cheese yield traits (fresh curd, curd total solids, and curd water as a percent of the weight of the processed milk), 4 milk nutrient recovery traits (fat, protein, total solids, and energy of the curd as a percent of the same nutrient in the processed milk), and 3 daily cheese production traits per cow (fresh curd, total solids, and water weight of the curd). Each unprocessed milk sample was analyzed using a MilkoScan FT6000 (Foss, Hillerød, Denmark) over the spectral range from 5,000 to 900 cm(-1). The FTIR spectrum-based prediction models for the previously mentioned traits were developed using modified partial least-squares regression. Cross-validation of the whole data set yielded coefficients of determination between the predicted and measured values of 0.65 to 0.95 for all traits, except for the recovery of fat (0.41). A 3-fold external validation was also used, in which the available data were partitioned into 2 subsets: a training set (one-third of the herds) and a testing set (two-thirds). The training set was used to develop calibration equations, whereas the testing subsets were used for external validation of the calibration equations and to estimate the heritabilities and genetic correlations of the measured and FTIR-predicted phenotypes. The coefficients of determination between the predicted and measured values in cross-validation results obtained from the training sets were very similar to those obtained from the whole data set.
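The validation logic above — calibrate on a training subset, predict a held-out subset, report the coefficient of determination (R2) between predicted and measured values — can be sketched with a single linear predictor standing in for the partial least-squares regression on spectra. All data below are hypothetical:

```python
# Sketch of train/test calibration and validation by R2. A single
# hypothetical spectral index replaces the full FTIR spectrum.

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((u - mx) * (v - my) for u, v in zip(x, y)) / \
        sum((u - mx) ** 2 for u in x)
    return my - b * mx, b

def r_squared(measured, predicted):
    m = sum(measured) / len(measured)
    ss_res = sum((a - p) ** 2 for a, p in zip(measured, predicted))
    ss_tot = sum((a - m) ** 2 for a in measured)
    return 1 - ss_res / ss_tot

# Hypothetical spectral index vs measured cheese yield (%), split in half.
spec      = [1.0, 1.2, 1.5, 1.7, 2.0, 2.2, 2.5, 2.8]
yield_pct = [12.1, 12.9, 14.2, 14.8, 16.1, 16.8, 18.2, 19.3]
train_x, test_x = spec[::2], spec[1::2]
train_y, test_y = yield_pct[::2], yield_pct[1::2]

a, b = fit_line(train_x, train_y)                 # calibration
r2 = r_squared(test_y, [a + b * x for x in test_x])  # external validation
```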

  11. Standard Model measurements with the ATLAS detector

    Directory of Open Access Journals (Sweden)

    Hassani Samira

    2015-01-01

    Various Standard Model measurements have been performed in proton-proton collisions at a centre-of-mass energy of √s = 7 and 8 TeV using the ATLAS detector at the Large Hadron Collider. A review of a selection of the latest results of electroweak measurements, W/Z production in association with jets, jet physics and soft QCD is given. Measurements are in general found to be well described by the Standard Model predictions.

  12. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest ... computational resources. The identification method is suitable for predictive control.

  13. Insightful Measures of Predictive Skill in Seasonal Forecasts

    Science.gov (United States)

    Du, H.; Niehorster, F.; Smith, L. A.

    2012-04-01

    Doubt is cast on the common claim that an ensemble of models outperforms a single best model, the source of this misperception being the use of inappropriate measures of skill. The use of such measures can lead to a loss of information presented to the decision maker. The skill of probability forecasts of the Nino 3.4 index and the Sea Surface Temperature (SST) in the Main Development Region (MDR), based upon the ENSEMBLES seasonal simulations, is considered. Issues in the interpretation of probability forecasts based on these multi-model ensemble simulations are addressed. The predictive distributions considered are formed by kernel dressing the ensemble and blending with the climatology; hindcasts from 1960 to 2005 are constructed. The sources of apparent skill (by typical linear skill measures like root mean square error and correlation) in distributions based on multi-model simulations are discussed, and it is demonstrated that the inclusion of "zero-skill" models in the long range can improve such scores. This casts doubt on one common justification for the claim that all models should be included in forming an operational PDF. True cross validation is also shown to be important, given the small sample size available in seasonal forecasting. Probabilistic skill is shown to be robust out to 8 months for the Nino 3.4 index and out to month 2 for MDR SSTs. Are ensembles of models more fit for decision making than an initial-condition ensemble of the "best" model? Results using a proper skill score show that the multi-model ensembles do not significantly outperform a single-model ensemble for Nino 3.4. Situations in which an ensemble over model structures outperforms a comparable ensemble using the "best" model structure are sought.
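Kernel dressing with climatology blending can be sketched as follows: each ensemble member becomes a Gaussian kernel, and the resulting mixture is mixed with a climatological density. The bandwidth, blend weight, and SST values are illustrative; in practice both parameters would be fitted by optimizing a proper score on hindcasts.

```python
# Sketch of kernel dressing: a forecast density built from Gaussian kernels
# centered on ensemble members, blended with a climatological density.
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def forecast_density(x, ensemble, climatology, bandwidth=0.3, alpha=0.8):
    kernel = sum(gauss(x, m, bandwidth) for m in ensemble) / len(ensemble)
    clim = sum(gauss(x, m, bandwidth) for m in climatology) / len(climatology)
    # alpha weights the ensemble information against climatology.
    return alpha * kernel + (1 - alpha) * clim

ensemble = [26.1, 26.4, 26.3, 26.8, 26.5]          # e.g. Nino 3.4 SST (deg C)
climatology = [25.0, 25.5, 26.0, 26.5, 27.0, 27.5]  # historical values
density_at_obs = forecast_density(26.4, ensemble, climatology)
```

Blending with climatology is what prevents an overconfident ensemble from assigning near-zero probability to outcomes it happened to miss, which is central to the proper-score comparisons described above.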

  14. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euro per km per year [1]. Aiming to reduce such maintenance expenditure, this paper presents a mathematical model based on Mixed Integer Programming (MIP) designed to optimize predictive railway tamping activities for ballasted track over a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time ... recovery of track quality after the tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is used over a period of two to four years in the proposed maintenance model. The total cost can be reduced by up to 50...

  15. Can foot anthropometric measurements predict dynamic plantar surface contact area?

    Directory of Open Access Journals (Sweden)

    Collins Natalie

    2009-10-01

    Abstract Background Previous studies have suggested that increased plantar surface area, associated with pes planus, is a risk factor for the development of lower extremity overuse injuries. The intent of this study was to determine whether a single foot anthropometric measure, or a combination of measures, could be used to predict plantar surface area. Methods Six foot measurements were collected on 155 subjects (97 females, 58 males, mean age 24.5 ± 3.5 years). The measurements, as well as one ratio, were entered into a stepwise regression analysis to determine the optimal set of measurements associated with total plantar contact area, either including or excluding the toe region. The predicted values were used to calculate plantar surface area and were compared to the actual values obtained dynamically using a pressure sensor platform. Results A three-variable model was found to describe the relationship between the foot measures/ratio and total plantar contact area (R2 = 0.77); a similar model described the contact area excluding the toe region (R2 = 0.76). Conclusion The results of this study indicate that the clinician can use a combination of simple, reliable, and time-efficient foot anthropometric measurements to explain over 75% of the plantar surface contact area, either including or excluding the toe region.

  16. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing predictive models.

  17. Childhood asthma prediction models: a systematic review.

    Science.gov (United States)

    Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup

    2015-12-01

    Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.

  18. Prediction of Gestational Diabetes by Measuring First Trimester ...

    African Journals Online (AJOL)

    Prediction of gestational diabetes by measuring first trimester maternal serum uric acid.

  19. Measured Copper Toxicity to Cnesterodon decemmaculatus (Pisces: Poeciliidae and Predicted by Biotic Ligand Model in Pilcomayo River Water: A Step for a Cross-Fish-Species Extrapolation

    Directory of Open Access Journals (Sweden)

    María Victoria Casares

    2012-01-01

    In order to determine copper toxicity (LC50) to a local species (Cnesterodon decemmaculatus) in South American Pilcomayo River water, and to evaluate a cross-fish-species extrapolation of the Biotic Ligand Model (BLM), a 96 h acute copper toxicity test was performed. The dissolved copper concentrations tested were 0.05, 0.19, 0.39, 0.61, 0.73, 1.01, and 1.42 mg Cu L-1. The 96 h Cu LC50 calculated was 0.655 mg L-1 (0.488-0.823). The 96 h Cu LC50 predicted by the BLM for Pimephales promelas was 0.722 mg L-1. Analysis of the inter-seasonal variation of the main water quality parameters indicates that a higher protective effect of calcium, magnesium, sodium, sulphate, and chloride is expected during the dry season. The very high load of total suspended solids in this river might be a key factor in determining copper distribution between the solid and solution phases. A cross-fish-species extrapolation of the copper BLM is valid within the water quality parameters and experimental conditions of this toxicity test.

  20. Glycated Hemoglobin Measurement and Prediction of Cardiovascular Disease

    Science.gov (United States)

    Angelantonio, Emanuele Di; Gao, Pei; Khan, Hassan; Butterworth, Adam S.; Wormser, David; Kaptoge, Stephen; Kondapally Seshasai, Sreenivasa Rao; Thompson, Alex; Sarwar, Nadeem; Willeit, Peter; Ridker, Paul M; Barr, Elizabeth L.M.; Khaw, Kay-Tee; Psaty, Bruce M.; Brenner, Hermann; Balkau, Beverley; Dekker, Jacqueline M.; Lawlor, Debbie A.; Daimon, Makoto; Willeit, Johann; Njølstad, Inger; Nissinen, Aulikki; Brunner, Eric J.; Kuller, Lewis H.; Price, Jackie F.; Sundström, Johan; Knuiman, Matthew W.; Feskens, Edith J. M.; Verschuren, W. M. M.; Wald, Nicholas; Bakker, Stephan J. L.; Whincup, Peter H.; Ford, Ian; Goldbourt, Uri; Gómez-de-la-Cámara, Agustín; Gallacher, John; Simons, Leon A.; Rosengren, Annika; Sutherland, Susan E.; Björkelund, Cecilia; Blazer, Dan G.; Wassertheil-Smoller, Sylvia; Onat, Altan; Marín Ibañez, Alejandro; Casiglia, Edoardo; Jukema, J. Wouter; Simpson, Lara M.; Giampaoli, Simona; Nordestgaard, Børge G.; Selmer, Randi; Wennberg, Patrik; Kauhanen, Jussi; Salonen, Jukka T.; Dankner, Rachel; Barrett-Connor, Elizabeth; Kavousi, Maryam; Gudnason, Vilmundur; Evans, Denis; Wallace, Robert B.; Cushman, Mary; D’Agostino, Ralph B.; Umans, Jason G.; Kiyohara, Yutaka; Nakagawa, Hidaeki; Sato, Shinichi; Gillum, Richard F.; Folsom, Aaron R.; van der Schouw, Yvonne T.; Moons, Karel G.; Griffin, Simon J.; Sattar, Naveed; Wareham, Nicholas J.; Selvin, Elizabeth; Thompson, Simon G.; Danesh, John

    2015-01-01

    IMPORTANCE The value of measuring levels of glycated hemoglobin (HbA1c) for the prediction of first cardiovascular events is uncertain. OBJECTIVE To determine whether adding information on HbA1c values to conventional cardiovascular risk factors is associated with improvement in prediction of cardiovascular disease (CVD) risk. DESIGN, SETTING, AND PARTICIPANTS Analysis of individual-participant data available from 73 prospective studies involving 294 998 participants without a known history of diabetes mellitus or CVD at the baseline assessment. MAIN OUTCOMES AND MEASURES Measures of risk discrimination for CVD outcomes (eg, C-index) and reclassification (eg, net reclassification improvement) of participants across predicted 10-year risk categories of low (<5%), intermediate (5% to <7.5%), and high (≥7.5%) risk. RESULTS During a median follow-up of 9.9 (interquartile range, 7.6-13.2) years, 20 840 incident fatal and nonfatal CVD outcomes (13 237 coronary heart disease and 7603 stroke outcomes) were recorded. In analyses adjusted for several conventional cardiovascular risk factors, there was an approximately J-shaped association between HbA1c values and CVD risk. The association between HbA1c values and CVD risk changed only slightly after adjustment for total cholesterol and triglyceride concentrations or estimated glomerular filtration rate, but this association attenuated somewhat after adjustment for concentrations of high-density lipoprotein cholesterol and C-reactive protein. The C-index for a CVD risk prediction model containing conventional cardiovascular risk factors alone was 0.7434 (95% CI, 0.7350 to 0.7517). The addition of information on HbA1c was associated with a C-index change of 0.0018 (0.0003 to 0.0033) and a net reclassification improvement of 0.42 (−0.63 to 1.48) for the categories of predicted 10-year CVD risk. The improvement provided by HbA1c assessment in prediction of CVD risk was equal to or better than estimated improvements for

  1. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered, and the state of the art in computationally tractable methods based on uncertainty tubes is presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...
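
    The receding-horizon idea the book builds on can be sketched for a scalar linear plant. The toy below solves the finite-horizon LQ problem by a backward Riccati recursion and then simply saturates the first input, a crude stand-in for the constrained optimization the text actually develops; all plant numbers are illustrative, not taken from the book.

```python
def riccati_gain(a, b, q, r, horizon):
    """Backward recursion for the time-varying LQ feedback gains."""
    p = q  # terminal cost weight P_N = q
    gains = []
    for _ in range(horizon):
        k = (a * b * p) / (r + b * b * p)
        gains.append(k)
        p = q + a * a * p - a * b * p * k
    return gains[::-1]  # reorder so gains[0] is the gain for t = 0

def mpc_step(x, a, b, q, r, horizon, u_max):
    """First input of the finite-horizon solution, saturated to [-u_max, u_max]."""
    u = -riccati_gain(a, b, q, r, horizon)[0] * x
    return max(-u_max, min(u_max, u))

# Closed-loop simulation of an unstable toy plant x+ = 1.2 x + u under |u| <= 2.
a, b, q, r = 1.2, 1.0, 1.0, 0.1
x = 5.0
for _ in range(30):
    x = a * x + b * mpc_step(x, a, b, q, r, horizon=10, u_max=2.0)
print(round(x, 6))
```

    Despite the saturation shortcut, the controller drives the unstable state to the origin here because the input bound is large enough for the initial condition; the book's methods replace the clamp with a proper constrained optimization that carries stability guarantees.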

  2. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
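
    Two of the three performance measures the study compares, the concordance index (for binary outcomes, equivalent to the AUC) and the Brier score, are simple enough to sketch directly. The outcome vector and the two sets of predicted probabilities below are invented for illustration, standing in for a full-covariate model and a propensity-score-based model.

```python
def concordance_index(y_true, y_prob):
    """Fraction of event/non-event pairs ranked correctly (ties count 1/2)."""
    pairs = concordant = 0.0
    for i, (yi, pi) in enumerate(zip(y_true, y_prob)):
        for yj, pj in zip(y_true[i + 1:], y_prob[i + 1:]):
            if yi == yj:
                continue  # only pairs with different outcomes are informative
            pairs += 1
            hi = pi if yi == 1 else pj  # probability assigned to the event case
            lo = pj if yi == 1 else pi
            concordant += 1.0 if hi > lo else (0.5 if hi == lo else 0.0)
    return concordant / pairs

def brier_score(y_true, y_prob):
    """Mean squared difference between outcome and predicted probability."""
    return sum((y - p) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

y      = [1, 0, 1, 0, 1, 0]
full   = [0.9, 0.2, 0.7, 0.4, 0.8, 0.1]   # hypothetical full-covariate model
scored = [0.7, 0.4, 0.6, 0.5, 0.6, 0.3]   # hypothetical propensity-based model
print(concordance_index(y, full), brier_score(y, full), brier_score(y, scored))
```

    On this toy data both models discriminate perfectly, but the Brier score still separates them: the better-calibrated probabilities of the "full" model yield the lower score, mirroring the study's finding that calibration can differ even when discrimination does not.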

  3. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Directory of Open Access Journals (Sweden)

    Saerom Park

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.

  4. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Science.gov (United States)

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
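
    One of the models named in the abstract, Gaussian process regression, reduces for small problems to a kernel matrix solve followed by a weighted sum of kernel evaluations. The sketch below uses an RBF kernel and a tiny invented "trade size vs. impact cost" data set; none of it reflects the paper's actual features or data.

```python
import math

def rbf(x1, x2, length=1.0):
    """Squared-exponential (RBF) kernel for scalar inputs."""
    return math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [v] for row, v in zip(A, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    out = [0.0] * n
    for r in range(n - 1, -1, -1):
        out[r] = (M[r][n] - sum(M[r][k] * out[k] for k in range(r + 1, n))) / M[r][r]
    return out

def gp_predict(xs, ys, x_star, noise=1e-6):
    """GP posterior mean: k_*^T (K + noise*I)^-1 y."""
    K = [[rbf(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(a * rbf(xi, x_star) for a, xi in zip(alpha, xs))

# Toy "normalized trade size -> impact cost" data; the GP mean interpolates it.
xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 0.8, 1.1, 1.3]
print(round(gp_predict(xs, ys, 1.0), 3), round(gp_predict(xs, ys, 1.5), 3))
```

    With near-zero noise the posterior mean passes through the training points while interpolating smoothly between them, which is the property that makes GPs attractive for the concave size-impact relationships described in the market impact literature.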

  5. Gaussian mixture models as flux prediction method for central receivers

    Science.gov (United States)

    Grobler, Annemarie; Gauché, Paul; Smit, Willie

    2016-05-01

    Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
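
    The core fitting step the authors describe, fitting a Gaussian mixture to a measured profile, can be sketched in one dimension with a two-component EM loop. The paper's flux maps are two-dimensional and the data, initialization, and component count below are illustrative assumptions only.

```python
import math, random

def pdf(x, mu, var):
    """Univariate normal density."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_gmm2(data, iters=200):
    """EM for a 1-D, two-component Gaussian mixture."""
    mu = [min(data), max(data)]          # crude but effective initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * pdf(x, mu[k], var[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
    return w, mu, var

random.seed(0)
data = ([random.gauss(-2.0, 0.5) for _ in range(200)]
        + [random.gauss(3.0, 0.8) for _ in range(200)])
w, mu, var = fit_gmm2(data)
print(sorted(round(m, 1) for m in mu))
```

    The recovered component means land near the true cluster centers; for flux maps the same procedure runs with bivariate Gaussians, and the fitted parameters are what the authors store in their database for later prediction.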

  6. Measurement and prediction of pork colour.

    Science.gov (United States)

    Van Oeckel, M J; Warnants, N; Boucqué, C V

    1999-08-01

    The extent to which instrumental colour determinations by FOPu (light scattering), Göfo (reflectance) and Labscan II (CIE L*, CIE a* and CIE b*, hue and chroma) are related to the Japanese colour grades was studied. Additionally, four on-line methods: pH1, FOP1, PQM1 (conductivity) and DDLT (Double Density Light Transmission, analogous to Capteur Gras/Maigre), were evaluated for their ability to predict colour subjectively and objectively. One hundred and twenty samples of m. longissimus thoracis et lumborum, from animals of different genotypes, were analysed. Of the instrumental colour determinations, CIE L* (r=-0.82), FOPu (r=-0.70) and Göfo (r=0.70) were best correlated with the Japanese colour scores. The Japanese colour grades could be predicted by the on-line instruments, pH1, FOP1, PQM1 and DDLT, with determination coefficients between 15 and 28%. Ultimate meat colour, determined by Japanese colour standards, FOPu, Göfo and CIE L*, was better predicted by DDLT than by the classic on-line instruments: FOP1, pH1 and PQM1, although the standard error of the estimate was similar for all instruments. This means that DDLT, although originally designed for estimating lean meat percentage, can additionally give information about meat quality, in particular colour. However, it must be stressed that the colour estimate by DDLT refers to a population of animals, rather than to individual pigs, because of the number of erroneously assigned samples.

  7. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
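
    The AUC-with-confidence-interval reporting used in this study can be reproduced on synthetic data with a rank-based AUC and a percentile bootstrap. The prevalence, score distributions, and bootstrap settings below are illustrative choices, not the study's data.

```python
import random

def auc(y, scores):
    """Probability that a random case outranks a random non-case (ties = 1/2)."""
    pos = [s for yi, s in zip(y, scores) if yi == 1]
    neg = [s for yi, s in zip(y, scores) if yi == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(y, scores, n_boot=200, seed=1):
    """Percentile 95% CI for the AUC via case resampling."""
    rng = random.Random(seed)
    stats = []
    idx = range(len(y))
    while len(stats) < n_boot:
        sample = [rng.choice(idx) for _ in idx]
        yb = [y[i] for i in sample]
        if 0 < sum(yb) < len(yb):          # resample must contain both classes
            stats.append(auc(yb, [scores[i] for i in sample]))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

# Synthetic cohort: ~30% fallers whose risk scores are shifted upward.
random.seed(2)
y = [1 if random.random() < 0.3 else 0 for _ in range(200)]
scores = [random.gauss(1.0, 1.0) if yi else random.gauss(0.0, 1.0) for yi in y]
lo, hi = bootstrap_auc_ci(y, scores)
print(round(auc(y, scores), 2), (round(lo, 2), round(hi, 2)))
```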

  9. Modelling of physical properties - databases, uncertainties and predictive power

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Physical and thermodynamic properties, in the form of raw data or estimated values for pure compounds and mixtures, are important prerequisites for performing tasks such as process design, simulation and optimization; computer-aided molecular/mixture (product) design; and product-process analysis...... connectivity approach. The development of these models requires measured property data, based on which the regression of model parameters is performed. Although this class of models is empirical by nature, they do allow extrapolation from the regressed model parameters to predict properties of chemicals...... not included in the measured data-set. Therefore, they are also considered as predictive models. The paper will highlight different issues/challenges related to the role of the databases, the mathematical and thermodynamic consistency of the measured/estimated data, and the predictive nature of the developed...

  10. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially building acoustics. These include simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborated...... principles such as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed...

  11. Massive Predictive Modeling using Oracle R Enterprise

    CERN Document Server

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  12. On the Predictiveness of Single-Field Inflationary Models

    CERN Document Server

    Burgess, C.P.; Trott, Michael

    2014-01-01

    We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for $A_s$, $r$ and $n_s$ are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in prin...

  13. Lipid measures and cardiovascular disease prediction

    NARCIS (Netherlands)

    van Wijk, D.F.; Stroes, E.S.G.; Kastelein, J.J.P.

    2009-01-01

    Traditional lipid measures are the cornerstone of risk assessment and treatment goals in cardiovascular prevention. Whereas the association between total, LDL-, HDL-cholesterol and cardiovascular disease risk has been generally acknowledged, the rather poor capacity to distinguish between patients

  14. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  15. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  16. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Lipid Measures and Cardiovascular Disease Prediction

    OpenAIRE

    van Wijk, Diederik F.; Stroes, Erik S. G.; Kastelein, John J.P.

    2009-01-01

    Traditional lipid measures are the cornerstone of risk assessment and treatment goals in cardiovascular prevention. Whereas the association between total, LDL-, HDL-cholesterol and cardiovascular disease risk has been generally acknowledged, the rather poor capacity to distinguish between patients who will and those who will not develop cardiovascular disease has prompted the search for further refinement of these traditional measures. A thorough understanding of lipid metabolism is mandatory...

  7. A Course in... Model Predictive Control.

    Science.gov (United States)

    Arkun, Yaman; And Others

    1988-01-01

    Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)

  8. Equivalency and unbiasedness of grey prediction models

    Institute of Scientific and Technical Information of China (English)

    Bo Zeng; Chuan Li; Guo Chen; Xianjun Long

    2015-01-01

    In order to deeply research the structure discrepancy and modeling mechanism among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x(0)(k) + az(1)(k) = b have the identical model structure and simulation precision. Moreover, unbiased simulation of a homogeneous exponential sequence can be accomplished. However, the models derived from dx(1)/dt + ax(1) = b are only close to those derived from x(0)(k) + az(1)(k) = b provided that |a| satisfies |a| < 0.1; nor can unbiased simulation of a homogeneous exponential sequence be achieved with them. The above conclusions are proved and verified through theorems and examples.
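
    The discrete form x(0)(k) + a z(1)(k) = b analyzed in the abstract can be sketched directly: estimate (a, b) by least squares on the background values z(1)(k), then run the recursion it implies on the accumulated series. On a homogeneous exponential input the simulation reproduces the data exactly, illustrating the unbiasedness property; the sequence values are illustrative.

```python
def gm11_fit(x0):
    """Least-squares estimate of (a, b) in x0(k) = -a*z1(k) + b, k = 2..n."""
    x1 = [x0[0]]
    for v in x0[1:]:
        x1.append(x1[-1] + v)           # accumulated generating operation
    z1 = [(x1[k] + x1[k - 1]) / 2 for k in range(1, len(x1))]
    y = x0[1:]
    n = len(y)
    sz, szz = sum(z1), sum(z * z for z in z1)
    sy, szy = sum(y), sum(z * v for z, v in zip(z1, y))
    slope = (n * szy - sz * sy) / (n * szz - sz * sz)
    return -slope, (sy - slope * sz) / n

def gm11_simulate(x0, a, b):
    """Run x1(k)(1 + a/2) = x1(k-1)(1 - a/2) + b and difference back to x0."""
    x1 = x0[0]
    sim = [x0[0]]
    for _ in x0[1:]:
        x1_next = ((1 - a / 2) * x1 + b) / (1 + a / 2)
        sim.append(x1_next - x1)
        x1 = x1_next
    return sim

x0 = [2.0 * 1.3 ** k for k in range(6)]   # homogeneous exponential data
a, b = gm11_fit(x0)
sim = gm11_simulate(x0, a, b)
print(max(abs(s - v) for s, v in zip(sim, x0)))
```

    The fitted points (z1(k), x0(k)) are exactly collinear for an exponential input, so the recursion reproduces the series to floating-point precision; models built instead on the whitening equation dx(1)/dt + ax(1) = b would not achieve this.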

  9. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are better or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.

  10. Hybrid modeling and prediction of dynamical systems

    Science.gov (United States)

    Lloyd, Alun L.; Flores, Kevin B.

    2017-01-01

    Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642
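
    The mechanistic half of such a hybrid scheme, for the Lorenz-63 test case the authors use, is just an ODE integrator. The sketch below integrates the system with a fixed-step RK4 scheme using the classical parameter values; the step size, horizon, and initial condition are assumptions.

```python
def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system (classical parameters)."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, state, dt):
    """One fixed-step fourth-order Runge-Kutta update."""
    k1 = f(state)
    k2 = f(tuple(s + dt / 2 * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + dt / 2 * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
for _ in range(2000):                 # 20 time units at dt = 0.01
    state = rk4_step(lorenz63, state, 0.01)
print(state)
```

    In the hybrid approach described above, a subset of these mechanistic equations would be replaced by data-driven nonparametric surrogates while the rest remain as written here.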

  11. Measuring Active Learning to Predict Course Quality

    Science.gov (United States)

    Taylor, John E.; Ku, Heng-Yu

    2011-01-01

    This study investigated whether active learning within computer-based training courses can be measured and whether it serves as a predictor of learner-perceived course quality. A major corporation participated in this research, providing access to internal employee training courses, training representatives, and historical course evaluation data.…

  12. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    It is extremely important to predict logistics requirements in a scientific and rational way. In recent years, however, improvements to prediction methods have been modest, and traditional statistical prediction methods suffer from low precision and poor interpretability: they can neither theoretically guarantee the generalization ability of the prediction model nor explain the models effectively. Therefore, in combination with theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that produce large volumes of cargo and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that can affect regional logistics requirements, the study establishes a logistics requirements potential model based on spatial economic principles, expanding logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  13. Population Propensity Measurement Model

    Science.gov (United States)

    1993-12-01

    Only fragments of the scanned abstract are recoverable. They define binary high-school coursework indicators (DQ702 = taken elementary algebra; DQ703 = taken plane geometry; DQ706 = taken intermediate algebra; DQ707 = taken trigonometry; each coded 1 = taken, 0 = otherwise) and mention separate models for distributing the arrival of applicants over fiscal years, quarters, or months, the primary obstacle in these models being shifting the...

  14. Critical Zone Observatories (CZOs): Integrating measurements and models of Earth surface processes to improve prediction of landscape structure, function and evolution

    Science.gov (United States)

    Chorover, J.; Anderson, S. P.; Bales, R. C.; Duffy, C.; Scatena, F. N.; Sparks, D. L.; White, T.

    2012-12-01

    The "Critical Zone" - that portion of Earth's land surface that extends from the outer periphery of the vegetation canopy to the lower limit of circulating groundwater - has evolved in response to climatic and tectonic forcing throughout Earth's history, but human activities have recently emerged as a major agent of change as well. With funding from NSF, a network of currently six CZOs is being developed in the U.S. to provide infrastructure, data and models that facilitate understanding the evolution, structure, and function of this zone at watershed to grain scales. Each CZO is motivated by a unique set of hypotheses proposed by a specific investigator team, but coordination of cross-site activities is also leading to integration of a common set of multi-disciplinary tools and approaches for cross-site syntheses. The resulting harmonized four-dimensional datasets are intended to facilitate community-wide exploration of process couplings among hydrology, ecology, soil science, geochemistry and geomorphology across the larger (network-scale) parameter space. Such an approach enables testing of the generalizability of findings at a given site, and also of emergent hypotheses conceived independently of an original CZO investigator team. This two-pronged method for developing a network of individual CZOs across a range of watershed systems is now yielding novel observations and models that resolve mechanisms for Critical Zone change occurring on geological to hydrologic time-scales. For example, recent advances include improved understanding of (i) how mass and energy flux as modulated by ecosystem exchange transforms bedrock to structured, soil-mantled and/or erosive landscapes; (ii) how long-term evolution of landscape structure affects event-based hydrologic and biogeochemical response at pore to catchment scales; (iii) how complementary isotopic measurements can be used to resolve pathways and time scales of water and solute transport from canopy to stream, and

  15. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and hence assumed...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...
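    The ODE-versus-SDE contrast described in this record can be sketched as follows; this is an illustrative toy (a one-compartment elimination model with invented rate and diffusion constants), not the authors' implementation.

```python
import math
import random

# Illustrative sketch: Euler-Maruyama simulation of dX = -k*X dt + sigma dW.
# With sigma = 0 this reduces to the deterministic ODE dX/dt = -k*X; with
# sigma > 0 the model admits system noise, so the state is no longer
# perfectly predictable.  k, sigma, dt and x0 are made-up values.

def simulate(k=0.5, sigma=0.0, x0=10.0, dt=0.01, steps=1000, seed=1):
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Wiener-process increment
        x += -k * x * dt + sigma * dw        # Euler-Maruyama update
    return x

ode_end = simulate(sigma=0.0)   # close to the exact solution 10*exp(-5)
sde_end = simulate(sigma=0.3)   # same drift, but a random realization
print(ode_end, sde_end)
```

    The deterministic run tracks the closed-form solution, while repeated stochastic runs scatter around it, which is exactly the extra single-subject variability the SDE setup is meant to capture.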

  16. Precision Plate Plan View Pattern Predictive Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yang; YANG Quan; HE An-rui; WANG Xiao-chen; ZHANG Yun

    2011-01-01

    According to the rolling features of plate mill, a 3D elastic-plastic FEM (finite element model) based on full restart method of ANSYS/LS-DYNA was established to study the inhomogeneous plastic deformation of multipass plate rolling. By analyzing the simulation results, the difference of head and tail ends predictive models was found and modified. According to the numerical simulation results of 120 different kinds of conditions, precision plate plan view pattern predictive model was established. Based on these models, the sizing MAS (Mizushima automatic plan view pattern control system) method was designed and used on a 2 800 mm plate mill. Comparing the rolled plates with and without PVPP (plan view pattern predictive) model, the reduced width deviation indicates that the plate plan view pattern predictive model is precise.

  17. New Approaches for Channel Prediction Based on Sinusoidal Modeling

    Directory of Open Access Journals (Sweden)

    Ekman Torbjörn

    2007-01-01

    Full Text Available Long-range channel prediction is considered to be one of the most important enabling technologies to future wireless communication systems. The prediction of Rayleigh fading channels is studied in the frame of sinusoidal modeling in this paper. A stochastic sinusoidal model to represent a Rayleigh fading channel is proposed. Three different predictors based on the statistical sinusoidal model are proposed. These methods outperform the standard linear predictor (LP in Monte Carlo simulations, but underperform with real measurement data, probably due to nonstationary model parameters. To mitigate these modeling errors, a joint moving average and sinusoidal (JMAS prediction model and the associated joint least-squares (LS predictor are proposed. It combines the sinusoidal model with an LP to handle unmodeled dynamics in the signal. The joint LS predictor outperforms all the other sinusoidal LMMSE predictors in suburban environments, but still performs slightly worse than the standard LP in urban environments.
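    The standard linear predictor (LP) used as the baseline in this record can be sketched on a toy signal; the tone frequency and model order here are invented for illustration, and a noiseless single tone is deliberately the easy case for an LP.

```python
import math

# Illustrative sketch of an order-2 linear predictor fitted by least
# squares.  A pure sinusoid obeys x[n] = 2*cos(w)*x[n-1] - x[n-2] exactly,
# so the fitted coefficients recover that recursion and the one-step-ahead
# prediction is essentially exact.  Frequency and length are made up.

w = 2 * math.pi * 0.05
x = [math.sin(w * n) for n in range(200)]

# Normal equations for (a1, a2) in x[n] ~ a1*x[n-1] + a2*x[n-2]
s11 = sum(x[n - 1] * x[n - 1] for n in range(2, 200))
s12 = sum(x[n - 1] * x[n - 2] for n in range(2, 200))
s22 = sum(x[n - 2] * x[n - 2] for n in range(2, 200))
b1 = sum(x[n] * x[n - 1] for n in range(2, 200))
b2 = sum(x[n] * x[n - 2] for n in range(2, 200))
det = s11 * s22 - s12 * s12
a1 = (b1 * s22 - b2 * s12) / det
a2 = (b2 * s11 - b1 * s12) / det

pred = a1 * x[-1] + a2 * x[-2]     # one-step-ahead prediction
true_next = math.sin(w * 200)
print(a1, a2, pred, true_next)
```

    On measured channels the fading is noisy and nonstationary, which is why the record reports that purely sinusoidal models degrade there and a joint moving-average term helps.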

  18. Observational attachment theory-based parenting measures predict children's attachment narratives independently from social learning theory-based measures.

    Science.gov (United States)

    Matias, Carla; O'Connor, Thomas G; Futh, Annabel; Scott, Stephen

    2014-01-01

    Conceptually and methodologically distinct models exist for assessing quality of parent-child relationships, but few studies contrast competing models or assess their overlap in predicting developmental outcomes. Using observational methodology, the current study examined the distinctiveness of attachment theory-based and social learning theory-based measures of parenting in predicting two key measures of child adjustment: security of attachment narratives and social acceptance in peer nominations. A total of 113 5-6-year-old children from ethnically diverse families participated. Parent-child relationships were rated using standard paradigms. Measures derived from attachment theory included sensitive responding and mutuality; measures derived from social learning theory included positive attending, directives, and criticism. Child outcomes were independently-rated attachment narrative representations and peer nominations. Results indicated that Attachment theory-based and Social Learning theory-based measures were modestly correlated; nonetheless, parent-child mutuality predicted secure child attachment narratives independently of social learning theory-based measures; in contrast, criticism predicted peer-nominated fighting independently of attachment theory-based measures. In young children, there is some evidence that attachment theory-based measures may be particularly predictive of attachment narratives; however, no single model of measuring parent-child relationships is likely to best predict multiple developmental outcomes. Assessment in research and applied settings may benefit from integration of different theoretical and methodological paradigms.

  19. NBC Hazard Prediction Model Capability Analysis

    Science.gov (United States)

    1999-09-01

    Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers...TO SUBSTANTIAL DIFFERENCES IN PREDICTIONS HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model... SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method - an arbitrary time-dependent concentration field is represented

  20. Validation of Biomarker-based risk prediction models

    OpenAIRE

    Taylor, Jeremy M.G.; Ankerst, Donna P.; Andridge, Rebecca R.

    2008-01-01

    The increasing availability and use of predictive models to facilitate informed decision making highlights the need for careful assessment of the validity of these models. In particular, models involving biomarkers require careful validation for two reasons: issues with overfitting when complex models involve a large number of biomarkers, and inter-laboratory variation in assays used to measure biomarkers. In this paper we distinguish between internal and external statistical validation. Inte...

  1. A Prediction Model of MF Radiation in Environmental Assessment

    Institute of Scientific and Technical Information of China (English)

    HE-SHAN GE; YAN-FENG HONG

    2006-01-01

    Objective To predict the impact of MF radiation on human health. Methods The vertical distribution of field intensity was estimated by analogism on the basis of measured values from simulation measurement. Results An analogism based on a geometric proportion decay pattern is put forward in this paper. It showed that, with increasing height, the field intensity increased according to a geometric proportion law. Conclusion This geometric proportion prediction model can be used to estimate the impact of MF radiation on inhabited environment, and can act as a reference pattern in predicting the environmental impact level of MF radiation.
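    A geometric-proportion extrapolation of this kind can be sketched as follows; the intensities and ratio are invented, not taken from the study's measurements.

```python
# Illustrative sketch of constant-ratio (geometric proportion) prediction:
# from intensities measured at two adjacent heights, estimate the per-level
# ratio and extrapolate to higher levels.  All numbers are made up.

def geometric_predict(i0, i1, steps):
    """Extrapolate intensity `steps` levels above the i1 measurement."""
    ratio = i1 / i0              # constant ratio between adjacent levels
    return i1 * ratio ** steps

# e.g. 2.0 V/m at one level and 2.4 V/m one level up gives ratio 1.2,
# so two further levels up the predicted intensity is 2.4 * 1.2**2.
print(geometric_predict(2.0, 2.4, 2))
```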

  2. Hybrid video quality prediction: reviewing video quality measurement for widening application scope

    OpenAIRE

    Barkowsky, Marcus; Sedano, Inigo; Brunnstrom, Kjell; Leszczuk, Mikolaj; Staelens, Nicolas

    2015-01-01

    A tremendous number of objective video quality measurement algorithms have been developed during the last two decades. Most of them either measure a very limited aspect of the perceived video quality or they measure broad ranges of quality with limited prediction accuracy. This paper lists several perceptual artifacts that may be computationally measured in an isolated algorithm and some of the modeling approaches that have been proposed to predict the resulting quality from those algorithms....

  3. Modelling and prediction of non-stationary optical turbulence behaviour

    Science.gov (United States)

    Doelman, Niek; Osborn, James

    2016-07-01

    There is a strong need to model the temporal fluctuations in turbulence parameters, for instance for scheduling, simulation and prediction purposes. This paper aims at modelling the dynamic behaviour of the turbulence coherence length r0, utilising measurement data from the Stereo-SCIDAR instrument installed at the Isaac Newton Telescope at La Palma. Based on an estimate of the power spectral density function, a low order stochastic model to capture the temporal variability of r0 is proposed. The impact of this type of stochastic model on the prediction of the coherence length behaviour is shown.

  4. Using connectome-based predictive modeling to predict individual behavior from brain connectivity.

    Science.gov (United States)

    Shen, Xilin; Finn, Emily S; Scheinost, Dustin; Rosenberg, Monica D; Chun, Marvin M; Papademetris, Xenophon; Constable, R Todd

    2017-03-01

    Neuroimaging is a fast-developing research area in which anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale data sets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: (i) feature selection, (ii) feature summarization, (iii) model building, and (iv) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a considerable amount of the variance in these measures. It has been demonstrated that the CPM protocol performs as well as or better than many of the existing approaches in brain-behavior prediction. As CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization will find it easy to implement these protocols. Depending on the volume of data to be processed, the protocol can take 10-100 min for model building, 1-48 h for permutation testing, and 10-20 min for visualization of results.
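    Steps (i)-(iv) of the CPM protocol can be sketched on synthetic data roughly as follows; the data sizes, edge weights and selection threshold are invented for illustration, and this is not the protocol's reference implementation.

```python
import numpy as np

# Illustrative sketch of connectome-based predictive modeling (CPM) on
# synthetic data: in each cross-validation fold, select edges whose
# strength correlates with behavior, summarize them into one score, fit a
# linear model, and assess prediction on the held-out subject.

rng = np.random.default_rng(0)
n_sub, n_edge = 60, 300
X = rng.normal(size=(n_sub, n_edge))           # edge strengths per subject
w = np.zeros(n_edge)
w[:3] = 2.0                                    # three truly predictive edges
y = X @ w + rng.normal(scale=0.5, size=n_sub)  # behavioral measure

preds = np.empty(n_sub)
for i in range(n_sub):                         # leave-one-out CV
    tr = np.arange(n_sub) != i
    r = np.array([np.corrcoef(X[tr, j], y[tr])[0, 1] for j in range(n_edge)])
    mask = np.abs(r) > 0.3                     # (i) feature selection
    s_tr = X[tr][:, mask].sum(axis=1)          # (ii) summarize to one score
    slope, icept = np.polyfit(s_tr, y[tr], 1)  # (iii) linear model
    preds[i] = slope * X[i, mask].sum() + icept

r_pred = np.corrcoef(preds, y)[0, 1]           # (iv) assess prediction
print(round(r_pred, 2))
```

    In the protocol proper, significance of `r_pred` is then established by permutation testing rather than taken at face value.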

  5. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

    The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in

  6. Remaining Useful Lifetime (RUL) - Probabilistic Predictive Model

    Directory of Open Access Journals (Sweden)

    Ephraim Suhir

    2011-01-01

    Full Text Available Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated (considering customer requirements and the existing specifications) by electrical, optical and mechanical measurements and testing; checked (screened) during manufacturing (fabrication); and, if necessary and appropriate, maintained in the field during the product’s operation. A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve has been assessed. The general concepts are illustrated by numerical examples. The model can be employed, along with other PHM forecasting and interfering tools and means, to evaluate and to maintain the high level of the reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.

  7. Predictive modeling for EBPC in EBDW

    Science.gov (United States)

    Zimmermann, Rainer; Schulz, Martin; Hoppe, Wolfgang; Stock, Hans-Jürgen; Demmerle, Wolfgang; Zepka, Alex; Isoyan, Artak; Bomholt, Lars; Manakli, Serdar; Pain, Laurent

    2009-10-01

    We demonstrate a flow for e-beam proximity correction (EBPC) to e-beam direct write (EBDW) wafer manufacturing processes, demonstrating a solution that covers all steps from the generation of a test pattern for (experimental or virtual) measurement data creation, over e-beam model fitting, proximity effect correction (PEC), and verification of the results. We base our approach on a predictive, physical e-beam simulation tool, with the possibility to complement this with experimental data, and the goal of preparing the EBPC methods for the advent of high-volume EBDW tools. As an example, we apply and compare dose correction and geometric correction for low and high electron energies on 1D and 2D test patterns. In particular, we show some results of model-based geometric correction as it is typical for the optical case, but enhanced for the particularities of e-beam technology. The results are used to discuss PEC strategies, with respect to short and long range effects.

  8. Modelling Chemical Reasoning to Predict Reactions

    CERN Document Server

    Segler, Marwin H S

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. We show that our data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-) discovering novel transformations (even including transition-metal catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph, and because each single reaction prediction is typically ac...

  9. Predictive error analysis for a water resource management model

    Science.gov (United States)

    Gallagher, Mark; Doherty, John

    2007-02-01

    Summary In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real world, water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.
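    The core quantity discussed in this record - the error variance of a prediction that depends linearly on uncertain parameters - can be illustrated with a toy calculation; the sensitivity vectors and parameter covariance below are invented. For a prediction y = s·p with parameter covariance C, the predictive error variance is s^T C s, and averaging across parameters typically reduces it.

```python
# Illustrative sketch: predictive error variance s^T C s for a linear
# prediction y = s.p.  A prediction depending on one detailed parameter
# is compared with an averaged prediction.  C and s are made-up numbers.

def predictive_variance(s, C):
    """Return s^T C s for sensitivity vector s and covariance matrix C."""
    n = len(s)
    return sum(s[i] * C[i][j] * s[j] for i in range(n) for j in range(n))

C = [[0.04, 0.01],               # invented parameter covariance matrix
     [0.01, 0.09]]
point = [1.0, 0.0]               # depends entirely on one detailed parameter
average = [0.5, 0.5]             # averaged over both parameters
print(predictive_variance(point, C), predictive_variance(average, C))
```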

  10. Models for short term malaria prediction in Sri Lanka

    Directory of Open Access Journals (Sweden)

    Galappaththy Gawrie NL

    2008-05-01

    Full Text Available Abstract Background Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall were assessed for their ability to improve prediction of selected (seasonal) ARIMA models. Results The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal) ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion Heterogeneity of patterns of malaria in Sri Lanka requires regionally specific prediction models. Prediction error was large at a minimum of 22% (for one of the districts) for one month ahead predictions. The modest improvement made in short term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed.
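    The gap between a non-seasonal smoother and a season-aware forecaster, which motivates the (S)ARIMA comparison in this record, can be illustrated on a synthetic monthly series; the series and smoothing constant are invented, not Sri Lankan case counts.

```python
import math

# Illustrative sketch: one-step forecasts of a seasonal monthly series by
# an exponentially weighted moving average (EWMA) versus a seasonal-naive
# rule (repeat the value from 12 months earlier), scored by mean absolute
# error.  The synthetic series has an exact 12-month cycle.

series = [100 + 40 * math.sin(2 * math.pi * m / 12) + 5 * (-1) ** m
          for m in range(60)]                      # 5 invented years

def ewma_forecasts(x, alpha=0.3):
    level, out = x[0], []
    for v in x[1:]:
        out.append(level)                          # forecast before seeing v
        level = alpha * v + (1 - alpha) * level
    return out

def seasonal_naive(x, period=12):
    return [x[t - period] for t in range(period, len(x))]

mae = lambda f, a: sum(abs(p - t) for p, t in zip(f, a)) / len(f)
ewma_mae = mae(ewma_forecasts(series)[11:], series[12:])   # months 12..59
snaive_mae = mae(seasonal_naive(series), series[12:])
print(ewma_mae, snaive_mae)
```

    On this perfectly periodic toy the seasonal rule is near-exact while the EWMA lags the cycle badly, which is why seasonal components matter for series like monthly malaria counts.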

  11. Personalized Predictive Modeling and Risk Factor Identification using Patient Similarity.

    Science.gov (United States)

    Ng, Kenney; Sun, Jimeng; Hu, Jianying; Wang, Fei

    2015-01-01

    Personalized predictive models are customized for an individual patient and trained using information from similar patients. Compared to global models trained on all patients, they have the potential to produce more accurate risk scores and capture more relevant risk factors for individual patients. This paper presents an approach for building personalized predictive models and generating personalized risk factor profiles. A locally supervised metric learning (LSML) similarity measure is trained for diabetes onset and used to find clinically similar patients. Personalized risk profiles are created by analyzing the parameters of the trained personalized logistic regression models. A 15,000 patient data set, derived from electronic health records, is used to evaluate the approach. The predictive results show that the personalized models can outperform the global model. Cluster analysis of the risk profiles shows groups of patients with similar risk factors, differences in the top risk factors for different groups of patients and differences between the individual and global risk factors.
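    The personalization idea - score an index patient using only the most similar patients rather than the whole cohort - can be sketched with a toy example; plain Euclidean distance stands in for the learned LSML similarity measure, and the patient data are invented.

```python
import math

# Illustrative sketch of patient-similarity-based risk scoring: find the
# k nearest patients to the index patient and use their mean outcome as a
# local risk estimate (a stand-in for a locally trained model).

patients = [  # (feature vector, outcome) - invented toy cohort
    ([1.0, 0.1], 0), ([0.9, 0.2], 0), ([1.1, 0.0], 0),
    ([0.0, 1.0], 1), ([0.1, 0.9], 1), ([0.2, 1.1], 1),
]

def personalized_risk(x, k=3):
    nearest = sorted(patients, key=lambda p: math.dist(p[0], x))[:k]
    return sum(y for _, y in nearest) / k   # local mean outcome as risk

print(personalized_risk([0.95, 0.15]))      # index patient near cluster 0
print(personalized_risk([0.05, 1.05]))      # index patient near cluster 1
```

    The paper's approach additionally learns the similarity metric itself and fits a logistic regression on the similar patients; this sketch only shows why locality can change the risk estimate.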

  12. Muon polarization in the MEG experiment: predictions and measurements

    CERN Document Server

    Baldini, A M; Baracchini, E; Bemporad, C; Berg, F; Biasotti, M; Boca, G; Cattaneo, P W; Cavoto, G; Cei, F; Chiarello, G; Chiri, C; De Bari, A; De Gerone, M; DÓnofrio, A; Dussoni, S; Fujii, Y; Galli, L; Gatti, F; Grancagnolo, F; Grassi, M; Graziosi, A; Grigoriev, D N; Haruyama, T; Hildebrandt, M; Hodge, Z; Ieki, K; Ignatov, F; Iwamoto, T; Kaneko, D; Kang, T I; Kettle, P R; Khazin, B I; Khomutov, N; Korenchenko, A; Kravchuk, N; Lim, G M A; Mihara, S; Molzon, W; Mori, T; Mtchedlishvili, A; Nakaura, S; Nicolò, D; Nishiguchi, H; Nishimura, M; Ogawa, S; Ootani, W; Panareo, M; Papa, A; Pepino, A; Piredda, G; Pizzigoni, G; Popov, A; Renga, F; Ripiccini, E; Ritt, S; Rossella, M; Rutar, G; Sawada, R; Sergiampietri, F; Signorelli, G; Tassielli, G; Tenchini, F; Uchiyama, Y; Venturini, M; Voena, C; Yamamoto, A; Yoshida, K; You, Z; Yudin, Y V

    2015-01-01

    The MEG experiment makes use of one of the world's most intense low energy muon beams, in order to search for the lepton flavour violating process $\\mu^{+} \\rightarrow {\\rm e}^{+} \\gamma$. We determined the residual beam polarization at the thin stopping target, by measuring the asymmetry of the angular distribution of Michel decay positrons as a function of energy. The initial muon beam polarization at the production is predicted to be $P_{\\mu} = -1$ by the Standard Model (SM) with massless neutrinos. We estimated our residual muon polarization to be $P_{\\mu} = -0.85 \\pm 0.03 ~ {\\rm (stat)} ~ { }^{+ 0.04}_{-0.05} ~ {\\rm (syst)}$ at the stopping target, which is consistent with the SM predictions when the depolarizing effects occurring during the muon production, propagation and moderation in the target are taken into account. The knowledge of beam polarization is of fundamental importance in order to model the background of our ${\\megsign}$ search induced by the muon radiative decay: $\\mu^{+} \\rightarrow {\\r...

  13. Muon polarization in the MEG experiment: predictions and measurements

    Energy Technology Data Exchange (ETDEWEB)

    Baldini, A.M.; Dussoni, S.; Galli, L.; Grassi, M.; Sergiampietri, F.; Signorelli, G. [Pisa Univ. (Italy); INFN Sezione di Pisa, Pisa (Italy); Bao, Y.; Hildebrandt, M.; Kettle, P.R.; Mtchedlishvili, A.; Papa, A.; Ritt, S. [Paul Scherrer Institut PSI, Villigen (Switzerland); Baracchini, E. [University of Tokyo, ICEPP, Tokyo (Japan); INFN, Laboratori Nazionali di Frascati, Rome (Italy); Bemporad, C.; Cei, F.; D' Onofrio, A.; Nicolo, D.; Tenchini, F. [INFN Sezione di Pisa, Pisa (Italy); Pisa Univ., Dipartimento di Fisica, Pisa (Italy); Berg, F.; Hodge, Z.; Rutar, G. [Paul Scherrer Institut PSI, Villigen (Switzerland); Swiss Federal Institute of Technology ETH, Zurich (Switzerland); Biasotti, M.; Gatti, F.; Pizzigoni, G. [INFN Sezione di Genova, Genova (Italy); Genova Univ., Dipartimento di Fisica, Genova (Italy); Boca, G.; De Bari, A. [INFN Sezione di Pavia, Pavia (Italy); Pavia Univ., Dipartimento di Fisica, Pavia (Italy); Cattaneo, P.W.; Rossella, M. [Pavia Univ. (Italy); INFN Sezione di Pavia, Pavia (Italy); Cavoto, G.; Piredda, G.; Renga, F.; Voena, C. [Univ. ' ' Sapienza' ' , Rome (Italy); INFN Sezione di Roma, Rome (Italy); Chiarello, G.; Panareo, M.; Pepino, A. [INFN Sezione di Lecce, Lecce (Italy); Univ. del Salento, Dipartimento di Matematica e Fisica, Lecce (Italy); Chiri, C.; Grancagnolo, F.; Tassielli, G.F. [Univ. del Salento (Italy); INFN Sezione di Lecce, Lecce (Italy); De Gerone, M. [Genova Univ. (Italy); INFN Sezione di Genova, Genova (Italy); Fujii, Y.; Iwamoto, T.; Kaneko, D.; Mori, Toshinori; Nakaura, S.; Nishimura, M.; Ogawa, S.; Ootani, W.; Sawada, R.; Uchiyama, Y.; Yoshida, K. [University of Tokyo, ICEPP, Tokyo (Japan); Graziosi, A.; Ripiccini, E. [INFN Sezione di Roma, Rome (Italy); Univ. ' ' Sapienza' ' , Dipartimento di Fisica, Rome (Italy); Grigoriev, D.N. 
[Budker Institute of Nuclear Physics of Siberian Branch of Russian Academy of Sciences, Novosibirsk (Russian Federation); Novosibirsk State Technical University, Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Haruyama, T.; Mihara, S.; Nishiguchi, H.; Yamamoto, A. [KEK, High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan); Ieki, K. [Paul Scherrer Institut PSI, Villigen (Switzerland); University of Tokyo, ICEPP, Tokyo (Japan); Ignatov, F.; Khazin, B.I.; Popov, A.; Yudin, Yu.V. [Budker Institute of Nuclear Physics of Siberian Branch of Russian Academy of Sciences, Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Kang, T.I.; Lim, G.M.A.; Molzon, W.; You, Z. [University of California, Irvine, CA (United States); Khomutov, N.; Korenchenko, A.; Kravchuk, N. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Venturini, M. [Pisa Univ. (Italy); INFN Sezione di Pisa, Pisa (Italy); Scuola Normale Superiore, Pisa (Italy); Collaboration: The MEG Collaboration

    2016-04-15

    The MEG experiment makes use of one of the world's most intense low energy muon beams, in order to search for the lepton flavour violating process μ{sup +} → e{sup +}γ. We determined the residual beam polarization at the thin stopping target, by measuring the asymmetry of the angular distribution of Michel decay positrons as a function of energy. The initial muon beam polarization at the production is predicted to be P{sub μ} = -1 by the Standard Model (SM) with massless neutrinos. We estimated our residual muon polarization to be P{sub μ} = -0.86 ± 0.02 (stat){sub -0.06}{sup +0.05} (syst) at the stopping target, which is consistent with the SM predictions when the depolarizing effects occurring during the muon production, propagation and moderation in the target are taken into account. The knowledge of beam polarization is of fundamental importance in order to model the background of our μ{sup +} → e{sup +}γ search induced by the muon radiative decay: μ{sup +} → e{sup +} anti ν{sub μ}ν{sub e}γ. (orig.)

  14. Genetic models of homosexuality: generating testable predictions

    OpenAIRE

    Gavrilets, Sergey; Rice, William R.

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality inclu...

  15. Wind farm production prediction - The Zephyr model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)

    2002-06-01

    This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next generation prediction system called Zephyr. The Zephyr system is a merging between two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)

  16. Improving Localization Accuracy: Successive Measurements Error Modeling

    Directory of Open Access Journals (Sweden)

    Najah Abu Ali

    2015-07-01

    Full Text Available Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of the positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a -order Gauss–Markov model to predict the future position of a vehicle from its past  positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter.
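    The first-order Gauss-Markov predictor discussed in this record can be sketched on a synthetic error trace; the AR(1) coefficient and noise level are invented. The lag-1 Yule-Walker relation gives the model coefficient, which then predicts the next error from the current one.

```python
import random

# Illustrative sketch: estimate the coefficient of a first-order
# Gauss-Markov (AR(1)) error model via the lag-1 Yule-Walker relation
# beta = cov(e[t], e[t+1]) / var(e), then predict the next error.

random.seed(2)
beta_true, n = 0.8, 5000
errors = [0.0]
for _ in range(n):                        # synthetic AR(1) error trace
    errors.append(beta_true * errors[-1] + random.gauss(0.0, 1.0))

mean = sum(errors) / len(errors)
c = [e - mean for e in errors]
var = sum(v * v for v in c) / len(c)
cov1 = sum(c[i] * c[i + 1] for i in range(len(c) - 1)) / (len(c) - 1)
beta_hat = cov1 / var                     # Yule-Walker estimate of beta

next_error_pred = beta_hat * errors[-1]   # first-order Gauss-Markov predictor
print(round(beta_hat, 2), next_error_pred)
```

    Recovering beta close to its true value is what lets a first-order model of the paper's kind predict the next position from the current one alone.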

  17. Prediction and measurement of thermally induced cambial tissue necrosis in tree stems

    Science.gov (United States)

    Joshua L. Jones; Brent W. Webb; Bret W. Butler; Matthew B. Dickinson; Daniel Jimenez; James Reardon; Anthony S. Bova

    2006-01-01

    A model for fire-induced heating in tree stems is linked to a recently reported model for tissue necrosis. The combined model produces cambial tissue necrosis predictions in a tree stem as a function of heating rate, heating time, tree species, and stem diameter. Model accuracy is evaluated by comparison with experimental measurements in two hardwood and two softwood...

  18. Predictive model for segmented poly(urea)

    Directory of Open Access Journals (Sweden)

    Frankl P.

    2012-08-01

    Full Text Available Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is by mechanical activation of the glass transition. In order to enable design of protective structures using this material a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM) – a mean field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data and the equation of state and constitutive model predicts response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. Mechanical response in tensile tests has also been predicted and validated.

  19. Do measures of working memory predict academic proficiency better than measures of intelligence?

    Directory of Open Access Journals (Sweden)

    KERRY LEE

    2009-12-01

    It is often asserted that working memory predicts more variance in academic proficiency than do measures of intelligence. We used data from three studies to show that the validity of this assertion is highly dependent on the method of analysis. Using the same measures of intelligence, but different measures of working memory and algebraic proficiency, we found that working memory provided better explanatory power only when the analysis was conducted at the observed-variable level. When the same data were analysed using structural equation models, only measures of intelligence had a direct effect on algebraic proficiency. From a theoretical viewpoint, our findings are consistent with the claim that working memory is a constituent component of (fluid) intelligence.

  20. Glycated Hemoglobin Measurement and Prediction of Cardiovascular Disease

    NARCIS (Netherlands)

    Di Angelantonio, Emanuele; Gao, Pei; Khan, Hassan; Butterworth, Adam S.; Wormser, David; Kaptoge, Stephen; Seshasai, Sreenivasa Rao Kondapally; Thompson, Alex; Sarwar, Nadeem; Willeit, Peter; Ridker, Paul M.; Barr, Elizabeth L. M.; Khaw, Kay-Tee; Psaty, Bruce M.; Brenner, Hermann; Balkau, Beverley; Dekker, Jacqueline M.; Lawlor, Debbie A.; Daimon, Makoto; Willeit, Johann; Njolstad, Inger; Nissinen, Aulikki; Brunner, Eric J.; Kuller, Lewis H.; Price, Jackie F.; Sundstrom, Johan; Knuiman, Matthew W.; Feskens, Edith J. M.; Verschuren, W. M. M.; Wald, Nicholas; Bakker, Stephan J. L.; Whincup, Peter H.; Ford, Ian; Goldbourt, Uri; Gomez-de-la-Camara, Agustin; Gallacher, John; Simons, Leon A.; Rosengren, Annika; Sutherland, Susan E.; Bjorkelund, Cecilia; Blazer, Dan G.; Wassertheil-Smoller, Sylvia; Onat, Altan; Ibanez, Alejandro Marin; Casiglia, Edoardo; Jukema, J. Wouter; Simpson, Lara M.; Giampaoli, Simona; Nordestgaard, Borge G.; Selmer, Randi; Wennberg, Patrik; Kauhanen, Jussi; Salonen, Jukka T.; Dankner, Rachel; Barrett-Connor, Elizabeth; Kavousi, Maryam; Gudnason, Vilmundur; Evans, Denis; Wallace, Robert B.; Cushman, Mary; D'Agostino, Ralph B.; Umans, Jason G.; Kiyohara, Yutaka; Nakagawa, Hidaeki; Sato, Shinichi; Gillum, Richard F.; Folsom, Aaron R.; van der Schouw, Yvonne T.; Moons, Karel G.; Griffin, Simon J.; Sattar, Naveed; Wareham, Nicholas J.; Selvin, Elizabeth; Thompson, Simon G.; Danesh, John

    2014-01-01

    IMPORTANCE The value of measuring levels of glycated hemoglobin (HbA(1c)) for the prediction of first cardiovascular events is uncertain. OBJECTIVE To determine whether adding information on HbA(1c) values to conventional cardiovascular risk factors is associated with improvement in prediction of cardiovascular disease risk...

  1. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    In recent decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models under eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have great homogeneity in making predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
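    The ARCH-family forecasts being compared can be illustrated with the most widely used member of the family, GARCH(1,1). The sketch below is a minimal illustration; the parameter values are invented for the example and are not fitted to the Bovespa or Dow Jones series.

    ```python
    # One-step-ahead conditional variance forecast from a GARCH(1,1) recursion:
    #   sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t]
    # Parameters are illustrative, not estimates for any real index.
    def garch11_forecast(returns, omega=1e-5, alpha=0.08, beta=0.90):
        sigma2 = omega / (1.0 - alpha - beta)  # start at the unconditional variance
        for r in returns:
            sigma2 = omega + alpha * r * r + beta * sigma2
        return sigma2  # variance forecast for the next period

    rets = [0.01, -0.02, 0.015, -0.005]  # toy log-returns
    print(garch11_forecast(rets))
    ```

    Fitted models differ only in how omega, alpha and beta are estimated; the forecasting recursion itself is the part the Model Confidence Set procedure compares across specifications.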

  2. Predictive QSAR modeling of phosphodiesterase 4 inhibitors.

    Science.gov (United States)

    Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr

    2012-02-01

    A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving a Q² = 0.66-0.78 for regression models and total accuracies Ac=0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimation of IC₅₀ of new drug-like candidates at early stages of drug development. Copyright © 2011 Elsevier Inc. All rights reserved.
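    The leave-one-out cross-validated Q² used above to judge the regression models can be sketched generically. The snippet uses a toy one-dimensional data set and a 1-nearest-neighbour predictor purely as a stand-in for the paper's random forests and neural networks.

    ```python
    # Leave-one-out cross-validation Q^2 = 1 - PRESS/SS for a toy
    # 1-nearest-neighbour regressor; data are invented, not the PDE-4 set.
    def loo_q2(xs, ys):
        press, ss = 0.0, 0.0
        mean_y = sum(ys) / len(ys)
        for i in range(len(xs)):
            # predict sample i from its nearest neighbour among the rest
            j = min((k for k in range(len(xs)) if k != i),
                    key=lambda k: abs(xs[k] - xs[i]))
            press += (ys[i] - ys[j]) ** 2
            ss += (ys[i] - mean_y) ** 2
        return 1.0 - press / ss

    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [0.1, 1.2, 1.9, 3.1, 4.0]
    print(loo_q2(xs, ys))
    ```

    Q² close to 1 indicates that each held-out sample is well predicted from the remaining ones; the paper's 0.66–0.78 range is computed the same way, just with far stronger learners.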

  3. Measuring reflective-band imaging systems for performance prediction

    Science.gov (United States)

    Slonopas, Andre; Preece, Bradley L.; Haefner, David P.

    2017-05-01

    An objective performance measure of reflective-band imaging systems is required in order to provide the warfighter with the right technology for a specific task. Various methods to measure and model performance in the visible (Vis) spectral region have been proposed in the literature. This correspondence shows the influence of spectral-region averaging on the monochromatic modulation transfer function (MTF) and demonstrates that the illumination source plays a crucial role in accurate predictive analysis of system performance. For accurate analysis, the illumination sources need to be carefully matched to the atmospheric conditions. This work also shows the possibility of using an LED configuration in system performance analysis; such configurations need rigorous calibration in order to become a valuable asset in system characterization.

  4. Predicting individual variation in language from infant speech perception measures

    NARCIS (Netherlands)

    A. Christia; A. Seidl; C. Junge; M. Soderstrom; P. Hagoort

    2013-01-01

    There are increasing reports that individual variation in behavioral and neurophysiological measures of infant speech processing predicts later language outcomes, and specifically concurrent or subsequent vocabulary size. If such findings hold up under scrutiny, they could both illuminate theoretical...

  5. Predicting Individual Variation in Language From Infant Speech Perception Measures

    NARCIS (Netherlands)

    Cristia, A.; Seidl, A.; Junge, C.M.M.; Soderstrom, M.; Hagoort, P.

    2014-01-01

    There are increasing reports that individual variation in behavioral and neurophysiological measures of infant speech processing predicts later language outcomes, and specifically concurrent or subsequent vocabulary size. If such findings hold up under scrutiny, they could both illuminate theoretical...

  6. Prediction of particle type given measurements of particle location

    CERN Document Server

    Johnson, Robert W

    2012-01-01

    The MaxEnt approach to the prediction of particle type given measurements of particle location is explored. Two types of particle are considered, and locations are expressed in terms of a single spatial coordinate. Several cases corresponding to different states of prior knowledge are evaluated. Predictions are calculated using the expectation value, which is the average of the observable weighted by the evidence measure over the parameter manifold. How the methodology extends to more general situations is discussed.

  7. Modelling of physical properties - databases, uncertainties and predictive power

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Physical and thermodynamic properties, in the form of raw data or estimated values for pure compounds and mixtures, are important prerequisites for performing tasks such as process design, simulation and optimization; computer-aided molecular/mixture (product) design; and product-process analysis. While use of experimentally measured values of the needed properties is desirable in these tasks, the experimental data of the properties of interest may not be available or may not be measurable in many cases. Therefore, property models that are reliable, predictive and easy to use are necessary. However, which models should be used to provide reliable estimates of the required properties? How much measured data is necessary to regress the model parameters? How can predictive capability be ensured in the developed models? Also, it is necessary to know the associated uncertainties...

  8. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-02-01

    Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) together with micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics in examining the robustness of the predictive power of these factors.
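    The receiver-operating-characteristic analysis mentioned in the findings reduces to computing an AUC from model scores. A minimal sketch via the rank (Mann–Whitney) identity, on invented scores rather than the Taiwan credit sample:

    ```python
    # AUC as the probability that a randomly chosen positive case is scored
    # above a randomly chosen negative case (ties count half).
    # Scores and labels are illustrative, not the article's data.
    def auc(scores, labels):
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    scores = [0.9, 0.8, 0.7, 0.4, 0.3]  # model default probabilities
    labels = [1,   1,   0,   1,   0]    # 1 = default observed
    print(auc(scores, labels))
    ```

    An AUC of 0.5 is chance-level discrimination; comparing AUCs across the three models is one way the robustness check described above can be carried out.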

  9. Calibrated predictions for multivariate competing risks models.

    Science.gov (United States)

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.
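    The calibration point above, that naively treating competing risks as censoring overestimates the expected number of events, can be seen in closed form with constant hazards. The rates below are invented for illustration.

    ```python
    import math

    # With constant cause-specific hazards l1 (disease) and l2 (competing
    # event), the true cumulative incidence of the disease by time t is
    #   CIF1(t) = l1 / (l1 + l2) * (1 - exp(-(l1 + l2) * t)),
    # while naively censoring competing events gives 1 - exp(-l1 * t),
    # which is always larger. Hazard rates here are illustrative.
    def true_cif(l1, l2, t):
        return l1 / (l1 + l2) * (1.0 - math.exp(-(l1 + l2) * t))

    def naive_risk(l1, t):
        return 1.0 - math.exp(-l1 * t)

    l1, l2, t = 0.02, 0.05, 10.0
    print(true_cif(l1, l2, t), naive_risk(l1, t))  # naive estimate is larger
    ```

    The gap between the two quantities is exactly the miscalibration the simulation study demonstrates; frailty-based sharing of family history does not change this arithmetic.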

  10. Modelling language evolution: Examples and predictions.

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.
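    Equation-based language-competition dynamics of the kind surveyed are exemplified by the well-known Abrams–Strogatz model; whether this is the specific equation the authors use is an assumption, so treat the sketch as a generic illustration.

    ```python
    # Euler integration of the Abrams-Strogatz language-competition equation
    #   dx/dt = (1 - x) * s * x**a - x * (1 - s) * (1 - x)**a
    # where x is the fraction speaking language A, s its relative prestige,
    # and a the volatility exponent (1.31 is the commonly quoted fit).
    # This classic model stands in for the survey's equation-based model.
    def abrams_strogatz(x0, s=0.6, a=1.31, dt=0.01, steps=5000):
        x = x0
        for _ in range(steps):
            x += dt * ((1 - x) * s * x**a - x * (1 - s) * (1 - x)**a)
        return x

    print(abrams_strogatz(0.6))  # higher-prestige language approaches fixation
    ```

    The qualitative prediction, that the lower-prestige language dies out unless intervention changes s, is the kind of model output the survey proposes to test against empirical data.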

  11. Modelling language evolution: Examples and predictions

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  12. Stochastic magnetic measurement model for relative position and orientation estimation

    NARCIS (Netherlands)

    Schepers, H.M.; Veltink, P.H.

    2010-01-01

    This study presents a stochastic magnetic measurement model that can be used to estimate relative position and orientation. The model predicts the magnetic field generated by a single source coil at the location of the sensor. The model was used in a fusion filter that predicts the change of position...

  13. Stochastic magnetic measurement model for relative position and orientation estimation

    NARCIS (Netherlands)

    Schepers, H. Martin; Veltink, Petrus H.

    2010-01-01

    This study presents a stochastic magnetic measurement model that can be used to estimate relative position and orientation. The model predicts the magnetic field generated by a single source coil at the location of the sensor. The model was used in a fusion filter that predicts the change of position...

  14. A prediction model for ocular damage - Experimental validation.

    Science.gov (United States)

    Heussner, Nico; Vagos, Márcia; Spitzer, Martin S; Stork, Wilhelm

    2015-08-01

    With the increasing number of laser applications in medicine and technology, accidental as well as intentional exposure of the human eye to laser sources has become a major concern. Therefore, a prediction model for ocular damage (PMOD) is presented within this work and validated for long-term exposure. The model combines a ray-tracing model with a thermodynamic model of the human eye and determines thermal damage via the Arrhenius integral. The model is based on our earlier work and is here validated against temperature measurements taken with porcine eye samples. For this validation, three different powers were used: 50 mW, 100 mW and 200 mW, with a spot size of 1.9 mm. The measurements were taken with two different sensing systems, an infrared camera and a fibre-optic probe placed within the tissue. Temperatures were measured for up to 60 s and then compared against simulations. The measured temperatures were found to be in good agreement with the values predicted by the PMOD. To the best of our knowledge, this is the first model validated for both short-term and long-term irradiation in terms of temperature, demonstrating that temperatures can be accurately predicted within the thermal damage regime. Copyright © 2015 Elsevier Ltd. All rights reserved.
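    The Arrhenius damage integral at the core of such models can be sketched numerically. The frequency factor and activation energy below are classic Henriques-type soft-tissue values, used here as illustrative stand-ins; they are not the coefficients of the PMOD.

    ```python
    import math

    # Arrhenius damage integral Omega = ∫ A * exp(-Ea / (R * T(t))) dt over a
    # temperature history T(t) in kelvin, discretised with step dt in seconds.
    # A and Ea are the widely quoted Henriques-type values for soft tissue,
    # used for illustration only; Omega >= 1 conventionally marks
    # irreversible thermal damage.
    def arrhenius_omega(temps_k, dt, A=3.1e98, Ea=6.28e5, R=8.314):
        return sum(A * math.exp(-Ea / (R * T)) * dt for T in temps_k)

    # 60 s at a constant 330 K (about 57 degrees C)
    print(arrhenius_omega([330.0] * 60, dt=1.0))  # exceeds 1: damage predicted
    ```

    In a full model the temperature history comes from the thermal simulation rather than being held constant, but the damage criterion is evaluated exactly this way.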

  15. Global Solar Dynamo Models: Simulations and Predictions

    Indian Academy of Sciences (India)

    Mausumi Dikpati; Peter A. Gilman

    2008-03-01

    Flux-transport type solar dynamos have achieved considerable success in correctly simulating many solar cycle features, and are now being used for prediction of solar cycle timing and amplitude. We first define flux-transport dynamos and demonstrate how they work. The essential added ingredient in this class of models is meridional circulation, which governs the dynamo period and also plays a crucial role in determining the Sun’s memory about its past magnetic fields. We show that flux-transport dynamo models can explain many key features of solar cycles. Then we show that a predictive tool can be built from this class of dynamo that can be used to predict mean solar cycle features by assimilating magnetic field data from previous cycles.

  16. Predicting Category Intuitiveness with the Rational Model, the Simplicity Model, and the Generalized Context Model

    Science.gov (United States)

    Pothos, Emmanuel M.; Bailey, Todd M.

    2009-01-01

    Naive observers typically perceive some groupings for a set of stimuli as more intuitive than others. The problem of predicting category intuitiveness has been historically considered the remit of models of unsupervised categorization. In contrast, this article develops a measure of category intuitiveness from one of the most widely supported…

  17. Model Predictive Control of Sewer Networks

    Science.gov (United States)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.

    2017-01-01

    Developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world’s population and changing climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for efficient performance of wastewater treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  18. Reflectance spectroscopy of gold nanoshells: computational predictions and experimental measurements

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Alex W. H.; Lewinski, Nastassja A.; Lee, Min-Ho; Drezek, Rebekah A. [Rice University, Department of Bioengineering (United States)], E-mail: drezek@rice.edu

    2006-10-15

    Gold nanoshells are concentric spherical constructs that possess highly desirable optical responses in the near infrared. Gold nanoshells consist of a thin outer gold shell and a silica core and can be used for both diagnostic and therapeutic purposes by tuning the optical response through changing the core-shell ratio as well as the overall size. Although optical properties of gold nanoshells have already been well documented, the reflectance characteristics are not well understood and have not yet been elucidated by experimental measurements. Yet, in order to use gold nanoshells as an optical contrast agent for scattering-based optical methods such as reflectance spectroscopy, it is critical to characterize the reflectance behavior. With this in mind, we used a fiber-optic-based spectrometer to measure diffuse reflectance of gold nanoshell suspensions from 500 nm to 900 nm. Experimental results show that gold nanoshells cause a significant increase in the measured reflectance. Spectral features associated with scattering from large angles (≈180°) were observed at low nanoshell concentrations. Monte Carlo modeling of gold nanoshell reflectance demonstrated the efficacy of using such methods to predict diffuse reflectance. Our studies suggest that gold nanoshells are an excellent candidate as optical contrast agents and that Monte Carlo methods are a useful tool for optimizing nanoshells best suited for scattering-based optical methods.

  19. Reflectance spectroscopy of gold nanoshells: computational predictions and experimental measurements

    Science.gov (United States)

    Lin, Alex W. H.; Lewinski, Nastassja A.; Lee, Min-Ho; Drezek, Rebekah A.

    2006-10-01

    Gold nanoshells are concentric spherical constructs that possess highly desirable optical responses in the near infrared. Gold nanoshells consist of a thin outer gold shell and a silica core and can be used for both diagnostic and therapeutic purposes by tuning the optical response through changing the core-shell ratio as well as the overall size. Although optical properties of gold nanoshells have already been well documented, the reflectance characteristics are not well understood and have not yet been elucidated by experimental measurements. Yet, in order to use gold nanoshells as an optical contrast agent for scattering-based optical methods such as reflectance spectroscopy, it is critical to characterize the reflectance behavior. With this in mind, we used a fiber-optic-based spectrometer to measure diffuse reflectance of gold nanoshell suspensions from 500 nm to 900 nm. Experimental results show that gold nanoshells cause a significant increase in the measured reflectance. Spectral features associated with scattering from large angles (≈180°) were observed at low nanoshell concentrations. Monte Carlo modeling of gold nanoshell reflectance demonstrated the efficacy of using such methods to predict diffuse reflectance. Our studies suggest that gold nanoshells are an excellent candidate as optical contrast agents and that Monte Carlo methods are a useful tool for optimizing nanoshells best suited for scattering-based optical methods.
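    Monte Carlo prediction of diffuse reflectance can be sketched in miniature. The model below is deliberately crude: a semi-infinite medium, isotropic scattering, a matched boundary, and invented optical coefficients, so it illustrates the method rather than reproducing the nanoshell simulations (which would need a Mie phase function and the measured suspension properties).

    ```python
    import math
    import random

    # Minimal photon-weight Monte Carlo for diffuse reflectance from a
    # semi-infinite turbid medium: isotropic scattering, matched boundary,
    # illustrative coefficients (per mm). No roulette: photons whose weight
    # falls below the cutoff are simply dropped as absorbed.
    def diffuse_reflectance(mu_s=10.0, mu_a=0.1, n_photons=5000, seed=1):
        rng = random.Random(seed)
        mu_t = mu_s + mu_a
        albedo = mu_s / mu_t
        reflected = 0.0
        for _ in range(n_photons):
            z, uz, w = 0.0, 1.0, 1.0          # depth, direction cosine, weight
            while w > 1e-3:
                z += -math.log(1.0 - rng.random()) / mu_t * uz
                if z <= 0.0:                  # escaped through the surface
                    reflected += w
                    break
                w *= albedo                   # deposit part of the weight
                uz = rng.uniform(-1.0, 1.0)   # isotropic scattering direction
        return reflected / n_photons

    print(diffuse_reflectance())  # estimate in [0, 1]
    ```

    Increasing the absorption coefficient lowers the estimate, which is the basic mechanism by which such simulations predict contrast from added scatterers.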

  20. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Giebel, G.; Landberg, L. [Risoe National Lab., Roskilde (Denmark); Madsen, H.; Nielsen, H.A. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better possibility to schedule fossil fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data is available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: Extended Kalman Filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
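    The adaptive MOS correction can be sketched with recursive least squares and a forgetting factor. The linear bias/gain correction model and the synthetic "NWP" numbers below are assumptions for illustration; they are not the estimators or data of the paper.

    ```python
    # Recursive least squares with forgetting factor lam, fitting a MOS-style
    # correction  corrected = a + b * predicted  to local measurements.
    # All data below are synthetic.
    def rls_mos(pred, obs, lam=0.98):
        theta = [0.0, 1.0]                       # parameters [a, b]
        P = [[1000.0, 0.0], [0.0, 1000.0]]       # large initial covariance
        for p, y in zip(pred, obs):
            x = [1.0, p]
            Px = [P[0][0] * x[0] + P[0][1] * x[1],
                  P[1][0] * x[0] + P[1][1] * x[1]]
            denom = lam + x[0] * Px[0] + x[1] * Px[1]
            k = [Px[0] / denom, Px[1] / denom]   # Kalman-style gain
            err = y - (theta[0] * x[0] + theta[1] * x[1])
            theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
            P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(2)]
                 for i in range(2)]
        return theta

    # synthetic systematic NWP error: measured wind = 1.5 + 0.8 * predicted
    pred = [4.0, 6.0, 8.0, 5.0, 7.0, 9.0, 6.5, 5.5]
    obs = [1.5 + 0.8 * p for p in pred]
    a, b = rls_mos(pred, obs)
    print(a, b)  # approaches the true a = 1.5, b = 0.8
    ```

    The forgetting factor discounts old observations geometrically, which is what lets the correction track the time-dependent deviations described above.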

  1. DKIST Polarization Modeling and Performance Predictions

    Science.gov (United States)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time-dependent optical configurations and a substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime-sky-based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS to within the few-percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6-month HiVIS daytime-sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact on the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration...

  2. Modelling Chemical Reasoning to Predict Reactions

    OpenAIRE

    Segler, Marwin H. S.; Waller, Mark P.

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms...

  3. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert; Knox, James

    2016-01-01

    Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  4. Raman Model Predicting Hardness of Covalent Crystals

    OpenAIRE

    Zhou, Xiang-Feng; Qian, Quang-Rui; Sun, Jian; Tian, Yongjun; Wang, Hui-Tian

    2009-01-01

    Based on the fact that both hardness and the vibrational Raman spectrum depend on the intrinsic properties of chemical bonds, we propose a new theoretical model for predicting the hardness of a covalent crystal. The quantitative relationship between hardness and vibrational Raman frequencies deduced from the typical zincblende covalent crystals is validated to be also applicable to complex multicomponent crystals. This model enables us to nondestructively and indirectly characterize the hardness of...

  5. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given at the 30th meeting of the Fusarium Working Group. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts...

  6. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    ...steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  7. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given at the 30th meeting of the Fusarium Working Group. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts...

  8. Prediction modelling for population conviction data

    NARCIS (Netherlands)

    Tollenaar, N.

    2017-01-01

    In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk taxation scale based on these data is discussed. When false positives are weighted as severely as false negatives, 70% can be classified correctly.

  9. A Predictive Model for MSSW Student Success

    Science.gov (United States)

    Napier, Angela Michele

    2011-01-01

    This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…

  10. Predictability of extreme values in geophysical models

    NARCIS (Netherlands)

    Sterk, A.E.; Holland, M.P.; Rabassa, P.; Broer, H.W.; Vitolo, R.

    2012-01-01

    Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models...

  11. A revised prediction model for natural conception

    NARCIS (Netherlands)

    Bensdorp, A.J.; Steeg, J.W. van der; Steures, P.; Habbema, J.D.; Hompes, P.G.; Bossuyt, P.M.; Veen, F. van der; Mol, B.W.; Eijkemans, M.J.; Kremer, J.A.M.; et al.,

    2017-01-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis...

  12. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
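The price-coordination mechanism this abstract describes can be sketched in a few lines. The local costs, the single coupling constraint, and all numbers below are illustrative assumptions of mine, not the chapter's actual formulation:

```python
# Dual decomposition sketch: two subsystems with local quadratic costs share
# a coupling constraint u1 + u2 <= U. A central coordinator adjusts a price
# (Lagrange multiplier) on the shared resource; each subsystem then
# optimizes purely locally against that price.

def local_solve(a, price):
    # Subsystem minimizes (u - a)^2 + price * u  =>  u = a - price / 2
    return max(0.0, a - price / 2.0)

def coordinate(a1, a2, U, step=0.5, iters=100):
    price = 0.0
    for _ in range(iters):
        u1, u2 = local_solve(a1, price), local_solve(a2, price)
        # Price ascent on the coupling-constraint violation
        price = max(0.0, price + step * (u1 + u2 - U))
    return local_solve(a1, price), local_solve(a2, price), price

u1, u2, price = coordinate(a1=4.0, a2=3.0, U=5.0)
print(round(u1, 3), round(u2, 3), round(price, 3))  # → 3.0 2.0 2.0
```

At the converged price the local decisions satisfy the coupling constraint exactly, which is the coordination property the chapter exploits.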

  13. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given at the 30th meeting of the Werkgroep Fusarium (Fusarium Working Group). The topics are: Predictive Modelling of Mycotoxins in Cereals.; Microbial degradation of DON.; Exposure to green leaf volatiles primes wheat against FHB but boosts produ

  14. Leptogenesis in minimal predictive seesaw models

    CERN Document Server

    Björkeroth, Fredrik; Varzielas, Ivo de Medeiros; King, Stephen F

    2015-01-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\

  15. Precise methods for conducted EMI modeling,analysis,and prediction

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Focusing on the state-of-the-art conducted EMI prediction, this paper presents a noise source lumped circuit modeling and identification method, an EMI modeling method based on multiple slope approximation of switching transitions, and double Fourier integral method modeling PWM conversion units to achieve an accurate modeling of EMI noise source. Meanwhile, a new sensitivity analysis method, a general coupling model for steel ground loops, and a partial element equivalent circuit method are proposed to identify and characterize conducted EMI coupling paths. The EMI noise and propagation modeling provide an accurate prediction of conducted EMI in the entire frequency range (0-10 MHz) with good practicability and generality. Finally a new measurement approach is presented to identify the surface current of large dimensional metal shell. The proposed analytical modeling methodology is verified by experimental results.

  16. Precise methods for conducted EMI modeling,analysis, and prediction

    Institute of Scientific and Technical Information of China (English)

    MA WeiMing; ZHAO ZhiHua; MENG Jin; PAN QiJun; ZHANG Lei

    2008-01-01

    Focusing on the state-of-the-art conducted EMI prediction, this paper presents a noise source lumped circuit modeling and identification method, an EMI modeling method based on multiple slope approximation of switching transitions, and double Fourier integral method modeling PWM conversion units to achieve an accurate modeling of EMI noise source. Meanwhile, a new sensitivity analysis method, a general coupling model for steel ground loops, and a partial element equivalent circuit method are proposed to identify and characterize conducted EMI coupling paths. The EMI noise and propagation modeling provide an accurate prediction of conducted EMI in the entire frequency range (0-10 MHz) with good practicability and generality. Finally a new measurement approach is presented to identify the surface current of large dimensional metal shell. The proposed analytical modeling methodology is verified by experimental results.

  17. Specialized Language Models using Dialogue Predictions

    CERN Document Server

    Popovici, C; Popovici, Cosmin; Baggia, Paolo

    1996-01-01

    This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases the performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods to increase performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll...

  18. Beware of R2: simple, unambiguous assessment of the prediction accuracy of QSAR and QSPR models

    OpenAIRE

    Alexander, D. L. J.; Tropsha, A; Winkler, David A.

    2015-01-01

    The statistical metrics used to characterize the external predictivity of a model, i.e., how well it predicts the properties of an independent test set, have proliferated over the past decade. This paper clarifies some apparent confusion over the use of the coefficient of determination, R2, as a measure of model fit and predictive power in QSAR and QSPR modelling.
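The distinction the authors draw can be made concrete with a toy example (the function names and data are mine, not from the paper): a model whose predictions are perfectly correlated with the test-set observations but systematically biased earns a perfect squared-correlation "R2", while its coefficient of determination on the same test set is strongly negative:

```python
# Two quantities that are both called "R2" in the QSAR/QSPR literature can
# disagree wildly: squared Pearson correlation ignores bias, while the
# coefficient of determination penalizes it.
import statistics

def pearson_r2(obs, pred):
    mo, mp = statistics.fmean(obs), statistics.fmean(pred)
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = sum((o - mo) ** 2 for o in obs) ** 0.5
    sp = sum((p - mp) ** 2 for p in pred) ** 0.5
    return (cov / (so * sp)) ** 2

def predictive_r2(obs, pred):
    # 1 - SS_res / SS_tot on the independent test set
    mo = statistics.fmean(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mo) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
biased = [o + 10.0 for o in obs]             # perfectly correlated, badly biased
print(round(pearson_r2(obs, biased), 6))     # → 1.0
print(round(predictive_r2(obs, biased), 6))  # → -49.0 (far worse than the mean)
```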

  19. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Full Text Available Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees was entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow up] provided for examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). Cariogram is the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.

  20. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  1. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...
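A minimal sketch of a regularized l2 FIR predictive controller of the kind this abstract describes might look as follows; the impulse response, horizon, regularization weight, and the omission of the input and input-rate constraints are all simplifying assumptions of mine, not the paper's design:

```python
# The plant is a finite impulse response model y_k = sum_i h[i] * u[k-1-i].
# We choose the future input sequence minimizing tracking error plus an
# l2 input penalty, by solving the (unconstrained) normal equations.

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fir_mpc(h, setpoint, horizon, lam):
    # Prediction matrix: y = Phi u, Phi[k][j] = h[k-j] (lower triangular).
    Phi = [[h[k - j] if 0 <= k - j < len(h) else 0.0 for j in range(horizon)]
           for k in range(horizon)]
    # Regularized normal equations: (Phi^T Phi + lam*I) u = Phi^T r
    A = [[sum(Phi[k][i] * Phi[k][j] for k in range(horizon))
          + (lam if i == j else 0.0) for j in range(horizon)]
         for i in range(horizon)]
    b = [sum(Phi[k][i] * setpoint for k in range(horizon)) for i in range(horizon)]
    return solve(A, b)

u = fir_mpc(h=[0.5, 0.3, 0.2], setpoint=1.0, horizon=3, lam=0.1)
# u[0] is applied to the plant; the optimization repeats at the next sample
# (receding horizon), with feedback entering through the disturbance filter.
```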

  2. Signature prediction for model-based automatic target recognition

    Science.gov (United States)

    Keydel, Eric R.; Lee, Shung W.

    1996-06-01

    The moving and stationary target recognition (MSTAR) model-based automatic target recognition (ATR) system utilizes a paradigm which matches features extracted from an unknown SAR target signature against predictions of those features generated from models of the sensing process and candidate target geometries. The candidate target geometry yielding the best match between predicted and extracted features defines the identity of the unknown target. MSTAR will extend the current model-based ATR state-of-the-art in a number of significant directions. These include: use of Bayesian techniques for evidence accrual, reasoning over target subparts, coarse-to-fine hypothesis search strategies, and explicit reasoning over target articulation, configuration, occlusion, and lay-over. These advances also imply significant technical challenges, particularly for the MSTAR feature prediction module (MPM). In addition to accurate electromagnetics, the MPM must provide traceback between input target geometry and output features, on-line target geometry manipulation, target subpart feature prediction, explicit models for local scene effects, and generation of sensitivity and uncertainty measures for the predicted features. This paper describes the MPM design which is being developed to satisfy these requirements. The overall module structure is presented, along with the specific design elements focused on MSTAR requirements. Particular attention is paid to design elements that enable on-line prediction of features within the time constraints mandated by model-driven ATR. Finally, the current status, development schedule, and further extensions in the module design are described.

  3. Evaluation of burst pressure prediction models for line pipes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xian-Kui, E-mail: zhux@battelle.org [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States); Leis, Brian N. [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States)

    2012-01-15

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487-492). It is found that these models can be categorized into either a Tresca-family or a von Mises-family of solutions, except for those due to Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions including Barlow, Maximum shear stress, Turner, and the ASME boiler code provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. - Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.
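The two solution families identified above can be illustrated for an end-capped, thin-walled, defect-free pipe. The formulas below are the standard Tresca (Barlow) strength solution and its von Mises counterpart; the pipe dimensions and strength are illustrative numbers, not values from the test database:

```python
# Tresca-family vs. von Mises-family burst pressure for a thin-walled pipe.
import math

def burst_pressure_tresca(uts, t, D):
    # Tresca / Barlow strength solution: P = 2 * sigma_uts * t / D
    return 2.0 * uts * t / D

def burst_pressure_von_mises(uts, t, D):
    # von Mises counterpart, a factor 2/sqrt(3) above the Tresca solution
    return (2.0 / math.sqrt(3.0)) * burst_pressure_tresca(uts, t, D)

# Illustrative line pipe: uts = 531 MPa, t = 10 mm, D = 500 mm
p_tresca = burst_pressure_tresca(531.0, 10.0, 500.0)    # 21.24 MPa
p_mises = burst_pressure_von_mises(531.0, 10.0, 500.0)  # ≈ 24.53 MPa
```

The two families bracket the measured burst pressures; strain-hardening corrections such as the Zhu-Leis solution fall between them.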

  4. The prediction of transducer element performance from in air measurements

    Science.gov (United States)

    Schafer, M. E.

    1982-01-01

    A technique has been developed which accurately predicts the performance of underwater acoustic arrays prior to array construction. The technique is based upon the measurement of lumped-parameter equivalent circuit values for each element in the array, and is accurate in predicting the array transmit, receive and beam pattern response. The measurement procedure determines the shunt electrical and motional circuit elements from electrical immittance measurements. The electromechanical transformation ratio is derived from in-air measurements of the radiating face velocity and the input current to the transducer at resonance. The equivalent circuit values of a group of Tonpilz-type transducers were measured, and the self and mutual interaction acoustic loadings for a specific array geometry were calculated. The response of the elements was then predicted for water-loaded array conditions. Based on the predictions, a selection scheme was developed which minimized the effects of inter-element variability on array performance. The measured transmitting, receiving and beam pattern characteristics of a test array, built using the selected elements, were compared to predictions made before the array was built. The results indicated that the technique is accurate over a wide frequency range.

  5. Cathode Strip Chambers (CSC) Sag Measurements and Predictions

    CERN Document Server

    Kriesel, K; Loveless, D

    1997-01-01

    We describe the measurements of sag on P1A 2 layer prototype Cathode Strip Chamber and compare the results with calculated values. Using this information we predict the sag of P1 6-layer chamber with the present design for the aluminium frame, and compare this value to measured sag.

  6. ENSO Prediction using Vector Autoregressive Models

    Science.gov (United States)

    Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.

    2013-12-01

    A recent comparison (Barnston et al., 2012, BAMS) shows the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are a VAR(1) method, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e. 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows the model advancing one month at a time to perform at least as well for a 6 month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993 J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150 year cross validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
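The lagged least-squares idea behind a VAR(L) model can be sketched in the scalar case, an AR(2), i.e. one variable with L = 2 levels; the synthetic series and coefficients below are illustrative, not ENSO data:

```python
# Fit x_t = a1*x_{t-1} + a2*x_{t-2} by least squares over the lagged
# predictors (the vector case stacks all grid-point SSTs into the state).

def fit_ar2(x):
    rows = range(2, len(x))
    s11 = sum(x[t - 1] * x[t - 1] for t in rows)
    s12 = sum(x[t - 1] * x[t - 2] for t in rows)
    s22 = sum(x[t - 2] * x[t - 2] for t in rows)
    b1 = sum(x[t - 1] * x[t] for t in rows)
    b2 = sum(x[t - 2] * x[t] for t in rows)
    det = s11 * s22 - s12 * s12          # Cramer's rule for the 2x2 system
    return (b1 * s22 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det

# Noiseless synthetic AR(2) series x_t = 0.6 x_{t-1} + 0.3 x_{t-2}
x = [1.0, 0.5]
for _ in range(48):
    x.append(0.6 * x[-1] + 0.3 * x[-2])

a1, a2 = fit_ar2(x)                      # recovers (0.6, 0.3)
forecast = a1 * x[-1] + a2 * x[-2]       # one-step-ahead prediction
```

Advancing one month at a time, the fitted model is iterated to produce multi-month forecasts, which is the mode of operation the abstract compares against direct 6-month-ahead fits.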

  7. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  8. Gas explosion prediction using CFD models

    Energy Technology Data Exchange (ETDEWEB)

    Niemann-Delius, C.; Okafor, E. [RWTH Aachen Univ. (Germany); Buhrow, C. [TU Bergakademie Freiberg Univ. (Germany)

    2006-07-15

    A number of CFD models are currently available to model gaseous explosions in complex geometries. Some of these tools allow the representation of complex environments within hydrocarbon production plants. In certain explosion scenarios, a correction is usually made for the presence of buildings and other complexities by using crude approximations to obtain realistic estimates of explosion behaviour as can be found when predicting the strength of blast waves resulting from initial explosions. With the advance of computational technology, and greater availability of computing power, computational fluid dynamics (CFD) tools are becoming increasingly available for solving such a wide range of explosion problems. A CFD-based explosion code - FLACS can, for instance, be confidently used to understand the impact of blast overpressures in a plant environment consisting of obstacles such as buildings, structures, and pipes. With its porosity concept representing geometry details smaller than the grid, FLACS can represent geometry well, even when using coarse grid resolutions. The performance of FLACS has been evaluated using a wide range of field data. In the present paper, the concept of computational fluid dynamics (CFD) and its application to gas explosion prediction is presented. Furthermore, the predictive capabilities of CFD-based gaseous explosion simulators are demonstrated using FLACS. Details about the FLACS-code, some extensions made to FLACS, model validation exercises, application, and some results from blast load prediction within an industrial facility are presented. (orig.)

  9. A Study On Distributed Model Predictive Consensus

    CERN Document Server

    Keviczky, Tamas

    2008-01-01

    We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.

  10. The Measurement and Prediction of Rotordynamic Forces for Labyrinth Seals

    Science.gov (United States)

    1988-03-01

    The Measurement and Prediction of Rotordynamic Forces for Labyrinth Seals, prepared by D. W. Childs, D. L. …; measurements of rotordynamic … (the remainder of the scanned abstract is not recoverable).

  11. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    R. G. SILVA

    1999-03-01

    Full Text Available A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, along with the algebraic model equations, are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show satisfactory performance of this algorithm.

  12. Performance model to predict overall defect density

    Directory of Open Access Journals (Sweden)

    J Venkatesh

    2012-08-01

    Full Text Available Management by metrics is the expectation from IT service providers to stay as a differentiator. Given a project, its associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases, the actions taken are re-active; it is too late in the life cycle. Root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt post implementation. How do we pro-actively predict defect metrics and have a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.

  13. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress of prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm, combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory providing a calculus, which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  14. Pressure prediction model for compression garment design.

    Science.gov (United States)

    Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q

    2010-01-01

    Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of fabric, applied strain (as a function of reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burning scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garments' manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by the compression garments before fabrication; and 2) to propose more design procedures in clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
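A hedged reading of the Laplace's-law relationship described above can be sketched as follows; the symbols, units, and the exact mapping from reduction factor to strain are my assumptions, not the authors' published equation:

```python
# Laplace's law for a cylindrical body segment: interface pressure P = T / r,
# where T is the hoop tension per unit garment length (driven here by the
# fabric's Young's modulus, the applied strain, and its cross-section) and
# r is the body radius inferred from the circumference.
import math

def garment_pressure(E, strain, area, width, circumference):
    """E in Pa, fabric cross-sectional area in m^2, strip width and body
    circumference in m; returns interface pressure in Pa."""
    radius = circumference / (2.0 * math.pi)  # body segment as a cylinder
    tension = E * strain * area / width       # N per metre of garment length
    return tension / radius                   # Laplace's law: P = T / r

# Illustrative: a 20% reduction factor gives strain = 0.2 / (1 - 0.2) = 0.25
strain = 0.2 / (1 - 0.2)
p = garment_pressure(E=2.0e6, strain=strain, area=5e-7, width=0.05,
                     circumference=0.30)
print(round(p, 1))  # → 104.7 (Pa, for these illustrative inputs)
```

Because the Young's modulus varies with bias angle and layering, as the measurements show, E itself must come from the tensile test matching the intended cut and construction.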

  15. Statistical assessment of predictive modeling uncertainty

    Science.gov (United States)

    Barzaghi, Riccardo; Marotta, Anna Maria

    2017-04-01

    When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities while taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values that have better statistical significance and might help a sharper identification of the best-fitting geophysical models.
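In the simplest, diagonal-covariance case, the covariance-aware χ2 test described above reduces to weighting each residual by the sum of the data and model variances; the numbers below are illustrative, not the Mediterranean strain results:

```python
# chi^2 = r^T (C_data + C_model)^(-1) r, shown here for diagonal covariances.
# Including the model's own variance deflates chi^2 relative to a data-only test.

def chi2(residuals, data_var, model_var):
    return sum(r * r / (dv + mv)
               for r, dv, mv in zip(residuals, data_var, model_var))

residual = [0.8, -1.1, 0.5]       # model minus observation, per component
data_var = [0.25, 0.25, 0.25]     # e.g. from the GPS velocity solution
model_var = [0.25, 0.25, 0.25]    # e.g. from the fitted empirical covariances

without_model = chi2(residual, data_var, [0.0] * 3)  # → 8.4
with_model = chi2(residual, data_var, model_var)     # → 4.2
```

The drop from 8.4 to 4.2 mirrors the paper's point: acknowledging model uncertainty yields lower, statistically better-behaved χ2 values.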

  16. Three-model ensemble wind prediction in southern Italy

    Science.gov (United States)

    Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo

    2016-03-01

    Quality of wind prediction is of great importance since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30 %, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecasts) for the surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
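The benefit of averaging forecasts with independent errors can be reproduced with synthetic data (a simulated series, not the GTS observations used in the paper): the ensemble-mean RMSE falls by roughly 1/√3 relative to each of three unbiased members:

```python
# Three unbiased forecasts of the same truth, each with independent unit-
# variance errors; their mean has error variance 1/3, hence RMSE ~ 1/sqrt(3).
import math
import random

random.seed(0)
truth = [10.0 * math.sin(0.1 * t) for t in range(1000)]  # stand-in "observed wind"
members = [[x + random.gauss(0.0, 1.0) for x in truth] for _ in range(3)]
ensemble = [sum(m[t] for m in members) / 3.0 for t in range(len(truth))]

def rmse(forecast, obs):
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, obs)) / len(obs))

member_rmse = [rmse(m, truth) for m in members]  # each close to 1.0
print(round(rmse(ensemble, truth), 2))           # close to 1/sqrt(3) ≈ 0.58
```

In practice the members' errors are correlated and biased, which is why the paper's measured improvement (22-30 %) is smaller than this idealized bound and why per-model bias correction still leaves the ensemble ahead.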

  17. Seasonal Predictability in a Model Atmosphere.

    Science.gov (United States)

    Lin, Hai

    2001-07-01

    The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.

  18. A kinetic model for predicting biodegradation.

    Science.gov (United States)

    Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O

    2007-01-01

    Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10 day window' criterion as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives and simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.

  19. Information as a Measure of Model Skill

    Science.gov (United States)

    Roulston, M. S.; Smith, L. A.

    2002-12-01

    Physicist Paul Davies has suggested that rather than the quest for laws that approximate ever more closely to "truth", science should be regarded as the quest for compressibility. The goodness of a model can be judged by the degree to which it allows us to compress data describing the real world. The "logarithmic scoring rule" is a method for evaluating probabilistic predictions of reality that turns this philosophical position into a practical means of model evaluation. This scoring rule measures the information deficit or "ignorance" of someone in possession of the prediction. A more applied viewpoint is that the goodness of a model is determined by its value to a user who must make decisions based upon its predictions. Any form of decision making under uncertainty can be reduced to a gambling scenario. Kelly showed that the value of a probabilistic prediction to a gambler pursuing the maximum return on their bets depends on their "ignorance", as determined from the logarithmic scoring rule, thus demonstrating a one-to-one correspondence between data compression and gambling returns. Thus information theory provides a way to think about model evaluation, that is both philosophically satisfying and practically oriented. P.C.W. Davies, in "Complexity, Entropy and the Physics of Information", Proceedings of the Santa Fe Institute, Addison-Wesley 1990 J. Kelly, Bell Sys. Tech. Journal, 35, 916-926, 1956.
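The logarithmic scoring rule described above is simple to state concretely. A minimal sketch (the forecast probabilities below are invented for illustration):

```python
import math

def ignorance(probs, outcome):
    """Logarithmic scoring rule: the information deficit, in bits,
    of someone who assigned probability probs[outcome] to the event
    that actually occurred. Lower is better."""
    return -math.log2(probs[outcome])

# A forecast of 70% rain / 30% dry, followed by rain:
score = ignorance({"rain": 0.7, "dry": 0.3}, "rain")
# Averaged over many forecasts, mean ignorance measures how well the
# model compresses the observed sequence of outcomes, and (via Kelly)
# the log growth rate of a gambler betting on the forecasts.
```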

  20. Measurement Error Models in Astronomy

    CERN Document Server

    Kelly, Brandon C

    2011-01-01

    I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.
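As a concrete illustration of the method-of-moments idea mentioned above: in the classical errors-in-variables setting, naive OLS attenuates the slope by the reliability ratio, which can be undone when the error variance is known. A sketch on simulated data (all numbers invented):

```python
import numpy as np

def corrected_slope(x_obs, y, var_u):
    """Method-of-moments correction for simple errors-in-variables
    regression: the naive OLS slope is attenuated by the reliability
    ratio lambda = Var(true x) / Var(observed x), so divide it out."""
    var_x = np.var(x_obs, ddof=1)
    naive = np.cov(x_obs, y, ddof=1)[0, 1] / var_x
    lam = (var_x - var_u) / var_x          # estimated reliability ratio
    return naive / lam

rng = np.random.default_rng(0)
xi = rng.normal(0.0, 2.0, 5000)            # true (unobserved) covariate
x = xi + rng.normal(0.0, 1.0, 5000)        # measured with error, var_u = 1
y = 3.0 * xi + rng.normal(0.0, 0.5, 5000)  # true slope is 3
slope = corrected_slope(x, y, var_u=1.0)   # close to 3; naive OLS gives ~2.4
```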

  1. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). METHODS: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results are bounded by the dates of coverage of each database and the date on which the search was performed; all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL’s IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the
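This sketch is not from the book, but it makes the receding-horizon policy the text describes concrete: at each step a finite-horizon quadratic cost is minimized, only the first input is applied, and the optimization repeats. For a scalar linear plant the unconstrained problem reduces to ridge least squares (all plant numbers are invented):

```python
import numpy as np

def mpc_step(x0, a, b, N=10, r=0.1):
    """One receding-horizon step for x+ = a*x + b*u: stack the
    predictions x_k = a^k x0 + sum_j a^(k-1-j) b u_j and minimize
    sum x_k^2 + r*sum u_k^2 as a ridge least-squares problem."""
    # Prediction matrices: X = F*x0 + G*U
    F = np.array([a ** k for k in range(1, N + 1)])
    G = np.zeros((N, N))
    for k in range(N):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    # min ||G U + F x0||^2 + r ||U||^2
    A = np.vstack([G, np.sqrt(r) * np.eye(N)])
    rhs = np.concatenate([-F * x0, np.zeros(N)])
    U = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return U[0]                    # apply only the first input

# Closed loop on an unstable plant (a = 1.2): the state is regulated to zero.
x, a, b = 5.0, 1.2, 1.0
for _ in range(30):
    x = a * x + b * mpc_step(x, a, b)
```

Nonlinear MPC replaces the least-squares solve with a nonlinear program, but the receding-horizon structure is identical.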

  2. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  3. Petrophysical properties of greensand as predicted from NMR measurements

    DEFF Research Database (Denmark)

    Hossain, Zakir; Grattoni, Carlos A.; Solymar, Mikael

    2011-01-01

    ABSTRACT: Nuclear magnetic resonance (NMR) is a useful tool in reservoir evaluation. The objective of this study is to predict petrophysical properties from NMR T2 distributions. A series of laboratory experiments including core analysis, capillary pressure measurements, NMR T2 measurements...... and image analysis were carried out on sixteen greensand samples from two formations in the Nini field of the North Sea. Hermod Formation is weakly cemented, whereas Ty Formation is characterized by microcrystalline quartz cement. The surface area measured by the BET method and the NMR derived surface...... with macro-pores. Permeability may be predicted from NMR by using Kozeny's equation when surface relaxivity is known. Capillary pressure drainage curves may be predicted from NMR T2 distribution when pore size distribution within a sample is homogeneous....
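The NMR-to-permeability route mentioned above can be sketched under an assumed form of Kozeny's equation, k = c·φ³/S², with the pore surface-to-volume ratio taken from NMR surface relaxation as S/V = 1/(ρ₂T₂). All numerical values below are invented for illustration, not taken from the Nini samples:

```python
def kozeny_permeability(phi, t2, rho2, c=0.25):
    """Assumed Kozeny form k = c * phi**3 / S**2, with S the surface
    area per unit bulk volume. NMR relaxation gives the pore
    surface-to-volume ratio S/V = 1/(rho2 * t2), so S = phi/(rho2*t2)
    and k = c * phi * (rho2 * t2)**2.
    Units: rho2 in m/s, t2 in s -> k in m^2."""
    return c * phi * (rho2 * t2) ** 2

# Illustrative inputs: porosity 0.40, log-mean T2 of 0.05 s,
# surface relaxivity 2 um/s:
k_m2 = kozeny_permeability(0.40, 0.05, 2e-6)
k_mD = k_m2 / 9.869e-16       # convert m^2 to millidarcy
```

Note the key caveat from the abstract: this only works when the surface relaxivity ρ₂ is known for the formation in question.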

  4. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  5. Probabilistic prediction models for aggregate quarry siting

    Science.gov (United States)

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

    Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
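The core WofE calculation for a single binary predictor pattern can be sketched as follows (the counts are hypothetical, not from the New England or Carolina study areas):

```python
import math

def weights_of_evidence(n_dep_in, n_dep_out, n_cells_in, n_cells_out):
    """Weights-of-evidence for one binary pattern B with respect to
    training sites D (here, quarries): W+ = ln[P(B|D)/P(B|~D)] for
    cells inside the pattern, W- the analogue outside, and the
    contrast C = W+ - W- measures the spatial association."""
    n_dep = n_dep_in + n_dep_out
    n_nodep_in = n_cells_in - n_dep_in
    n_nodep_out = n_cells_out - n_dep_out
    n_nodep = n_nodep_in + n_nodep_out
    w_plus = math.log((n_dep_in / n_dep) / (n_nodep_in / n_nodep))
    w_minus = math.log((n_dep_out / n_dep) / (n_nodep_out / n_nodep))
    return w_plus, w_minus, w_plus - w_minus

# Hypothetical counts: 80 of 100 quarries fall inside the pattern,
# which covers 2,000 of 10,000 grid cells.
wp, wm, contrast = weights_of_evidence(80, 20, 2000, 8000)
# wp > 0 and wm < 0: the pattern is positively associated with quarries.
```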

  6. Predicting Footbridge Response using Stochastic Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2013-01-01

    Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adapt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing s...... as it pinpoints which decisions to be concerned about when the goal is to predict footbridge response. The studies involve estimating footbridge responses using Monte-Carlo simulations and focus is on estimating vertical structural response to single person loading....

  7. Nonconvex Model Predictive Control for Commercial Refrigeration

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp

    2013-01-01

    is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...

  8. Econometric models for predicting confusion crop ratios

    Science.gov (United States)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for the CD/CRD data introduced measurement error into the CD/CRD models.

  9. Predictive In Vivo Models for Oncology.

    Science.gov (United States)

    Behrens, Diana; Rolff, Jana; Hoffmann, Jens

    2016-01-01

    Experimental oncology research and preclinical drug development both substantially require specific, clinically relevant in vitro and in vivo tumor models. The increasing knowledge about the heterogeneity of cancer has required a substantial restructuring of the test systems for the different stages of development. To be able to cope with the complexity of the disease, larger panels of patient-derived tumor models have to be implemented and extensively characterized. Together with individual genetically engineered tumor models, and supported by core functions for expression profiling and data analysis, an integrated discovery process has been generated for predictive and personalized drug development. Improved “humanized” mouse models should help to overcome current limitations imposed by the xenogeneic barrier between humans and mice. Establishment of a functional human immune system and a corresponding human microenvironment in laboratory animals will strongly support further research. Drug discovery, systems biology, and translational research are moving closer together to address all the new hallmarks of cancer, increase the success rate of drug development, and increase the predictive value of preclinical models.

  10. Constructing predictive models of human running.

    Science.gov (United States)

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  11. Statistical Seasonal Sea Surface based Prediction Model

    Science.gov (United States)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

    The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter-Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of oceanic, atmospheric, and health-related parameters influenced by sea surface temperature as a defining factor of variability.

  12. Simple Predictive Models for Saturated Hydraulic Conductivity of Technosands

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Razzaghi, Fatemeh; Møldrup, Per

    2012-01-01

    Accurate estimation of saturated hydraulic conductivity (Ks) of technosands (gravel-free, coarse sands with negligible organic matter content) is important for irrigation and drainage management of athletic fields and golf courses. In this study, we developed two simple models for predicting Ks......-connectivity parameter (m) obtained for pure coarse sand after fitting to measured Ks data was 1.68 for both models and in good agreement with m values obtained from recent solute and gas diffusion studies. Both the modified K-C and R-C models are easy to use and require limited parameter input, and both models gave...

  13. Third trimester ultrasound soft-tissue measurements accurately predicts macrosomia.

    Science.gov (United States)

    Maruotti, Giuseppe Maria; Saccone, Gabriele; Martinelli, Pasquale

    2017-04-01

    To evaluate the accuracy of sonographic measurements of fetal soft tissue in the prediction of macrosomia. Electronic databases were searched from their inception until September 2015 with no limit for language. We included only studies assessing the accuracy of sonographic measurements of fetal soft tissue in the abdomen or thigh in the prediction of macrosomia  ≥34 weeks of gestation. The primary outcome was the accuracy of sonographic measurements of fetal soft tissue in the prediction of macrosomia. We generated the forest plot for the pooled sensitivity and specificity with 95% confidence interval (CI). Additionally, summary receiver-operating characteristics (ROC) curves were plotted and the area under the curve (AUC) was also computed to evaluate the overall performance of the diagnostic test accuracy. Three studies, including 287 singleton gestations, were analyzed. The pooled sensitivity of sonographic measurements of abdominal or thigh fetal soft tissue in the prediction of macrosomia was 80% (95% CI: 66-89%) and the pooled specificity was 95% (95% CI: 91-97%). The AUC for diagnostic accuracy of sonographic measurements of fetal soft tissue in the prediction of macrosomia was 0.92 and suggested high diagnostic accuracy. Third-trimester sonographic measurements of fetal soft tissue after 34 weeks may help to detect macrosomia with a high degree of accuracy. The pooled detection rate was 80%. A standardization of measurements criteria, reproducibility, building reference charts of fetal subcutaneous tissue and large studies to assess the optimal cutoff of fetal adipose thickness are necessary before the introduction of fetal soft-tissue markers in the clinical practice.

  14. Allostasis: a model of predictive regulation.

    Science.gov (United States)

    Sterling, Peter

    2012-04-12

    The premise of the standard regulatory model, "homeostasis", is flawed: the goal of regulation is not to preserve constancy of the internal milieu. Rather, it is to continually adjust the milieu to promote survival and reproduction. Regulatory mechanisms need to be efficient, but homeostasis (error-correction by feedback) is inherently inefficient. Thus, although feedbacks are certainly ubiquitous, they could not possibly serve as the primary regulatory mechanism. A newer model, "allostasis", proposes that efficient regulation requires anticipating needs and preparing to satisfy them before they arise. The advantages: (i) errors are reduced in magnitude and frequency; (ii) response capacities of different components are matched -- to prevent bottlenecks and reduce safety factors; (iii) resources are shared between systems to minimize reserve capacities; (iv) errors are remembered and used to reduce future errors. This regulatory strategy requires a dedicated organ, the brain. The brain tracks multitudinous variables and integrates their values with prior knowledge to predict needs and set priorities. The brain coordinates effectors to mobilize resources from modest bodily stores and enforces a system of flexible trade-offs: from each organ according to its ability, to each organ according to its need. The brain also helps regulate the internal milieu by governing anticipatory behavior. Thus, an animal conserves energy by moving to a warmer place - before it cools, and it conserves salt and water by moving to a cooler one before it sweats. The behavioral strategy requires continuously updating a set of specific "shopping lists" that document the growing need for each key component (warmth, food, salt, water). These appetites funnel into a common pathway that employs a "stick" to drive the organism toward filling the need, plus a "carrot" to relax the organism when the need is satisfied. 
The stick corresponds broadly to the sense of anxiety, and the carrot broadly to

  15. Evaluation of models to predict insolation on tilted surfaces

    Science.gov (United States)

    Klucher, T. M.

    1979-01-01

    An empirical study was performed to evaluate the validity of various insolation models which employ either an isotropic or an anisotropic distribution approximation for sky light when predicting insolation on tilted surfaces. Data sets of measured hourly insolation values were obtained over a 6-month period using pyranometers which received diffuse and total solar radiation on a horizontal plane and total radiation on surfaces tilted toward the equator at 37 degrees and 60 degrees angles above the horizon. Data on the horizontal surfaces were used in the insolation models to predict insolation on the tilted surface; comparisons of measured vs calculated insolation on the tilted surface were examined to test the validity of the sky light approximations. It was found that the Liu-Jordan isotropic distribution model provides a good fit to empirical data under overcast skies but underestimates the amount of solar radiation incident on tilted surfaces under clear and partly cloudy conditions.

  16. Frequency weighted model predictive control of wind turbine

    DEFF Research Database (Denmark)

    Klauco, Martin; Poulsen, Niels Kjølstad; Mirzaei, Mahmood;

    2013-01-01

    This work is focused on applying frequency weighted model predictive control (FMPC) to a three-blade horizontal axis wind turbine (HAWT). A wind turbine is a very complex, non-linear system influenced by stochastic wind speed variation. The reduced dynamics considered in this work...... are the rotational degree of freedom of the rotor and the tower fore-aft movement. The MPC design is based on a receding horizon policy and a linearised model of the wind turbine. Due to the change of dynamics with wind speed, several linearisation points must be considered and the control design adjusted...... accordingly. In practice it is very hard to measure the effective wind speed, so this quantity is estimated using measurements from the turbine itself. For this purpose a stationary predictive Kalman filter has been used. Stochastic simulations of the wind turbine behaviour with applied frequency weighted model...

  17. Adaptive quality prediction of batch processes based on PLS model

    Institute of Scientific and Technical Information of China (English)

    LI Chun-fu; ZHANG Jie; WANG Gui-zeng

    2006-01-01

    There are usually no on-line product quality measurements in batch and semi-batch processes, which makes the process control task very difficult. In this paper, a model for predicting the end-product quality from the available on-line process variables at the early stage of a batch is developed using the partial least squares (PLS) method. Furthermore, some available mid-course quality measurements are used to rectify the final prediction results. To deal with the problem that the process may change with time, a recursive PLS (RPLS) algorithm is used to update the model based on the new batch data and the old model parameters after each batch. An application to a simulated batch MMA polymerization process demonstrates the effectiveness of the proposed method.
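A minimal sketch of the PLS idea behind such quality-prediction models (single-response NIPALS on synthetic batch data; this does not reproduce the paper's actual process model or its recursive update):

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Single-response PLS via NIPALS: extract n_comp latent
    components from centered X/y, deflating after each, and return
    the regression coefficients B = W (P^T W)^{-1} q."""
    Xk, yk = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)          # weight vector
        t = Xk @ w                      # score vector
        tt = t @ t
        p = Xk.T @ t / tt               # X loading
        qk = yk @ t / tt                # y loading
        Xk = Xk - np.outer(t, p)        # deflate
        yk = yk - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

# Synthetic "batch" data: 50 batches, 10 on-line process variables,
# end quality driven by two of them plus noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.1 * rng.normal(size=50)
Xc, yc = X - X.mean(0), y - y.mean()
beta = pls1_fit(Xc, yc, n_comp=2)
pred = Xc @ beta + y.mean()             # predicted end-product quality
```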

  18. Measurement and modeling of unsaturated hydraulic conductivity

    Science.gov (United States)

    Perkins, Kim S.; Elango, Lakshmanan

    2011-01-01

    The unsaturated zone plays an extremely important hydrologic role that influences water quality and quantity, ecosystem function and health, the connection between atmospheric and terrestrial processes, nutrient cycling, soil development, and natural hazards such as flooding and landslides. Unsaturated hydraulic conductivity is one of the main properties considered to govern flow; however it is very difficult to measure accurately. Knowledge of the highly nonlinear relationship between unsaturated hydraulic conductivity (K) and volumetric water content is required for widely-used models of water flow and solute transport processes in the unsaturated zone. Measurement of unsaturated hydraulic conductivity of sediments is costly and time consuming, therefore use of models that estimate this property from more easily measured bulk-physical properties is common. In hydrologic studies, calculations based on property-transfer models informed by hydraulic property databases are often used in lieu of measured data from the site of interest. Reliance on database-informed predicted values with the use of neural networks has become increasingly common. Hydraulic properties predicted using databases may be adequate in some applications, but not others. This chapter will discuss, by way of examples, various techniques used to measure and model hydraulic conductivity as a function of water content, K. The parameters that describe the K curve obtained by different methods are used directly in Richards’ equation-based numerical models, which have some degree of sensitivity to those parameters. This chapter will explore the complications of using laboratory measured or estimated properties for field scale investigations to shed light on how adequately the processes are represented. Additionally, some more recent concepts for representing unsaturated-zone flow processes will be discussed.
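One widely used closed form for the nonlinear K-versus-water-content relationship discussed above is the van Genuchten-Mualem model. A sketch (the parameter values are illustrative, roughly loamy-sand-like, not from any dataset in the chapter):

```python
def mualem_vg_K(theta, theta_r, theta_s, Ks, n):
    """van Genuchten-Mualem unsaturated hydraulic conductivity
    K(theta), with m = 1 - 1/n, as commonly used to parameterize
    Richards'-equation models:
        Se = (theta - theta_r) / (theta_s - theta_r)
        K  = Ks * Se^0.5 * (1 - (1 - Se^(1/m))^m)^2
    Units of K follow Ks (e.g., cm/day)."""
    m = 1.0 - 1.0 / n
    Se = (theta - theta_r) / (theta_s - theta_r)   # effective saturation
    return Ks * Se ** 0.5 * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Illustrative parameters: theta_r = 0.05, theta_s = 0.41,
# Ks = 350 cm/day, n = 2.28.
K = mualem_vg_K(theta=0.20, theta_r=0.05, theta_s=0.41, Ks=350.0, n=2.28)
```

The strong nonlinearity is visible immediately: small changes in water content move K by orders of magnitude, which is one reason direct measurement is so difficult.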

  19. Predictive modeling by the cerebellum improves proprioception.

    Science.gov (United States)

    Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J

    2013-09-04

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.

  20. Models of Credit Risk Measurement

    OpenAIRE

    Hagiu Alina

    2011-01-01

    Credit risk is defined as the risk of financial loss caused by the failure of a counterparty. According to statistics, credit risk is much more important for financial institutions than market risk; insufficient diversification of credit risk is the main cause of bank failures. Only recently did the banking industry begin to measure credit risk in the context of a portfolio, along with the development of risk management that started with value-at-risk (VaR) models. Once measured, credit risk can be diversif...

  1. A COMPACT MODEL FOR PREDICTING ROAD TRAFFIC NOISE

    Directory of Open Access Journals (Sweden)

    R. Golmohammadi, M. Abbaspour, P. Nassiri, H. Mahjub

    2009-07-01

    Noise is one of the most important sources of pollution in metropolitan areas. The recognition of road traffic noise as one of the main sources of environmental pollution has led to the development of models that enable noise levels to be predicted from fundamental variables. Traffic noise prediction models are required as aids in the design of roads and sometimes in the assessment of existing, or envisaged changes in, traffic noise conditions. The purpose of this study was to design a road traffic noise prediction model based on traffic variables and transportation conditions in Iran. This paper is the result of research conducted in the city of Hamadan with the ultimate objective of setting up a traffic noise model based on the traffic conditions of Iranian cities. Noise levels and other variables were measured in 282 samples to develop a statistical regression model based on the A-weighted equivalent noise level for Iranian road conditions. The results revealed that the average LAeq over all stations was 69.04 ± 4.25 dB(A), the average speed of vehicles was 44.57 ± 11.46 km/h, and the average traffic load was 1231.9 ± 910.2 V/h. The developed model has seven explanatory entrance variables and achieves a high regression coefficient (R2 = 0.901). Comparison of the means of predicted and measured equivalent sound pressure levels (LAeq) showed small differences of less than -0.42 dB(A) and -0.77 dB(A) for the cities of Tehran and Hamadan, respectively. The suggested road traffic noise model can be effectively used as a decision support tool for predicting the equivalent sound pressure level index in the cities of Iran.
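The regression approach described above can be sketched with ordinary least squares. This toy version uses three predictors instead of the study's seven, and all data are synthetic, merely shaped like the reported ranges:

```python
import numpy as np

def fit_noise_model(Q, v, heavy_pct, LAeq):
    """Multiple linear regression for equivalent noise level:
    LAeq = b0 + b1*log10(Q) + b2*v + b3*heavy_pct, fitted by OLS.
    Q: traffic load (veh/h), v: mean speed (km/h),
    heavy_pct: percentage of heavy vehicles."""
    Xd = np.column_stack([np.ones_like(Q), np.log10(Q), v, heavy_pct])
    coef, *_ = np.linalg.lstsq(Xd, LAeq, rcond=None)
    return coef, Xd @ coef              # coefficients and fitted levels

# Synthetic sample of 282 stations mimicking the reported averages
# (LAeq ~ 69 dB(A), Q ~ 1200 veh/h, v ~ 45 km/h):
rng = np.random.default_rng(2)
Q = rng.uniform(200, 3000, 282)
v = rng.uniform(25, 70, 282)
h = rng.uniform(0, 20, 282)
LAeq = 45 + 8 * np.log10(Q) + 0.05 * v + 0.2 * h + rng.normal(0, 1, 282)
coef, fitted = fit_noise_model(Q, v, h, LAeq)
```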

  2. Mechanical Vibrations Modeling and Measurement

    CERN Document Server

    Schmitz, Tony L

    2012-01-01

    Mechanical Vibrations: Modeling and Measurement describes essential concepts in the vibration analysis of mechanical systems. It incorporates the required mathematics, experimental techniques, fundamentals of modal analysis, and beam theory into a unified framework that is written to be accessible to undergraduate students, researchers, and practicing engineers. To unify the various concepts, a single experimental platform is used throughout the text to provide experimental data and evaluation. Engineering drawings for the platform are included in an appendix. Additionally, MATLAB programming solutions are integrated into the content throughout the text. This book also: Discusses model development using frequency response function measurements Presents a clear connection between continuous beam models and finite degree of freedom models Includes MATLAB code to support numerical examples that are integrated into the text narrative Uses mathematics to support vibrations theory and emphasizes the practical significanc...

  3. Prediction of blast-induced air overpressure: a hybrid AI-based predictive model.

    Science.gov (United States)

    Jahed Armaghani, Danial; Hajihassani, Mohsen; Marto, Aminaton; Shirani Faradonbeh, Roohollah; Mohamad, Edy Tonnizam

    2015-11-01

    Blast operations in the vicinity of residential areas usually produce significant environmental problems which may cause severe damage to the nearby areas. Blast-induced air overpressure (AOp) is one of the most important environmental impacts of blast operations which needs to be predicted to minimize the potential risk of damage. This paper presents an artificial neural network (ANN) optimized by the imperialist competitive algorithm (ICA) for the prediction of AOp induced by quarry blasting. For this purpose, 95 blasting operations were precisely monitored in a granite quarry site in Malaysia and AOp values were recorded in each operation. Furthermore, the most influential parameters on AOp, including the maximum charge per delay and the distance between the blast-face and monitoring point, were measured and used to train the ICA-ANN model. Based on the generalized predictor equation and considering the measured data from the granite quarry site, a new empirical equation was developed to predict AOp. For comparison purposes, conventional ANN models were developed and compared with the ICA-ANN results. The results demonstrated that the proposed ICA-ANN model is able to predict blast-induced AOp more accurately than other presented techniques.

  4. Comparing predictions made by a prediction model, clinical score, and physicians: pediatric asthma exacerbations in the emergency department.

    Science.gov (United States)

    Farion, K J; Wilk, S; Michalowski, W; O'Sullivan, D; Sayyad-Shirabad, J

    2013-01-01

    Asthma exacerbations are one of the most common medical reasons for children to be brought to the hospital emergency department (ED). Various prediction models have been proposed to support diagnosis of exacerbations and evaluation of their severity. First, to evaluate prediction models constructed from data using machine learning techniques and to select the best performing model. Second, to compare predictions from the selected model with predictions from the Pediatric Respiratory Assessment Measure (PRAM) score, and predictions made by ED physicians. A two-phase study conducted in the ED of an academic pediatric hospital. In phase 1, data collected prospectively using paper forms were used to construct and evaluate five prediction models, and the best performing model was selected. In phase 2, data collected prospectively using a mobile system were used to compare the predictions of the selected prediction model with those from PRAM and ED physicians. Area under the receiver operating characteristic curve and accuracy in phase 1; accuracy, sensitivity, specificity, positive and negative predictive values in phase 2. In phase 1, prediction models were derived from a data set of 240 patients and evaluated using 10-fold cross validation. A naive Bayes (NB) model demonstrated the best performance and was selected for phase 2. Evaluation in phase 2 was conducted on data from 82 patients. Predictions made by the NB model were less accurate than the PRAM score and physicians (accuracy of 70.7%, 73.2% and 78.0%, respectively); however, according to McNemar's test it is not possible to conclude that the differences between predictions are statistically significant. Both the PRAM score and the NB model were less accurate than physicians. The NB model can handle incomplete patient data and as such may complement the PRAM score. However, further research is required to improve its accuracy.
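
    The phase-2 performance measures are all elementary functions of a 2×2 confusion matrix. A small sketch (the counts below are hypothetical, chosen only so that the accuracy reproduces the NB model's reported 70.7% on 82 patients; the actual confusion matrices are not given in the abstract):

```python
def binary_metrics(tp, fp, fn, tn):
    """Evaluation metrics of a binary predictor from its confusion matrix."""
    total = tp + fp + fn + tn
    return {
        "accuracy":    (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts for 82 patients (20 + 10 + 14 + 38 = 82);
# (20 + 38) / 82 gives the reported 70.7% accuracy.
m = binary_metrics(tp=20, fp=10, fn=14, tn=38)
print({k: round(v, 3) for k, v in m.items()})
```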

  5. Uncertainty Quantification and Comparison of Weld Residual Stress Measurements and Predictions.

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, John R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g. confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.

  6. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Full Text Available Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via Polymerase Chain Reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We arrived at a model that was able to accurately predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under the curve of 82.6%. Like other similar studies that have used the RF model, we obtained very encouraging results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see wider application.
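
    With scikit-learn, an RF classifier of the kind described can be sketched as below; the data here are synthetic stand-ins (the real chart-review features and outcomes are not public), so neither the features nor the resulting numbers are those of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for the 198-patient chart review: the columns play the
# role of hypothetical recurrence risk factors (age, prior episodes, etc.).
n = 198
X = rng.normal(size=(n, 6))
# The outcome depends on two of the factors, so the forest has signal to learn.
p = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 1])))
y = (rng.uniform(size=n) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Area under the ROC curve on the held-out patients.
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out AUC = {auc:.2f}")
```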

  7. Gamma-Ray Pulsars: Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  8. Artificial Neural Network Model for Predicting Compressive Strength of Concrete

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model was successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing of the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
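
    A back-propagation network of this kind can be written out in a few lines. The sketch below trains a single-hidden-layer tanh network by batch gradient descent on synthetic mix data; the strength rule, input ranges, and architecture are assumptions for illustration, not the paper's data or network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: [cement (kg/m^3), w/c ratio, MAS (mm), slump (mm)] with
# an invented linear strength rule plus noise (illustrative only).
n = 200
Xraw = np.column_stack([
    rng.uniform(250, 450, n),    # cement content
    rng.uniform(0.35, 0.65, n),  # water/cement ratio
    rng.uniform(10, 25, n),      # maximum aggregate size
    rng.uniform(50, 180, n),     # slump
])
y = 0.12 * Xraw[:, 0] - 60.0 * Xraw[:, 1] + rng.normal(0, 1.5, n)  # MPa

# Standardize inputs and target, as is usual before training a small net.
X = (Xraw - Xraw.mean(0)) / Xraw.std(0)
t = (y - y.mean()) / y.std()

# One hidden layer of 8 tanh units, trained by plain batch gradient descent.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)               # forward pass
    out = h @ W2 + b2
    err = out - t                          # gradient of 0.5*MSE w.r.t. out
    gW2 = h.T @ err / n; gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)  # back-propagate through tanh
    gW1 = X.T @ dh / n; gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = np.mean((out - t) ** 2)
print(f"final training MSE (standardized units): {mse:.3f}")
```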

  9. Modeling and Prediction of Krueger Device Noise

    Science.gov (United States)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise components' respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out using small scale experimental data, and two applications are discussed: one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise with respect to design parameters, while the latter reveals its importance in relation to other airframe noise components.

  10. Mathematical models for predicting indoor air quality from smoking activity.

    Science.gov (United States)

    Ott, W R

    1999-05-01

    Much progress has been made over four decades in developing, testing, and evaluating the performance of mathematical models for predicting pollutant concentrations from smoking in indoor settings. Although largely overlooked by the regulatory community, these models provide regulators and risk assessors with practical tools for quantitatively estimating the exposure level that people receive indoors for a given level of smoking activity. This article reviews the development of the mass balance model and its application to predicting indoor pollutant concentrations from cigarette smoke and derives the time-averaged version of the model from the basic laws of conservation of mass. A simple table is provided of computed respirable particulate concentrations for any indoor location for which the active smoking count, volume, and concentration decay rate (deposition rate combined with air exchange rate) are known. Using the indoor ventilatory air exchange rate causes slightly higher indoor concentrations and therefore errs on the side of protecting health, since it excludes particle deposition effects, whereas using the observed particle decay rate gives a more accurate prediction of indoor concentrations. This table permits easy comparisons of indoor concentrations with air quality guidelines and indoor standards for different combinations of active smoking counts and air exchange rates. The published literature on mathematical models of environmental tobacco smoke also is reviewed and indicates that these models generally give good agreement between predicted concentrations and actual indoor measurements.
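
    The time-dependent solution of the single-compartment mass balance model is easy to state and compute. A minimal sketch (the source strength, room volume, and decay rate below are illustrative choices, not values from the article):

```python
import math

def indoor_concentration(t, S, V, k, C0=0.0):
    """Well-mixed single-compartment mass balance:
       V dC/dt = S - k V C  =>  C(t) = S/(kV) + (C0 - S/(kV)) * exp(-k t),
    where k combines the air exchange rate and the particle deposition rate."""
    Css = S / (k * V)                     # steady-state concentration
    return Css + (C0 - Css) * math.exp(-k * t)

# Illustrative numbers: two cigarettes emitting 1.4 mg/min of respirable
# particles each, smoked continuously in a 50 m^3 room with a combined
# air-exchange-plus-deposition decay rate of 1.0 per hour.
S = 2 * 1.4 * 60 * 1000   # source strength, ug/h
V = 50.0                  # room volume, m^3
k = 1.0                   # decay rate, 1/h

for t in (0.5, 1.0, 2.0, 8.0):
    print(f"C({t} h) = {indoor_concentration(t, S, V, k):.0f} ug/m^3")
```

    The concentration relaxes toward the steady state S/(kV), which is why using only the ventilatory air exchange rate (a smaller k, excluding deposition) predicts a higher concentration and so errs on the protective side.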

  11. Settlement Prediction of Road Soft Foundation Using a Support Vector Machine (SVM Based on Measured Data

    Directory of Open Access Journals (Sweden)

    Yu Huiling

    2016-01-01

    Full Text Available The support vector machine (SVM) is a relatively new artificial intelligence technique that is increasingly being applied to geotechnical problems and is yielding encouraging results. SVM is a machine learning method based on statistical learning theory. A case study based on a road foundation engineering project shows that the forecast results are in good agreement with the measured data. The SVM model is also compared with a BP artificial neural network model and the traditional hyperbola method. The prediction results indicate that the SVM model has a better prediction ability than the BP neural network model and the hyperbola method. Therefore, settlement prediction based on the SVM model can reflect the actual settlement process more correctly. The results indicate that the method is effective and feasible, and that the nonlinear mapping relation between foundation settlement and its influencing factors can be expressed well. It provides a new method for predicting foundation settlement.
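
    As a rough sketch of the approach (using scikit-learn's SVR rather than the authors' implementation, and a synthetic settlement record shaped like the hyperbolic consolidation curve the traditional method assumes; all constants are invented):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic settlement record s(t) = t / (a + b t), the form assumed by the
# traditional hyperbola method, plus measurement noise (values invented).
t = np.linspace(10, 500, 80)          # days since loading
s_true = t / (12.0 + 0.018 * t)       # settlement, cm
s_meas = s_true + rng.normal(0, 0.2, t.size)

# Train an RBF-kernel support vector regressor on every other observation
# and check its predictions against the held-out points.
tr, te = np.arange(0, 80, 2), np.arange(1, 80, 2)
svr = SVR(kernel="rbf", C=100.0, gamma=1e-3, epsilon=0.1)
svr.fit(t[tr, None], s_meas[tr])

err = np.abs(svr.predict(t[te, None]) - s_true[te])
print(f"max held-out error: {err.max():.2f} cm")
```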

  12. Predicting functional brain ROIs via fiber shape models.

    Science.gov (United States)

    Zhang, Tuo; Guo, Lei; Li, Kaiming; Zhu, Dajing; Cui, Guangbin; Liu, Tianming

    2011-01-01

    Study of the structural and functional connectivities of the human brain has received significant interest and effort recently. A fundamental question arises when attempting to measure the structural and/or functional connectivities of specific brain networks: how to best identify possible Regions of Interest (ROIs)? In this paper, we present a novel ROI prediction framework that localizes ROIs in individual brains based on fiber shape models learned from multimodal task-based fMRI and diffusion tensor imaging (DTI) data. In the training stage, ROIs are identified as activation peaks in task-based fMRI data. Then, shape models of the white matter fibers emanating from these functional ROIs are learned. In addition, a model of the ROIs' location distribution is learned and used as an anatomical constraint. In the prediction stage, functional ROIs are predicted in individual brains based on DTI data. The ROI prediction is formulated and solved as an energy minimization problem, in which the two learned models are used as energy terms. Our experimental results show that the average ROI prediction error is 3.45 mm, in comparison with the benchmark data provided by working memory task-based fMRI. Promising results were also obtained on the ADNI-2 longitudinal DTI dataset.

  13. Prediction of Wine Sensorial Quality by Routinely Measured Chemical Properties

    Directory of Open Access Journals (Sweden)

    Bednárová Adriána

    2014-12-01

    Full Text Available The determination of the sensorial quality of wines is of great interest for wine consumers and producers, since in most cases it declares the quality. The sensorial assays carried out by a group of experts are time-consuming and expensive, especially when dealing with large batches of wines. Therefore, an attempt was made to assess the possibility of estimating wine sensorial quality using routinely measured chemical descriptors as predictors. For this purpose, 131 Slovenian red wine samples of different varieties and years of production were analysed, and correlation and principal component analysis were applied to find inter-relations between the studied oenological descriptors. The method of artificial neural networks (ANNs) was utilised as the prediction tool for estimating the overall sensorial quality of red wines. Each model was rigorously validated, and sensitivity analysis was applied as a method for selecting the most important predictors. Consequently, acceptable results were obtained when data representing only one year of production were included in the analysis. In this case, the coefficient of determination (R²) associated with the training data was 0.95, and that for the validation data was 0.90. When estimating sensorial quality in categorical form, 94% and 85% of correctly classified samples were achieved for the training and validation subsets, respectively.

  14. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

    Full Text Available We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) We find that model color–magnitude diagrams cannot be reliably used to infer masses, as they do not accurately reproduce the colors of ultracool dwarfs of known mass. (2) Effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra by 100–300 K. (3) For the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from Lbol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology) for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy are needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.

  15. Measurement and Modelling of Scaling Minerals

    DEFF Research Database (Denmark)

    Villafafila Garcia, Ada

    2005-01-01

    of scale formation found in many industrial processes, and especially in oilfield and geothermal operations. We want to contribute to the study of this problem by releasing a simple and accurate thermodynamic model capable of calculating the behaviour of scaling minerals, covering a wide range...... of temperature and pressure. Reliable experimental solubility measurements under conditions similar to those found in reality will help the development of strong and consistent models. Chapter 1 is a short introduction to the problem of scale formation, the model chosen to study it, and the experiments performed...... the thermodynamic model used in this Ph.D. project. A review of alternative activity coefficient models and earlier work on scale formation is provided. A guideline to the parameter estimation procedure and the number of parameters estimated in the present work are also described. The prediction of solid...

  16. Charge transport model to predict intrinsic reliability for dielectric materials

    Energy Technology Data Exchange (ETDEWEB)

    Ogden, Sean P. [Howard P. Isermann Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); GLOBALFOUNDRIES, 400 Stonebreak Rd. Ext., Malta, New York 12020 (United States); Borja, Juan; Plawsky, Joel L., E-mail: plawsky@rpi.edu; Gill, William N. [Howard P. Isermann Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Lu, T.-M. [Department of Physics, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Yeap, Kong Boon [GLOBALFOUNDRIES, 400 Stonebreak Rd. Ext., Malta, New York 12020 (United States)

    2015-09-28

    Several lifetime models, mostly empirical in nature, are used to predict reliability for low-k dielectrics used in integrated circuits. There is a dispute over which model provides the most accurate prediction for device lifetime at operating conditions. As a result, there is a need to transition from the use of these largely empirical models to one built entirely on theory. Therefore, a charge transport model was developed to predict the device lifetime of low-k interconnect systems. The model is based on electron transport and donor-type defect formation. Breakdown occurs when a critical defect concentration accumulates, resulting in electron tunneling and the emptying of positively charged traps. The enhanced local electric field lowers the barrier for electron injection into the dielectric, causing a positive feedforward failure. The charge transport model is able to replicate experimental I-V and I-t curves, capturing the current decay at early stress times and the rapid current increase at failure. The model is based on field-driven and current-driven failure mechanisms and uses a minimal number of parameters. All the parameters have some theoretical basis or have been measured experimentally and are not directly used to fit the slope of the time-to-failure versus applied field curve. Despite this simplicity, the model is able to accurately predict device lifetime for three different sources of experimental data. The simulation's predictions at low fields and very long lifetimes show that the use of a single empirical model can lead to inaccuracies in device reliability.

  17. Predicting medical practices using various risk attitude measures.

    Science.gov (United States)

    Massin, Sophie; Nebout, Antoine; Ventelou, Bruno

    2017-08-31

    This paper investigates the predictive power of several risk attitude measures on a series of medical practices. We elicit risk preferences on a sample of 1500 French general practitioners (GPs) using two different classes of tools: scales, which measure GPs' own perception of their willingness to take risks between 0 and 10; and lotteries, which require GPs to choose between a safe and a risky option in a series of hypothetical situations. In addition to a daily life risk scale that measures a general risk attitude, risk taking is measured in different domains for each tool: financial matters, GPs' own health, and patients' health. We take advantage of the rare opportunity to combine these multiple risk attitude measures with a series of self-reported or administratively recorded medical practices. We successively test the predictive power of our seven risk attitude measures on eleven medical practices affecting the GPs' own health or their patients' health. We find that domain-specific measures are far better predictors than the general risk attitude measure. Neither of the two classes of tools (scales or lotteries) seems to perform indisputably better than the other, except when we concentrate on the only non-declarative practice (prescription of biological tests), for which the classic money-lottery test works well. From a public health perspective, appropriate measures of willingness to take risks may be used to make a quick, but efficient, profiling of GPs and target them with personalized communications, or interventions, aimed at improving practices.

  18. PEEX Modelling Platform for Seamless Environmental Prediction

    Science.gov (United States)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

    The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modeling platform and observation systems in the Pan-Eurasian region and presents the future baselines for the coherent and coordinated research infrastructures in the PEEX domain. The PEEX Modelling Platform is characterized by a complex seamless integrated Earth System Modeling (ESM) approach, in combination with specific models of different processes and elements of the system, acting on different temporal and spatial scales. An ensemble approach is taken to the integration of modeling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modeling, and modeling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modeling. The analyses of the anticipated large volumes of data produced by the available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.

  19. Optimal feedback scheduling of model predictive controllers

    Institute of Scientific and Technical Information of China (English)

    Pingfang ZHOU; Jianying XIE; Xiaolong DENG

    2006-01-01

    Model predictive control (MPC) could not be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus obtaining predictability in time. Optimal feedback scheduling (FS-CBS) of a set of MPC tasks is presented to maximize the global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee the schedulability of the total task set and the stability of each component. The FS-CBS is shown to be robust against variation in the execution time of MPC tasks at runtime. Simulation results illustrate its effectiveness.

  20. Objective calibration of numerical weather prediction models

    Science.gov (United States)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multi-variate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to applying the methodology to an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales: the sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the computing resources required for the calibration of an NWP model. Three free model parameters, mainly affecting turbulence parameterization schemes, were originally selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computing resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
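
    The core of meta-model calibration can be illustrated in one dimension: run the model at a few parameter settings, fit a quadratic to the resulting error scores, and take the analytic minimizer. The objective function below is invented for illustration; the real procedure is multivariate and uses forecast verification scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend forecast-error curve for one free turbulence parameter theta.
# The quadratic shape and the optimum at 0.3 are invented for illustration.
def forecast_error(theta):
    return 1.0 + 4.0 * (theta - 0.3) ** 2 + rng.normal(0, 0.02)

# The expensive step: run the model at a handful of parameter settings.
thetas = np.linspace(0.0, 1.0, 7)
errors = np.array([forecast_error(th) for th in thetas])

# Fit the quadratic meta-model e(theta) ~ a*theta^2 + b*theta + c ...
a, b, c = np.polyfit(thetas, errors, deg=2)

# ... and take its analytic minimizer as the calibrated parameter value.
theta_opt = -b / (2 * a)
print(f"calibrated theta ~ {theta_opt:.3f}")
```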

  1. Prediction models from CAD models of 3D objects

    Science.gov (United States)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic, prediction-based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probabilities of failing to find a pose and of finding an inaccurate pose are minimized.

  2. Model predictive control of MSMPR crystallizers

    Science.gov (United States)

    Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc

    2005-02-01

    A multi-input, multi-output (MIMO) control problem for isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers, which forms a dynamical system, is used; its state is represented by a vector of six variables: the first four leading moments of the crystal size, the solute concentration, and the solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean grain size and the crystal size distribution; the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability, as well as the coupling between the inputs and the outputs, were analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities of model reduction, a third-order model was found quite adequate for model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be controlled nearly separately by the residence time and the inlet solute concentration, respectively. Seeding increases the controllability of the crystallizer significantly, and the overshoots and oscillations become smaller. The results of the control study have shown that linear MPC is an adaptable and feasible controller for continuous crystallizers.
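
    The receding-horizon idea behind linear MPC can be shown on a toy scalar plant standing in for the reduced third-order crystallizer model; the dynamics, horizon, and weights below are invented for illustration.

```python
import numpy as np

# Toy scalar plant x+ = a*x + b*u; values are illustrative only.
a, b = 0.9, 0.5
H = 10      # prediction horizon
lam = 0.1   # input-effort weight

def mpc_step(x0, x_ref):
    """Minimize sum_k (x_k - x_ref)^2 + lam*u_k^2 over the horizon and
    apply only the first input (the receding-horizon principle)."""
    # Stack predictions x_k = a^k x0 + sum_j a^(k-1-j) b u_j as F*x0 + G*u.
    F = np.array([a ** k for k in range(1, H + 1)])
    G = np.zeros((H, H))
    for k in range(H):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    # The unconstrained QP is an ordinary least-squares problem.
    A = np.vstack([G, np.sqrt(lam) * np.eye(H)])
    rhs = np.concatenate([x_ref - F * x0, np.zeros(H)])
    u = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return u[0]

# Closed loop: re-solve at every step, apply the first move.
x, x_ref = 0.0, 1.0
for _ in range(30):
    x = a * x + b * mpc_step(x, x_ref)
print(f"state after 30 steps: {x:.3f}")
```

    A constrained version would replace the least-squares solve with a quadratic program; the receding-horizon structure is unchanged.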

  3. An Anisotropic Hardening Model for Springback Prediction

    Science.gov (United States)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and their ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbeads during the sheet metal forming process. This model accounts for an anisotropic material yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  4. Explained variation and predictive accuracy in general parametric statistical models: the role of model misspecification

    DEFF Research Database (Denmark)

    Rosthøj, Susanne; Keiding, Niels

    2004-01-01

    When studying a regression model, measures of explained variation are used to assess the degree to which the covariates determine the outcome of interest, while measures of predictive accuracy are used to assess the accuracy of the predictions based on the covariates and the regression model. We give a detailed and general introduction to the two measures and the estimation procedures. The framework we set up allows for a study of the effect of misspecification on the quantities estimated. We also introduce a generalization to survival analysis.

  5. Division Quilts: A Measurement Model

    Science.gov (United States)

    Pratt, Sarah S.; Lupton, Tina M.; Richardson, Kerri

    2015-01-01

    As teachers seek activities to assist students in understanding division as more than just the algorithm, they find many examples of division as fair sharing. However, teachers have few activities to engage students in a quotative (measurement) model of division. Efraim Fischbein and his colleagues (1985) defined two types of whole-number…

  6. Predictive modelling of ferroelectric tunnel junctions

    Science.gov (United States)

    Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.

    2016-05-01

    Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.

  7. Simple predictions from multifield inflationary models.

    Science.gov (United States)

    Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C

    2014-04-25

    We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N = 100 fields, 95% of our Monte Carlo samples fall in the ranges n_s ∈ (0.9455, 0.9534), α ∈ (-9.741, -7.047)×10^-4, r ∈ (0.1445, 0.1449), and r_iso ∈ (0.02137, 3.510)×10^-3 for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.

  8. Use of Information Measures and Their Approximations to Detect Predictive Gene-Gene Interaction

    Directory of Open Access Journals (Sweden)

    Jan Mielniczuk

    2017-01-01

    We reconsider the properties and relationships of the interaction information and its modified versions in the context of detecting the interaction of two SNPs for the prediction of a binary outcome when the interaction information is positive. This property is called predictive interaction, and we state some new sufficient conditions for it to hold true. We also study chi-square approximations to these measures. It is argued that interaction information is a different, and sometimes more natural, measure of interaction than the logistic interaction parameter, especially when the SNPs are dependent. We introduce a novel measure of predictive interaction based on interaction information and its modified version. In numerical experiments, which use copulas to model dependence, we study examples in which the logistic interaction parameter is zero or close to zero and predictive interaction is detected by the new measure while remaining undetected by the likelihood ratio test.
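
    The interaction information described above can be computed directly from sample counts. A minimal sketch follows, with a toy XOR relationship standing in for a synergistic SNP pair (an illustrative assumption, not the authors' copula-based experiments):

```python
import math
from collections import Counter

def mutual_info(pairs):
    """I(A;B) in bits, estimated from a list of (a, b) samples."""
    n = len(pairs)
    pab = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

def cond_mutual_info(triples):
    """I(A;B|C) = sum over c of p(c) * I(A;B | C=c)."""
    n = len(triples)
    by_c = {}
    for a, b, c in triples:
        by_c.setdefault(c, []).append((a, b))
    return sum(len(v) / n * mutual_info(v) for v in by_c.values())

def interaction_info(triples):
    """II(A;B;C) = I(A;B|C) - I(A;B); positive => predictive interaction."""
    return cond_mutual_info(triples) - mutual_info([(a, b) for a, b, _ in triples])

# XOR example: each variable alone carries no information about the outcome,
# but jointly they determine it, so interaction information is maximal (1 bit).
xor_data = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1) for _ in range(25)]
print(round(interaction_info(xor_data), 3))  # 1.0
```

    Passing the outcome as the conditioning variable, as here, makes a positive value indicate that the pair predicts the outcome synergistically.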

  9. Multivariate prediction of odor from pig production based on in-situ measurement of odorants

    Science.gov (United States)

    Hansen, Michael J.; Jonassen, Kristoffer E. N.; Løkke, Mette Marie; Adamsen, Anders Peter S.; Feilberg, Anders

    2016-06-01

    The aim of the present study was to estimate a prediction model for odor from pig production facilities based on measurements of odorants by proton-transfer-reaction mass spectrometry (PTR-MS). Odor measurements were performed at four different pig production facilities, with and without odor abatement technologies, using a newly developed mobile odor laboratory equipped with a PTR-MS for measuring odorants and an olfactometer for measuring the odor concentration by human panelists. A total of 115 odor measurements were carried out in the mobile laboratory; simultaneously, air samples were collected in Nalophan bags and analyzed at accredited laboratories after 24 h. The dataset was divided into a calibration dataset containing 94 samples and a validation dataset containing 21 samples. The prediction model based on the measurements in the mobile laboratory was able to explain 74% of the variation in the odor concentration based on odorants, whereas the prediction models based on odor measurements with bag samples explained only 46-57%. This study is the first application of direct field olfactometry to livestock odor and emphasizes the importance of avoiding any bias from sample storage in studies of odor-odorant relationships. Application of the model to the validation dataset gave a high correlation between predicted and measured odor concentration (R2 = 0.77). Significant odorants in the prediction models include phenols and indoles. In conclusion, measurement of odorants on-site in pig production facilities is an alternative to dynamic olfactometry that can be applied for measuring odor from pig houses and the effects of odor abatement technologies.

  10. Glycated Hemoglobin Measurement and Prediction of Cardiovascular Disease

    DEFF Research Database (Denmark)

    Di Angelantonio, Emanuele; Gao, Pei; Khan, Hassan

    2014-01-01

    IMPORTANCE: The value of measuring levels of glycated hemoglobin (HbA1c) for the prediction of first cardiovascular events is uncertain. OBJECTIVE: To determine whether adding information on HbA1c values to conventional cardiovascular risk factors is associated with improvement in prediction of cardiovascular disease (CVD) risk. DESIGN, SETTING, AND PARTICIPANTS: Analysis of individual-participant data available from 73 prospective studies involving 294,998 participants without a known history of diabetes mellitus or CVD at the baseline assessment. MAIN OUTCOMES AND MEASURES: Measures of risk […] 20,840 incident fatal and nonfatal CVD outcomes (13,237 coronary heart disease and 7,603 stroke outcomes) were recorded. In analyses adjusted for several conventional cardiovascular risk factors, there was an approximately J-shaped association between HbA1c values and CVD risk. The association between HbA1c values […]

  11. Predictions of models for environmental radiological assessment

    Energy Technology Data Exchange (ETDEWEB)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa, E-mail: suelip@ird.gov.br, E-mail: dejanira@irg.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Servico de Avaliacao de Impacto Ambiental, Rio de Janeiro, RJ (Brazil); Mahler, Claudio Fernando [Coppe. Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro (UFRJ) - Programa de Engenharia Civil, RJ (Brazil)

    2011-07-01

    In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose and the risk to human beings. Although it is recognized that site-specific local data are important to improve the quality of dose assessment results, obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite the subjectivity of modelers, exposure scenarios and pathways, the codes used and general parameters. The various models available use different mathematical approaches of differing complexity, which can result in different predictions; thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for Cs-137 and Co-60, underscoring the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be compared on a common basis. The results of the intercomparison exercise are presented briefly. (author)

  12. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    Science.gov (United States)

    Edeling, W. N.; Cinnella, P.; Dwight, R. P.

    2014-10-01

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier-Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
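
    The collation step of Bayesian Model-Scenario Averaging amounts to a probability-weighted mixture of per-(model, scenario) predictive distributions. A minimal numerical sketch, using hypothetical closure-model names and made-up means and variances (the real method obtains these by Bayesian calibration against experimental data):

```python
import math

def bmsa_estimate(predictions, model_probs, scenario_probs):
    """
    Collate per-(model, scenario) predictive means/variances into one
    stochastic estimate, weighting by model and scenario probabilities.
    predictions[(m, s)] = (mean, variance) of the QoI.
    """
    weights = {(m, s): pm * ps
               for m, pm in model_probs.items()
               for s, ps in scenario_probs.items()}
    mean = sum(w * predictions[k][0] for k, w in weights.items())
    # law of total variance: within-component variance + between-component spread
    var = sum(w * (predictions[k][1] + (predictions[k][0] - mean) ** 2)
              for k, w in weights.items())
    return mean, math.sqrt(var)

# Hypothetical QoI predictions for two closure models and two calibration scenarios
preds = {("k-eps", "A"): (1.00, 0.01), ("k-eps", "B"): (1.10, 0.02),
         ("k-omega", "A"): (0.95, 0.01), ("k-omega", "B"): (1.05, 0.02)}
mu, sigma = bmsa_estimate(preds, {"k-eps": 0.6, "k-omega": 0.4},
                          {"A": 0.7, "B": 0.3})
print(round(mu, 4), round(sigma, 4))
```

    The standard deviation returned here is the "stochastic estimate" spread that the abstract compares against the measurement ground truth.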

  13. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    Energy Technology Data Exchange (ETDEWEB)

    Edeling, W.N., E-mail: W.N.Edeling@tudelft.nl [Arts et Métiers ParisTech, DynFluid laboratory, 151 Boulevard de l'Hôpital, 75013 Paris (France); Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 2, Delft (Netherlands); Cinnella, P., E-mail: P.Cinnella@ensam.eu [Arts et Métiers ParisTech, DynFluid laboratory, 151 Boulevard de l'Hôpital, 75013 Paris (France); Dwight, R.P., E-mail: R.P.Dwight@tudelft.nl [Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 2, Delft (Netherlands)

    2014-10-15

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.

  14. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule such as which parts of it form sheets, helices or coils. Spatial and other properties are described by the higher order structures. The classification task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained […]
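
    The classification idea can be sketched with per-class first-order Markov chains scored by log-likelihood. The reduced two-letter hydrophobic/polar alphabet and the toy sequences below are illustrative assumptions, not the paper's training data (real models would use labelled protein sequences over the 20-letter amino-acid alphabet):

```python
import math
from collections import defaultdict

def train_markov(sequences, alpha=1.0):
    """First-order Markov chain with Laplace smoothing over a residue alphabet."""
    counts = defaultdict(lambda: defaultdict(float))
    alphabet = set()
    for seq in sequences:
        alphabet.update(seq)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    alphabet = sorted(alphabet)
    return {a: {b: (counts[a][b] + alpha) / (sum(counts[a].values()) + alpha * len(alphabet))
                for b in alphabet} for a in alphabet}

def log_likelihood(seq, probs, floor=1e-6):
    """Sum of log transition probabilities; unseen transitions get a small floor."""
    return sum(math.log(probs.get(a, {}).get(b, floor)) for a, b in zip(seq, seq[1:]))

def classify(segment, models):
    """Assign the class (e.g. helix/sheet/coil) whose chain scores highest."""
    return max(models, key=lambda c: log_likelihood(segment, models[c]))

# Toy training data: helix-like segments are H-rich, coil-like ones P-rich.
models = {"helix": train_markov(["HHHHPHHHHP", "HHHPHHHHPH"]),
          "coil":  train_markov(["PPHPPPHPPP", "PPPHPPPPHP"])}
print(classify("HHHHPHHH", models))  # helix
```

    The directional information the abstract mentions corresponds to the fact that transition counts distinguish reading a segment forwards from reading it backwards.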

  15. A Modified Model Predictive Control Scheme

    Institute of Scientific and Technical Information of China (English)

    Xiao-Bing Hu; Wen-Hua Chen

    2005-01-01

    In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.
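
    The receding-horizon principle underlying MPC (without this paper's LMI iteration or feasibility database, which handle constraints and computation limits) can be illustrated for an unconstrained scalar linear system: at every step a finite-horizon LQ problem is solved and only the first input is applied. A minimal sketch:

```python
def mpc_gain(a, b, q, r, horizon):
    """Finite-horizon LQ backward recursion; returns first-step gain for u = -k*x."""
    p = q  # terminal cost weight
    for _ in range(horizon):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * (a - b * k)
    return k

def simulate(a, b, q, r, horizon, x0, steps):
    """Receding-horizon control: re-solve the horizon problem at every step."""
    x, traj = x0, [x0]
    for _ in range(steps):
        u = -mpc_gain(a, b, q, r, horizon) * x  # apply only the first input
        x = a * x + b * u
        traj.append(x)
    return traj

# Unstable open-loop plant (a > 1) stabilized by the receding-horizon law
traj = simulate(a=1.2, b=1.0, q=1.0, r=0.1, horizon=10, x0=5.0, steps=8)
print(round(traj[-1], 6))  # state driven toward the origin
```

    With constraints present, as in the paper, the per-step problem becomes a constrained quadratic program and the gain is no longer a fixed linear law, which is what motivates the offline stability-region and feasibility machinery.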

  16. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators controlled by an online MPC-like algorithm, and a lower level of autonomous units. […] The approach facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid.

  17. Explicit model predictive control accuracy analysis

    OpenAIRE

    Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano

    2015-01-01

    Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. The MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapping convex regions, with an affine control law associated with each region of the partition. An actual implementation of this explicit MPC in low-cost micro-controllers requires the data to be "quantized", i.e. repre…

  18. Predicting cognitive function from clinical measures of physical function and health status in older adults.

    Directory of Open Access Journals (Sweden)

    Niousha Bolandzadeh

    Current research suggests that the neuropathology of dementia, including the brain changes leading to memory impairment and cognitive decline, is evident years before the onset of the disease. Older adults with cognitive decline have reduced functional independence and quality of life, and are at greater risk of developing dementia. It is therefore important to identify biomarkers that can be easily assessed within the clinical setting and that predict cognitive decline; early recognition of cognitive decline could promote timely implementation of preventive strategies. We included 89 community-dwelling adults aged 70 years and older in our study, and collected 32 measures of physical function, health status and cognitive function at baseline. We utilized an L1-L2 regularized regression model (elastic net) to identify which of the 32 baseline measures were strongly predictive of cognitive function after one year. We built three linear regression models: (1) a model based on baseline cognitive function, (2) a model based on the variables consistently selected in every cross-validation loop, and (3) a full model based on all 32 variables. Each of these models was carefully tested with nested cross-validation. Our model with the six variables consistently selected in every cross-validation loop had a mean squared prediction error of 7.47, smaller than that of the full model (115.33) and of the model based on baseline cognitive function (7.98). Our model explained 47% of the variance in cognitive function after one year. In summary, we built a parsimonious model based on a selected set of six physical function and health status measures strongly predictive of cognitive function after one year. In addition to reducing the complexity of the model without significantly changing its performance, the model with the top variables improved the mean prediction error and R-squared. These six physical function and health status measures can be easily implemented in a clinical setting.
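
    The L1-L2 regularized regression (elastic net) used for variable selection can be sketched with plain coordinate descent. This is a minimal illustration on toy data, not the study's nested cross-validation pipeline; the penalty parameterization (lam for overall strength, alpha for the L1 share) is one common convention:

```python
def elastic_net(X, y, lam=0.1, alpha=0.5, iters=200):
    """
    Coordinate-descent elastic net (L1-L2 regularized least squares).
    Assumes columns of X are standardized. alpha=1 -> lasso, alpha=0 -> ridge.
    """
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual (feature j held out)
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                                            for k in range(p) if k != j))
                      for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n + lam * (1 - alpha)
            soft = max(abs(rho) - lam * alpha, 0.0)  # soft-thresholding (L1 part)
            beta[j] = (soft if rho > 0 else -soft) / z
    return beta

# Toy data: the outcome depends only on the first feature,
# so the L1 penalty should zero out the second coefficient.
X = [[1, 1], [1, -1], [-1, 1], [-1, -1]]
y = [2, 2, -2, -2]
beta = elastic_net(X, y, lam=0.1, alpha=0.5)
print([round(b + 0.0, 4) for b in beta])
```

    The sparsity produced by the L1 part is what yields the "variables consistently selected in every cross-validation loop" that the abstract builds its parsimonious model from.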

  19. Step Prediction During Perturbed Standing Using Center Of Pressure Measurements

    Directory of Open Access Journals (Sweden)

    Milos R. Popovic

    2007-04-01

    The development of a sensor that can measure balance during quiet standing and predict stepping response in the event of perturbation has many clinically relevant applications, including closed-loop control of a neuroprosthesis for standing. This study investigated the feasibility of an algorithm that can predict in real time when an able-bodied individual who is quietly standing will have to make a step to compensate for an external perturbation. Anterior and posterior perturbations were performed on 16 able-bodied subjects using a pulley system with a dropped weight. A linear relationship was found between the peak center of pressure (COP) velocity and the peak COP displacement caused by the perturbation. This result suggests that one can predict when a person will have to make a step based on COP velocity measurements alone. Another important feature of this finding is that the peak COP velocity occurs considerably before the peak COP displacement. As a result, one can predict whether a subject will have to make a step in response to a perturbation sufficiently ahead of the time when the subject is actually forced to make the step. The proposed instability detection algorithm will be implemented in a sensor system using insole sheets in shoes with miniaturized pressure sensors by which the COP velocity can be continuously measured. The sensor system will be integrated into a closed-loop feedback system with a neuroprosthesis for standing in the near future.
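
    The reported linear relationship between peak COP velocity and peak COP displacement suggests a simple early-warning rule: fit the line once, then flag a step whenever the displacement implied by the measured velocity exceeds a stability limit. A sketch with hypothetical calibration numbers (not the study's data):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = m*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

def predicts_step(copv_peak, m, c, displacement_limit):
    """Flag a step as soon as the peak COP velocity implies a displacement
    beyond the stability limit -- before the displacement peak itself occurs."""
    return m * copv_peak + c > displacement_limit

# Hypothetical calibration pairs: (peak COP velocity cm/s, peak displacement cm)
copv = [5, 10, 15, 20, 25]
disp = [1.1, 2.0, 3.1, 3.9, 5.0]
m, c = fit_line(copv, disp)
print(predicts_step(30, m, c, displacement_limit=5.5))  # large perturbation -> True
```

    Because the velocity peak precedes the displacement peak, this rule can fire early enough to be useful in a closed-loop neuroprosthesis, which is the point of the abstract.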

  20. Measuring the suicidal mind: implicit cognition predicts suicidal behavior.

    Science.gov (United States)

    Nock, Matthew K; Park, Jennifer M; Finn, Christine T; Deliberto, Tara L; Dour, Halina J; Banaji, Mahzarin R

    2010-04-01

    Suicide is difficult to predict and prevent because people who consider killing themselves often are unwilling or unable to report their intentions. Advances in the measurement of implicit cognition provide an opportunity to test whether automatic associations of self with death can provide a behavioral marker for suicide risk. We measured implicit associations about death/suicide in 157 people seeking treatment at a psychiatric emergency department. Results confirmed that people who have attempted suicide hold a significantly stronger implicit association between death/suicide and self than do psychiatrically distressed individuals who have not attempted suicide. Moreover, the implicit association of death/suicide with self was associated with an approximately 6-fold increase in the odds of making a suicide attempt in the next 6 months, exceeding the predictive validity of known risk factors (e.g., depression, suicide-attempt history) and both patients' and clinicians' predictions. These results provide the first evidence of a behavioral marker for suicidal behavior and suggest that measures of implicit cognition may be useful for detecting and predicting sensitive clinical behaviors that are unlikely to be reported.
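
    The roughly 6-fold increase in odds reported above can be illustrated by an odds-ratio computation on a 2x2 table. The counts below are hypothetical, chosen only to reproduce an odds ratio of 6; they are not the study's data:

```python
import math

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table [[a, b], [c, d]]:
    rows = strong vs. weak implicit association, cols = attempt vs. no attempt."""
    return (a * d) / (b * c)

def or_ci95(a, b, c, d):
    """Woolf 95% confidence interval, computed on the log-odds scale."""
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)

# Hypothetical counts: strong death/self association vs. suicide attempt at 6 months
a, b, c, d = 12, 28, 5, 70
print(round(odds_ratio(a, b, c, d), 2))  # 6.0
print(tuple(round(x, 2) for x in or_ci95(a, b, c, d)))
```

    In the study itself the odds ratio would come from a model that also adjusts for known risk factors such as depression and attempt history.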

  1. Adaptive bandwidth measurements of importance functions for speech intelligibility prediction.

    Science.gov (United States)

    Whitmal, Nathaniel A; DeRoy, Kristina

    2011-12-01

    The Articulation Index (AI) and Speech Intelligibility Index (SII) predict intelligibility scores from measurements of speech and hearing parameters. One component in the prediction is the "importance function," a weighting function that characterizes the contributions of particular spectral regions of speech to speech intelligibility. Previous work with SII predictions for hearing-impaired subjects suggests that prediction accuracy might improve if importance functions for individual subjects were available. Unfortunately, previous importance function measurements have required extensive intelligibility testing with groups of subjects, using speech processed by various fixed-bandwidth low-pass and high-pass filters. A more efficient approach appropriate to individual subjects is desired. The purpose of this study was to evaluate the feasibility of measuring importance functions for individual subjects with adaptive-bandwidth filters. In two experiments, ten subjects with normal hearing listened to vowel-consonant-vowel (VCV) nonsense words processed by low-pass and high-pass filters whose bandwidths were varied adaptively to produce specified performance levels in accordance with the transformed up-down rules of Levitt [(1971). J. Acoust. Soc. Am. 49, 467-477]. Local linear psychometric functions were fit to the resulting data and used to generate an importance function for VCV words. Results indicate that the adaptive method is reliable and efficient, and produces importance function data consistent with those of the corresponding AI/SII importance functions.
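
    The transformed up-down rules referenced above can be sketched as a 2-down/1-up staircase that adapts the filter bandwidth toward a fixed performance level. The simulated logistic listener below is an assumption for illustration; in the experiments the responses would come from a human subject:

```python
import math
import random

def staircase_2down_1up(respond, start, step, trials=60, seed=1):
    """
    Levitt's transformed 2-down/1-up rule: tighten the level (here, a filter
    bandwidth) after two consecutive correct responses, relax it after any
    error. Converges near the 70.7%-correct point of the psychometric function.
    """
    random.seed(seed)
    level, run, reversals, last_dir = start, 0, [], 0
    for _ in range(trials):
        if respond(level):
            run += 1
            if run < 2:
                continue          # need two in a row before tightening
            run, direction = 0, -1
        else:
            run, direction = 0, +1
        if last_dir and direction != last_dir:
            reversals.append(level)  # record reversal points
        last_dir = direction
        level = max(step, level + direction * step)
    if not reversals:
        return level
    return sum(reversals[-6:]) / len(reversals[-6:])  # mean of last reversals

def listener(bw_khz, midpoint=2.0, slope=2.5):
    """Assumed logistic listener: intelligibility rises with bandwidth (kHz)."""
    p = 1 / (1 + math.exp(-slope * (bw_khz - midpoint)))
    return random.random() < p

estimate = staircase_2down_1up(listener, start=4.0, step=0.25)
print(round(estimate, 2))  # near the listener's ~70.7%-correct bandwidth
```

    Running staircases that target different performance levels at different cutoff frequencies is what lets a local psychometric function, and hence an individual importance function, be assembled efficiently.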

  2. Modeling and predicting page-view dynamics on Wikipedia

    CERN Document Server

    Thij, Marijn ten; Laniado, David; Kaltenbrunner, Andreas

    2012-01-01

    The simplicity of producing and consuming online content makes it difficult to estimate how much attention Internet users will devote to any given piece of content. This work presents a general overview of temporal patterns in the access to content on a huge collaborative platform. We propose a model for predicting the popularity of promoted content, inspired by the analysis of the page-view dynamics on Wikipedia. Compared to previous studies, the observed popularity patterns are more complex; however, our model uses just a few parameters to describe them fully. The model is validated through empirical measurements.

  3. Critical conceptualism in environmental modeling and prediction.

    Science.gov (United States)

    Christakos, G

    2003-10-15

    Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.

  4. Prediction of objectively measured physical activity and sedentariness among blue-collar workers using survey questionnaires

    DEFF Research Database (Denmark)

    Gupta, Nidhi; Heiden, Marina; Mathiassen, Svend Erik;

    2016-01-01

    OBJECTIVES: We aimed at developing and evaluating statistical models predicting objectively measured occupational time spent sedentary or in physical activity from self-reported information available in large epidemiological studies and surveys. METHODS: Two hundred and fourteen blue-collar workers responded to a questionnaire containing information about personal and work-related variables available in most large epidemiological studies and surveys. Workers also wore accelerometers for 1-4 days, measuring time spent sedentary and in physical activity, defined as non-sedentary time. Least-squares linear regression models were developed, predicting objectively measured exposures from selected predictors in the questionnaire. RESULTS: A full prediction model based on age, gender, body mass index, job group, self-reported occupational physical activity (OPA), and self-reported occupational sedentary […]
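
    A least-squares prediction model of the kind described can be sketched via the normal equations. The predictor set and the exact linear toy relation below are assumptions for illustration, not the study's fitted model:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(rows, y):
    """Least-squares fit with intercept: solves the normal equations X'X b = X'y."""
    X = [[1.0] + row for row in rows]
    p = len(X[0])
    XtX = [[sum(xi[a] * xi[b] for xi in X) for b in range(p)] for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(p)]
    return solve(XtX, Xty)

# Hypothetical predictors [age, self-reported OPA score] -> sedentary hours/day,
# generated from an exact linear relation so OLS recovers it precisely.
rows = [[25, 3], [35, 2], [45, 4], [55, 1], [30, 3], [50, 2]]
y = [r[0] * 0.05 + r[1] * (-0.5) + 4.0 for r in rows]
coef = ols(rows, y)
print([round(cf, 3) for cf in coef])  # [4.0, 0.05, -0.5]
```

    On real questionnaire data the fit would of course not be exact, and model quality would be judged by the explained variance, as in the study.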

  5. Lightweight ZERODUR: Validation of Mirror Performance and Mirror Modeling Predictions

    Science.gov (United States)

    Hull, Tony; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials and upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR® mirror was tested for thermal stability under static loads in steps down to 230 K. Test results are compared to model predictions based upon recently published data on ZERODUR®. In addition to monitoring the mirror surface for thermal perturbations in the XRCF thermal-vacuum tests, static-load gravity deformations were measured and compared to model predictions, and the modal response (dynamic disturbance) was measured and compared to the model. We discuss the fabrication approach and optomechanical design of the ZERODUR® mirror substrate by SCHOTT and its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA's XRCF tests and model validations.

  6. Meteorological Drought Prediction Using a Multi-Model Ensemble Approach

    Science.gov (United States)

    Chen, L.; Mo, K. C.; Zhang, Q.; Huang, J.

    2013-12-01

    In the United States, drought is among the costliest natural hazards, with an annual average of 6 billion dollars in damage. Drought prediction from monthly to seasonal time scales is of critical importance to disaster mitigation, agricultural planning, and multi-purpose reservoir management. Started in December 2012, NOAA Climate Prediction Center (CPC) has been providing operational Standardized Precipitation Index (SPI) Outlooks using the National Multi-Model Ensemble (NMME) forecasts, to support CPC's monthly drought outlooks and briefing activities. The current NMME system consists of six model forecasts from U.S. and Canada modeling centers, including the CFSv2, CM2.1, GEOS-5, CCSM3.0, CanCM3, and CanCM4 models. In this study, we conduct an assessment of the meteorological drought predictability using the retrospective NMME forecasts for the period from 1982 to 2010. Before predicting SPI, monthly-mean precipitation (P) forecasts from each model were bias corrected and spatially downscaled (BCSD) to regional grids of 0.5-degree resolution over the contiguous United States based on the probability distribution functions derived from the hindcasts. The corrected P forecasts were then appended to the CPC Unified Precipitation Analysis to form a P time series for computing 3-month and 6-month SPIs. The ensemble SPI forecasts are the equally weighted mean of the six model forecasts. Two performance measures, the anomaly correlation and root-mean-square errors against the observations, are used to evaluate forecast skill. For P forecasts, errors vary among models and skill generally is low after the second month. All model P forecasts have higher skill in winter and lower skill in summer. In wintertime, BCSD improves both P and SPI forecast skill. Most improvements are over the western mountainous regions and along the Great Lake. Overall, SPI predictive skill is regionally and seasonally dependent. The six-month SPI forecasts are skillful out to four months. 
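
The SPI transformation described above maps accumulated precipitation onto standard-normal quantiles; operationally this is done by fitting a gamma distribution to the hindcast record. The sketch below uses a simpler rank-based (empirical) variant of SPI, a common non-parametric stand-in, with made-up 3-month precipitation totals; it is not the CPC procedure itself.

```python
from statistics import NormalDist

def empirical_spi(precip):
    """Empirical (rank-based) SPI: map each accumulated-precipitation total
    to a standard-normal quantile via its plotting-position probability.
    Operational SPI fits a gamma distribution to the climatology instead;
    this rank-based variant is a simplified non-parametric sketch."""
    n = len(precip)
    order = sorted(range(n), key=lambda i: precip[i])
    spi = [0.0] * n
    nd = NormalDist()
    for rank, i in enumerate(order, start=1):
        p = (rank - 0.44) / (n + 0.12)   # Gringorten plotting position
        spi[i] = nd.inv_cdf(p)
    return spi

# hypothetical 3-month precipitation totals (mm); negative SPI = drier than normal
totals = [55.0, 80.0, 20.0, 65.0, 95.0, 40.0, 70.0, 30.0]
print([round(s, 2) for s in empirical_spi(totals)])
```

The driest total receives the most negative SPI, so drought categories (e.g. SPI below -1) can be read directly off the transformed series.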

  7. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.

  8. Survival model construction guided by fit and predictive strength.

    Science.gov (United States)

    Chauvel, Cécile; O'Quigley, John

    2016-10-05

    Survival model construction can be guided by goodness-of-fit techniques as well as measures of predictive strength. Here, we aim to bring together these distinct techniques within the context of a single framework. The goal is to determine how best to characterize and code the effects of the variables, in particular time dependencies, when taken either singly or in combination with other related covariates. Simple graphical techniques can provide an immediate visual indication as to the goodness-of-fit but, in cases of departure from model assumptions, will point in the direction of a more involved and richer alternative model. These techniques are intuitive. This intuition is backed up by formal theorems that underlie the process of building richer models from simpler ones. Measures of predictive strength are used in conjunction with these goodness-of-fit techniques and, again, formal theorems show that these measures can be used to help identify models closest to the unknown non-proportional hazards mechanism that we can suppose generates the observations. Illustrations from studies in breast cancer show how these tools can be of help in guiding the practical problem of efficient model construction for survival data.
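
One widely used measure of predictive strength for survival data (not necessarily the measure used in this paper) is Harrell's concordance index: the probability that, of two comparable subjects, the one with the higher predicted risk fails first. A minimal sketch with hypothetical data:

```python
def concordance_index(times, events, risk):
    """Harrell's C-index: among comparable ordered pairs (the earlier time
    must be an observed event, not a censoring), the fraction where the
    higher-risk subject fails first. Ties in risk score count as 1/2."""
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:  # pair (i, j) is comparable
                den += 1
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

# hypothetical follow-up times, event indicators (1 = death), and risk scores
times  = [2, 4, 3, 5, 1]
events = [1, 1, 0, 1, 1]
risk   = [0.9, 0.4, 0.5, 0.1, 0.8]
print(round(concordance_index(times, events, risk), 3))  # 1.0 = perfect ranking
```

A value of 0.5 corresponds to random ranking; comparing C-indices of nested models is one concrete way to judge whether a richer model adds predictive strength.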

  9. Mathematical modelling methodologies in predictive food microbiology: a SWOT analysis.

    Science.gov (United States)

    Ferrer, Jordi; Prats, Clara; López, Daniel; Vives-Rego, Josep

    2009-08-31

    Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models by seeing whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity of performing measurements on single individual cells and thanks to the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link unicellular and population levels. Every model is built in response to a particular question and with different aims. Even so, in this research we conducted a SWOT (Strength, Weaknesses, Opportunities and Threats) analysis of the different approaches (population continuous modelling and Individual-based Modelling), which we hope will be helpful for current and future
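
A classic empirical (phenomenological) primary model of the kind classified above is the modified Gompertz growth curve of Zwietering and co-workers, which describes log cell count versus time through a lag, a maximum growth rate, and an asymptote. The parameter values below are hypothetical placeholders:

```python
import math

def gompertz_log_count(t, y0, ymax, mu_max, lam):
    """Modified Gompertz primary growth model (Zwietering form):
    log10 cell count vs. time t (h), with lag time lam (h), maximum
    specific growth rate mu_max (log10 units/h) and asymptote ymax.
    An empirical primary model, not a mechanistic one."""
    A = ymax - y0
    return y0 + A * math.exp(-math.exp(mu_max * math.e * (lam - t) / A + 1))

# hypothetical parameters for a generic spoilage organism
for t in (0, 5, 10, 20, 40):
    print(t, round(gompertz_log_count(t, y0=3.0, ymax=9.0, mu_max=0.25, lam=6.0), 2))
```

Secondary models would then express mu_max and lam as functions of temperature, pH, or water activity, and a tertiary model packages the two levels into prediction software.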

  10. A predictive fitness model for influenza

    Science.gov (United States)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
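
The core prediction step described above can be sketched as a discrete-generation update: each strain's frequency is multiplied by the exponential of its fitness and renormalised. The fitness coefficients below are hypothetical illustration values, not the coefficients inferred in the paper:

```python
import math

def predict_frequencies(freqs, epitope_muts, nonepitope_muts, gamma=0.5, delta=0.3):
    """Propagate strain frequencies one season ahead under a fitness model:
    x_i(t+1) is proportional to x_i(t) * exp(f_i), where f_i rewards adaptive
    epitope changes and penalises deleterious non-epitope mutations.
    gamma and delta are hypothetical coefficients for illustration."""
    w = [x * math.exp(gamma * e - delta * d)
         for x, e, d in zip(freqs, epitope_muts, nonepitope_muts)]
    z = sum(w)
    return [wi / z for wi in w]

# three circulating strains: mutation counts relative to the ancestral strain
x_next = predict_frequencies([0.5, 0.3, 0.2],
                             epitope_muts=[0, 2, 1],
                             nonepitope_muts=[0, 3, 0])
print([round(v, 3) for v in x_next])
```

Here the strain with one clean epitope change gains frequency at the expense of the ancestor, which is the qualitative behaviour the model exploits for vaccine-strain selection.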

  11. Predictive Model of Radiative Neutrino Masses

    CERN Document Server

    Babu, K S

    2013-01-01

    We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z_4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: The hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with \\delta_{CP} = \\pi; and the effective mass in neutrinoless double beta decay lies in a narrow range, m_{\\beta \\beta} = (17.6 - 18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tan\\beta, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The non-standard neutral Higgs bosons, if t...

  12. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...

  13. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...

  14. Software Measurement and Defect Prediction with Depress Extensible Framework

    Directory of Open Access Journals (Sweden)

    Madeyski Lech

    2014-12-01

    Context: Software data collection precedes analysis which, in turn, requires data-science-related skills. Software defect prediction is hardly used in industrial projects as a quality assurance and cost reduction means. Objectives: There are many studies and several tools which help in various data analysis tasks, but there is still neither an open source tool nor a standardized approach. Results: We developed Defect Prediction for software systems (DePress), an extensible software measurement and data integration framework which can be used for prediction purposes (e.g., defect prediction, effort prediction) and software change analysis (e.g., release notes, bug statistics, commit quality). DePress is based on the KNIME project and allows building workflows in a graphical, end-user-friendly manner. Conclusions: We present the main concepts, as well as the development state, of the DePress framework. The results show that DePress can be used in open source as well as industrial project analysis.

  15. Measurement and prediction of residual stress in a bead-on-plate weld benchmark specimen

    Energy Technology Data Exchange (ETDEWEB)

    Ficquet, X.; Smith, D.J. [Department of Mechanical Engineering, University of Bristol, Queen's Building, University Walk, Bristol BS8 1TR (United Kingdom); Truman, C.E. [Department of Mechanical Engineering, University of Bristol, Queen's Building, University Walk, Bristol BS8 1TR (United Kingdom)], E-mail: c.e.truman@bristol.ac.uk; Kingston, E.J. [Veqter Ltd, University Gate East, Park Row, Bristol BS1 5UB (United Kingdom); Dennis, R.J. [Frazer-Nash Consultancy Limited, 1 Trinity Street, College Green, Bristol BS1 5TE (United Kingdom)

    2009-01-15

    This paper presents measurements and predictions of the residual stresses generated by laying a single weld bead on a flat, austenitic stainless steel plate. The residual stress field that is created is strongly three-dimensional and is considered representative of that found in a repair weld. Through-thickness measurements are made using the deep hole drilling technique, and near-surface measurements are made using incremental centre hole drilling. Measurements are compared to predictions at the same locations made using finite element analysis incorporating an advanced, non-linear kinematic hardening model. The work was conducted as part of a European round-robin exercise coordinated within the NeT network. Overall, there was broad agreement between measurements and predictions, but there were notable differences.

  16. Measuring Visual Closeness of 3-D Models

    KAUST Repository

    Gollaz Morales, Jose Alejandro

    2012-09-01

    Measuring visual closeness of 3-D models is an important issue for different problems, and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing the normal difference as a parameter in a weighted metric within Metro's sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive, since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.

  17. Two criteria for evaluating risk prediction models.

    Science.gov (United States)

    Pfeiffer, R M; Gail, M H

    2011-09-01

    We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed, PCF(q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion, the proportion needed to follow-up, PNF(p), is the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF(q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF(p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of these two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
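
Plug-in estimates of the two criteria are straightforward to compute from a validation set of risk scores and case indicators: sort by risk and walk down the list. This sketch uses hypothetical data and simple empirical estimates, not the influence-function inference developed in the paper:

```python
def pcf_pnf(risks, is_case, q=None, p=None):
    """Empirical PCF(q): share of cases captured by following the top-q
    fraction of the population ranked by risk. Empirical PNF(p): smallest
    top-risk population fraction whose follow-up captures a share p of
    the cases. Simple plug-in estimates on a validation sample."""
    order = sorted(range(len(risks)), key=lambda i: -risks[i])
    n, ncase = len(order), sum(is_case)
    captured, out_pcf, out_pnf = 0, None, None
    for k, i in enumerate(order, start=1):
        captured += is_case[i]
        if q is not None and out_pcf is None and k >= q * n:
            out_pcf = captured / ncase
        if p is not None and out_pnf is None and captured >= p * ncase:
            out_pnf = k / n
    return out_pcf, out_pnf

risks   = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
is_case = [1,   1,   0,   1,   0,   0,   0,   1,   0,   0]
print(pcf_pnf(risks, is_case, q=0.2, p=0.75))  # (PCF(0.2), PNF(0.75))
```

Plotting PCF(q) against q for all q traces out exactly the (inverse) Lorenz-type curve the paper relates these criteria to.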

  18. Comparative analysis of modified PMV models and SET models to predict human thermal sensation in naturally ventilated buildings

    DEFF Research Database (Denmark)

    Gao, Jie; Wang, Yi; Wargocki, Pawel

    2015-01-01

    In this paper, a comparative analysis was performed on the human thermal sensation estimated by modified predicted mean vote (PMV) models and modified standard effective temperature (SET) models in naturally ventilated buildings; the data were collected in a field study. These prediction models were developed on the basis of the original PMV/SET models and consider the influence of occupants' expectations and human adaptive functions, including the extended PMV/SET models and the adaptive PMV/SET models. The results showed that when the indoor air velocity ranged from 0 to 0.2 m/s and from 0.2 to 0.8 m... the difference between the measured and predicted values using the modified PMV models exceeded 25%, while the difference between the measured thermal sensation and the predicted thermal sensation using modified SET models was approximately less than 25%. It is concluded that the modified SET models can predict human...

  19. Methods for Handling Missing Variables in Risk Prediction Models

    NARCIS (Netherlands)

    Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.

    2016-01-01

    Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient

  20. Reliability of blood pressure measurement and cardiovascular risk prediction

    OpenAIRE

    van der Hoeven, N.V.

    2016-01-01

    High blood pressure is one of the leading risk factors for cardiovascular disease, but it is difficult to assess reliably because many factors can influence blood pressure, including stress, exercise, and illness. The first part of this thesis focuses on possible ways to improve the reliability of blood pressure measurement for proper cardiovascular risk prediction, both in and out of the doctor's office. We show that it is possible to obtain a reliable blood pressure without the use o...

  1. Earth-To-Space Improved Model for Rain Attenuation Prediction at Ku-Band

    Directory of Open Access Journals (Sweden)

    Mandeep S.J. Singh

    2006-01-01

    A model for predicting rain attenuation on Earth-to-space paths was developed using measurement data obtained from a tropical and equatorial region. The proposed rain attenuation model uses the complete rainfall rate cumulative distribution as input data. Significant improvements in prediction error over the existing attenuation model were obtained.

  2. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    Science.gov (United States)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-05-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model both the mean and the variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because such models better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
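
The likelihood-based evaluation argued for above can be sketched with a Gaussian predictive density: given predicted means and variances at held-out measurement locations, score the model by its average negative log predictive density. The concentration values below are illustrative, not from a real gas-sensing data set:

```python
import math

def avg_nll(meas, pred_mean, pred_var):
    """Average Gaussian negative log predictive density of held-out
    measurements: the likelihood-based quality measure that becomes
    available once a model predicts a variance, not just a mean."""
    nll = 0.0
    for y, m, v in zip(meas, pred_mean, pred_var):
        nll += 0.5 * (math.log(2 * math.pi * v) + (y - m) ** 2 / v)
    return nll / len(meas)

# illustrative held-out gas concentrations and identical mean predictions
y = [0.30, 0.80, 0.25, 0.60]
m = [0.35, 0.70, 0.20, 0.55]
overconfident = [0.001] * 4   # tiny predicted variance everywhere
calibrated    = [0.01] * 4    # variance matched to the actual errors
print(round(avg_nll(y, m, overconfident), 2), round(avg_nll(y, m, calibrated), 2))
```

With identical mean predictions, the over-confident variance model scores worse (higher NLL), which is exactly why the data likelihood can compare models where point-error metrics cannot.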

  3. Reliable Estimation of Prediction Uncertainty for Physicochemical Property Models.

    Science.gov (United States)

    Proppe, Jonny; Reiher, Markus

    2017-07-11

    One of the major challenges in computational science is to determine the uncertainty of a virtual measurement, that is, the prediction of an observable based on calculations. As highly accurate first-principles calculations are in general infeasible for most physical systems, one usually resorts to parametric property models of observables, which require calibration by incorporating reference data. The resulting predictions and their uncertainties are sensitive to systematic errors such as inconsistent reference data, parametric model assumptions, or inadequate computational methods. Here, we discuss the calibration of property models in the light of bootstrapping, a sampling method that can be employed for identifying systematic errors and for reliable estimation of the prediction uncertainty. We apply bootstrapping to assess a linear property model linking the (57)Fe Mössbauer isomer shift to the contact electron density at the iron nucleus for a diverse set of 44 molecular iron compounds. The contact electron density is calculated with 12 density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91, PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error diagnostics and reliable, locally resolved uncertainties for isomer-shift predictions. Pure and hybrid density functionals yield average prediction uncertainties of 0.06-0.08 mm s(-1) and 0.04-0.05 mm s(-1), respectively, the latter being close to the average experimental uncertainty of 0.02 mm s(-1). Furthermore, we show that both model parameters and prediction uncertainty depend significantly on the composition and number of reference data points. Accordingly, we suggest that rankings of density functionals based on performance measures (e.g., the squared coefficient of correlation, r(2), or the root-mean-square error, RMSE) should not be inferred from a single data set. This study presents the first statistically rigorous calibration analysis for theoretical M
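
The bootstrapping idea for a linear property model can be sketched as follows: resample the reference set with replacement, refit the calibration line each time, and read the prediction uncertainty at a new point off the spread of the replicate predictions. The data below are synthetic, not the Mössbauer calibration set:

```python
import random
import statistics

def fit_line(xs, ys):
    # ordinary least squares for y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def bootstrap_prediction(xs, ys, x_new, n_boot=2000, seed=1):
    """Pairwise bootstrap of a linear property model: refit on resampled
    reference data and collect the prediction at x_new; the spread of the
    replicate predictions estimates the prediction uncertainty there."""
    rng = random.Random(seed)
    n, preds = len(xs), []
    while len(preds) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        if len(set(xs[i] for i in idx)) < 2:
            continue  # skip degenerate resamples with a single x value
        a, b = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a + b * x_new)
    return statistics.mean(preds), statistics.stdev(preds)

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]          # synthetic "contact density" proxy
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2]        # synthetic "isomer shift" proxy
mean_pred, sd_pred = bootstrap_prediction(xs, ys, x_new=3.5)
print(round(mean_pred, 2), "+/-", round(sd_pred, 3))
```

Repeating this at every x of interest gives the locally resolved uncertainties the abstract refers to: the spread grows away from the bulk of the reference data.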

  4. QSAR Models for the Prediction of Plasma Protein Binding

    Directory of Open Access Journals (Sweden)

    Zeshan Amin

    2013-02-01

    Introduction: The prediction of plasma protein binding (ppb) is of paramount importance in the pharmacokinetic characterization of drugs, as it causes significant changes in the volume of distribution, clearance, and drug half-life. This study utilized Quantitative Structure-Activity Relationships (QSAR) for the prediction of plasma protein binding. Methods: Protein binding values for 794 compounds were collated from the literature. The data were partitioned into a training set of 662 compounds and an external validation set of 132 compounds. Physicochemical and molecular descriptors were calculated for each compound using the ACD Labs/logD, MOE (Chemical Computing Group), and Symyx QSAR software packages. Several data mining tools were employed for the construction of models, including stepwise regression analysis, Classification and Regression Trees (CART), boosted trees, and Random Forest. Results: Several predictive models were identified; however, one model in particular produced significantly superior prediction accuracy for the external validation set, as measured using mean absolute error and correlation coefficient. The selected model was a boosted regression tree model with a mean absolute error of 13.25 for the training set and 14.96 for the validation set. Conclusion: Plasma protein binding can be modeled using simple regression trees or multiple linear regressions with reasonable model accuracy. These interpretable models were able to identify the governing molecular factors for high ppb, which included hydrophobicity, van der Waals surface area parameters, and aromaticity. On the other hand, the more complicated ensemble method of boosted regression trees produced the most accurate ppb estimations for the external validation set.

  5. A neural network model for predicting aquifer water level elevations.

    Science.gov (United States)

    Coppola, Emery A; Rana, Anthony J; Poulton, Mary M; Szidarovszky, Ferenc; Uhl, Vincent W

    2005-01-01

    Artificial neural networks (ANNs) were developed for accurately predicting potentiometric surface elevations (monitoring well water level elevations) in a semiconfined glacial sand and gravel aquifer under variable state, pumping extraction, and climate conditions. ANNs "learn" the system behavior of interest by processing representative data patterns through a mathematical structure analogous to the human brain. In this study, the ANNs used the initial water level measurements, production well extractions, and climate conditions to predict the final water level elevations 30 d into the future at two monitoring wells. A sensitivity analysis was conducted with the ANNs that quantified the importance of the various input predictor variables on final water level elevations. Unlike traditional physical-based models, ANNs do not require explicit characterization of the physical system and related physical data. Accordingly, ANN predictions were made on the basis of more easily quantifiable, measured variables, rather than physical model input parameters and conditions. This study demonstrates that ANNs can provide both excellent prediction capability and valuable sensitivity analyses, which can result in more appropriate ground water management strategies.
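
A drastically simplified stand-in for the ANN above is a single linear unit trained by gradient descent on (initial level, pumping, precipitation) inputs; a real ANN adds hidden nonlinear units, but the train-then-predict-then-perturb workflow, including the sensitivity analysis, is the same. Data and coefficients here are synthetic:

```python
import random

def train_linear(X, y, lr=0.05, epochs=30000, seed=0):
    """Gradient-descent training of a single linear unit: predict the
    30-day-ahead water level from initial level, pumping rate and
    precipitation. A much-simplified linear stand-in for the ANN."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(len(X[0]))]
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = b + sum(wj * xj for wj, xj in zip(w, xi)) - yi
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

# synthetic records: [initial level, pumping rate, precipitation] -> final level,
# generated from final = level - 0.6*pumping + 0.4*precipitation
X = [[2.0, 0.1, 0.8], [1.0, 0.9, 0.2], [3.0, 0.4, 0.5], [1.5, 0.7, 0.9]]
y = [2.26, 0.54, 2.96, 1.44]
w, b = train_linear(X, y)
pred = b + sum(wj * xj for wj, xj in zip(w, [2.5, 0.5, 0.4]))
print(round(pred, 2))

# finite-difference sensitivity: effect of extra pumping on the prediction
bumped = b + sum(wj * xj for wj, xj in zip(w, [2.5, 0.6, 0.4]))
print(round(bumped - pred, 3))
```

The sensitivity step mirrors the paper's analysis: perturb one input, hold the rest fixed, and read off the change in the predicted water level.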

  6. Compressor Part I: Measurement and Design Modeling

    Directory of Open Access Journals (Sweden)

    Thomas W. Bein

    1999-01-01

    The method used to design the 125-ton compressor is first reviewed, and some related performance curves are predicted based on a quasi-3D method. In addition to an overall performance measurement, a series of instruments were installed on the compressor to identify where the measured performance differs from the predicted performance. The measurement techniques for providing the diagnostic flow parameters are also described briefly. Part II of this paper provides predictions of flow details in the areas of the compressor where there were differences between the measured and predicted performance.

  7. Prediction of Geological Subsurfaces Based on Gaussian Random Field Models

    Energy Technology Data Exchange (ETDEWEB)

    Abrahamsen, Petter

    1997-12-31

    During the sixties, random functions became practical tools for predicting ore reserves with associated precision measures in the mining industry. This was the start of the geostatistical methods called kriging. These methods are used, for example, in petroleum exploration. This thesis reviews the possibilities for using Gaussian random functions in modelling of geological subsurfaces. It develops methods for including many sources of information and observations for precise prediction of the depth of geological subsurfaces. The simple properties of Gaussian distributions make it possible to calculate optimal predictors in the mean square sense. This is done in a discussion of kriging predictors. These predictors are then extended to deal with several subsurfaces simultaneously. It is shown how additional velocity observations can be used to improve predictions. The use of gradient data and even higher order derivatives are also considered and gradient data are used in an example. 130 refs., 44 figs., 12 tabs.
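
The optimal mean-square predictor discussed above can be sketched for the simplest case, simple kriging with a known mean: the depth estimate at a new location is the mean plus a covariance-weighted combination of the residuals at the observed locations. Covariance parameters and data points below are hypothetical:

```python
import math

def cov(h, sill=1.0, rng=10.0):
    # exponential covariance model (hypothetical sill and range)
    return sill * math.exp(-h / rng)

def solve(A, b):
    # Gaussian elimination with partial pivoting, fine for small systems
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def simple_kriging(pts, vals, mean, target):
    """Simple kriging: the best linear unbiased predictor of a Gaussian
    random function with known mean m, z*(x0) = m + sum_i w_i (z_i - m),
    with weights from the system C w = c0."""
    C = [[cov(math.dist(p, q)) for q in pts] for p in pts]
    c0 = [cov(math.dist(p, target)) for p in pts]
    w = solve(C, c0)
    return mean + sum(wi * (v - mean) for wi, v in zip(w, vals))

# hypothetical subsurface-depth observations (km coordinates, depth residual units)
pts = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
vals = [1.2, 0.8, 1.0]
print(round(simple_kriging(pts, vals, mean=1.0, target=(2.0, 2.0)), 3))
```

With no nugget effect the predictor interpolates exactly at the data points; the thesis's extensions (several surfaces, velocity and gradient data) enlarge the same linear system rather than change its structure.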

  8. Estimating the magnitude of prediction uncertainties for the APLE model

    Science.gov (United States)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  9. Time series analysis as input for clinical predictive modeling: Modeling cardiac arrest in a pediatric ICU

    Directory of Open Access Journals (Sweden)

    Kennedy Curtis E

    2011-10-01

    Background: Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units, though, where the risk of an arrest is 10 times higher than in standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods: We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results: Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: (1) selecting candidate variables; (2) specifying measurement parameters; (3) defining data format; (4) defining time window duration and resolution; (5) calculating latent variables for candidate variables not directly measured; (6) calculating time series features as latent variables; (7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; (8)
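
The feature-extraction step in the template above (time series features as latent variables) can be sketched by windowing a raw vital-sign trace and computing per-window summaries such as mean, slope, and variance. The heart-rate values and feature choices here are illustrative, not from the study's data:

```python
from statistics import mean, pvariance

def window_features(series, width):
    """Turn a raw time series into per-window latent features (mean,
    least-squares slope, variance) suitable as inputs for a downstream
    arrest-risk classifier. Non-overlapping windows for simplicity."""
    feats = []
    for start in range(0, len(series) - width + 1, width):
        w = series[start:start + width]
        xs = range(width)
        mx, my = mean(xs), mean(w)
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, w))
                 / sum((x - mx) ** 2 for x in xs))
        feats.append({"mean": my, "slope": slope, "var": pvariance(w)})
    return feats

hr = [110, 112, 115, 118, 124, 131, 139, 150]  # hypothetical heart-rate trace
for f in window_features(hr, width=4):
    print({k: round(v, 2) for k, v in f.items()})
```

A rising per-window slope captures exactly the kind of deterioration signal that a single multivariable snapshot misses.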

  10. Prediction of Catastrophes: an experimental model

    CERN Document Server

    Peters, Randall D; Pomeau, Yves

    2012-01-01

    Catastrophes of all kinds can be roughly defined as short-duration, large-amplitude events following and followed by long periods of "ripening". Major earthquakes surely belong to the class of 'catastrophic' events. Because of the space-time scales involved, an experimental approach is often difficult, not to say impossible, however desirable it might be. Described in this article is a "laboratory" setup that yields data of a type that is amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in both of two dynamical models. The first is an "abstract" model in which a time-dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for creep). Hope thus exists that similar changes in the response to ...

  11. Predictive modeling of low solubility semiconductor alloys

    Science.gov (United States)

    Rodriguez, Garrett V.; Millunchick, Joanna M.

    2016-09-01

    GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.

  12. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  13. Leptogenesis in minimal predictive seesaw models

    Science.gov (United States)

    Björkeroth, Fredrik; de Anda, Francisco J.; de Medeiros Varzielas, Ivo; King, Stephen F.

    2015-10-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0, 1, 1) and (1, n, n - 2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n = 3, while a Z_9 symmetry fixes the relative phase to be a ninth root of unity.
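
    The quoted Yukawa textures can be written out explicitly (a sketch of the structure described above; a, b and their relative phase η are our labels for the two proportionality constants, not notation taken from the paper):

```latex
Y_\nu \;\propto\;
\begin{pmatrix}
0 & b\,e^{i\eta} \\
a & n\,b\,e^{i\eta} \\
a & (n-2)\,b\,e^{i\eta}
\end{pmatrix},
\qquad
n = 3 \;\Rightarrow\; (1,\,n,\,n-2) = (1,\,3,\,1).
```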

  14. Application of a predictive Bayesian model to environmental accounting.

    Science.gov (United States)

    Anex, R P; Englehardt, J D

    2001-03-30

    Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporation of environmental risk management into a company's overall risk management strategy is discussed.

  15. Strains at the myotendinous junction predicted by a micromechanical model.

    Science.gov (United States)

    Sharafi, Bahar; Ames, Elizabeth G; Holmes, Jeffrey W; Blemker, Silvia S

    2011-11-10

    The goal of this work was to create a finite element micromechanical model of the myotendinous junction (MTJ) to examine how the structure and mechanics of the MTJ affect the local micro-scale strains experienced by muscle fibers. We validated the model through comparisons with histological longitudinal sections of muscles fixed in slack and stretched positions. The model predicted deformations of the A-bands within the fiber near the MTJ that were similar to those measured from the histological sections. We then used the model to predict the dependence of local fiber strains on activation and the mechanical properties of the endomysium. The model predicted that peak micro-scale strains increase with activation and as the compliance of the endomysium decreases. Analysis of the models revealed that, in passive stretch, local fiber strains are governed by the difference of the mechanical properties between the fibers and the endomysium. In active stretch, strain distributions are governed by the difference in cross-sectional area along the length of the tapered region of the fiber near the MTJ. The endomysium provides passive resistance that balances the active forces and prevents the tapered region of the fiber from undergoing excessive strain. These model predictions lead to the following hypotheses: (i) the increased likelihood of injury during active lengthening of muscle fibers may be due to the increase in peak strain with activation and (ii) endomysium may play a role in protecting fibers from injury by reducing the strains within the fiber at the MTJ. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Online Prediction Under Model Uncertainty via Dynamic Model Averaging: Application to a Cold Rolling Mill.

    Science.gov (United States)

    Raftery, Adrian E; Kárný, Miroslav; Ettler, Pavel

    2010-02-01

    We consider the problem of online prediction when it is uncertain what the best prediction model to use is. We develop a method called Dynamic Model Averaging (DMA) in which a state space model for the parameters of each model is combined with a Markov chain model for the correct model. This allows the "correct" model to vary over time. The state space and Markov chain models are both specified in terms of forgetting, leading to a highly parsimonious representation. As a special case, when the model and parameters do not change, DMA is a recursive implementation of standard Bayesian model averaging, which we call recursive model averaging. The method is applied to the problem of predicting the output strip thickness for a cold rolling mill, where the output is measured with a time delay. We found that when only a small number of physically motivated models were considered and one was clearly best, the method quickly converged to the best model, and the cost of model uncertainty was small; indeed DMA performed slightly better than the best physical model. When model uncertainty and the number of models considered were large, our method ensured that the penalty for model uncertainty was small. At the beginning of the process, when control is most difficult, we found that DMA over a large model space led to better predictions than the single best performing physically motivated model. We also applied the method to several simulated examples, and found that it recovered both constant and time-varying regression parameters and model specifications quite well.
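
    The forgetting-based recursion at the heart of DMA can be sketched as follows (a simplified update in which the Markov chain over models is approximated by flattening the probabilities, as the abstract's "forgetting" describes; function and variable names are ours):

```python
import math

def dma_update(probs, logliks, alpha=0.99):
    """One Dynamic Model Averaging step. The forgetting factor alpha
    flattens the current model probabilities (approximating the
    Markov-chain transition), then Bayes' rule reweights each model
    by how well it predicted the newest observation."""
    # Forgetting: raise to power alpha and renormalise.
    pred = [p ** alpha for p in probs]
    s = sum(pred)
    pred = [p / s for p in pred]
    # Bayes update with per-model predictive log-likelihoods.
    w = [p * math.exp(l) for p, l in zip(pred, logliks)]
    s = sum(w)
    return [x / s for x in w]
```

    With alpha = 1 and unchanging models this collapses to recursive Bayesian model averaging, the special case noted in the abstract.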

  17. A biofilm model for prediction of pollutant transformation in sewers.

    Science.gov (United States)

    Jiang, Feng; Leung, Derek Hoi-Wai; Li, Shiyu; Chen, Guang-Hao; Okabe, Satoshi; van Loosdrecht, Mark C M

    2009-07-01

    This study developed a new sewer biofilm model to simulate pollutant transformation and biofilm variation in sewers under aerobic, anoxic and anaerobic conditions. The biofilm model can describe the activities of heterotrophic, autotrophic, and sulfate-reducing bacteria (SRB) in the biofilm as well as the variations in biofilm thickness, the spatial profiles of the SRB population and the biofilm density. The model can describe dynamic biofilm growth, multiple biomass evolution and competition among organic oxidation, denitrification, nitrification, sulfate reduction and sulfide oxidation in a heterogeneous biofilm growing in a sewer. The model has been extensively verified by three different approaches, including direct verification by measurement of the spatial concentration profiles of dissolved oxygen, nitrate, ammonia, and hydrogen sulfide in the sewer biofilm. The spatial distribution profile of SRB in the sewer biofilm was determined from fluorescence in situ hybridization (FISH) images taken with a confocal laser scanning microscope (CLSM) and was predicted well by the model.

  18. Aggregate driver model to enable predictable behaviour

    Science.gov (United States)

    Chowdhury, A.; Chakravarty, T.; Banerjee, T.; Balamuralidhar, P.

    2015-09-01

    The categorization of driving styles, particularly in terms of aggressiveness and skill, is an emerging area of interest under the broader theme of intelligent transportation. There are two possible discriminatory techniques that can be applied for such categorization: a micro-scale (event-based) model and a macro-scale (aggregate) model. It is believed that an aggregate model will reveal many interesting aspects of human-machine interaction; for example, we may be able to understand the propensities of individuals to carry out a given task over longer periods of time. A useful driver model may include the adaptive capability of the human driver, aggregated as the individual propensity to control speed/acceleration. Towards that objective, we carried out experiments by deploying a smartphone-based application for data collection by a group of drivers. Data were primarily collected from GPS measurements, including second-by-second position and speed, for a number of trips over a two-month period. By analysing the data set, aggregate models for individual drivers were created and their natural aggressiveness was deduced. In this paper, we present the initial results for 12 drivers. It is shown that the higher-order moments of the acceleration profile are important parameters and identifiers of journey quality. It is also observed that the kurtosis of the acceleration profile carries major information about driving style. This observation leads to two different ranking systems based on acceleration data. Such driving behaviour models can be integrated with vehicle and road models and used to generate behavioural models for real traffic scenarios.
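
    The kurtosis statistic used for the ranking can be computed from an acceleration trace with plain moment arithmetic (our sketch; the abstract does not specify the exact estimator used):

```python
def excess_kurtosis(xs):
    """Sample excess kurtosis: fourth central moment over the squared
    variance, minus 3. Heavy-tailed profiles (occasional abrupt
    acceleration or braking) give larger values."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (m2 ** 2) - 3.0
```

    A spiky trace with rare harsh events yields a much larger excess kurtosis than a smooth oscillating one, which is what makes it a usable aggressiveness identifier.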

  19. Nonlinear Model Predictive Control for Oil Reservoirs Management

    DEFF Research Database (Denmark)

    Capolei, Andrea

    … the research community is working on improving current feedback model-based optimal control technologies. The topic of this thesis is production optimization for water flooding in the secondary phase of oil recovery. We developed numerical methods for nonlinear model predictive control (NMPC) of an oil field. Further, we studied the use of robust control strategies in both open-loop, i.e. without measurement feedback, and closed-loop, i.e. with measurement feedback, configurations. This thesis has three main original contributions: the first is to improve the computationally… With this objective function we link the optimization problem in production optimization to the Markowitz portfolio optimization problem in finance or to the robust design problem in topology optimization. In this study we focus on the open-loop configuration, i.e. without measurement feedback. We demonstrate…

  20. Comparing model predictions for ecosystem-based management

    DEFF Research Database (Denmark)

    Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste

    2016-01-01

    Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects…

  1. A Neural Network Model for Prediction of Sound Quality

    DEFF Research Database (Denmark)

    Nielsen,, Lars Bramsløw

    … obtained in subjective sound quality rating experiments based on input data from an auditory model. Various types of input data and data representations from the auditory model were used as input data for the chosen network structure, which was a three-layer perceptron. This network was trained by means of… and was evaluated for two types of test set extracted from the complete data set. With a test set consisting of mixed stimuli, the prediction error was only slightly larger than the statistical error in the training data itself. Using a particular group of stimuli for the test set, there was a systematic prediction error on the test set. The overall concept proved functional, but further testing with data obtained from a new rating experiment is necessary to better assess the utility of this measure. The weights in the trained neural networks were analyzed to qualitatively interpret the relation between…

  2. Predicting plants -modeling traits as a function of environment

    Science.gov (United States)

    Franklin, Oskar

    2016-04-01

    A central problem in understanding and modeling vegetation dynamics is how to represent the variation in plant properties and function across different environments. Addressing this problem, there is a strong trend towards trait-based approaches, where vegetation properties are functions of the distributions of functional traits rather than of species. Recently there has been enormous progress in quantifying trait variability and its drivers and effects (Van Bodegom et al. 2012; Adler et al. 2014; Kunstler et al. 2015) based on wide-ranging datasets on a small number of easily measured traits, such as specific leaf area (SLA), wood density and maximum plant height. However, plant function depends on many other traits, and while the commonly measured trait data are valuable, they are not sufficient for driving predictive and mechanistic models of vegetation dynamics, especially under novel climate or management conditions. For this purpose we need a model to predict functional traits, also those not easily measured, and how they depend on the plants' environment. Here I present such a mechanistic model based on fitness concepts and focused on traits related to water and light limitation of trees, including: wood density, drought response, allocation to defense, and leaf traits. The model is able to predict observed patterns of variability in these traits in relation to growth and mortality, and their responses to a gradient of water limitation. The results demonstrate that it is possible to mechanistically predict plant traits as a function of the environment based on an eco-physiological model of plant fitness. References Adler, P.B., Salguero-Gómez, R., Compagnoni, A., Hsu, J.S., Ray-Mukherjee, J., Mbeau-Ache, C. et al. (2014). Functional traits explain variation in plant life-history strategies. Proc. Natl. Acad. Sci. U. S. A., 111, 740-745. Kunstler, G., Falster, D., Coomes, D.A., Hui, F., Kooyman, R.M., Laughlin, D.C. et al. (2015). Plant functional traits

  3. A Predictive Model of Geosynchronous Magnetopause Crossings

    CERN Document Server

    Dmitriev, A; Chao, J -K

    2013-01-01

    We have developed a model predicting whether or not the magnetopause crosses geosynchronous orbit at a given location for a given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF) and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 minutes when geosynchronous satellites of the GOES and LANL series were located in the magnetosheath (so-called MSh intervals) from 1994 to 2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz and SYM-H allows describing both the effect of magnetopause dawn-dusk asymmetry and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of t...

  4. Predictive modeling of nanomaterial exposure effects in biological systems

    Directory of Open Access Journals (Sweden)

    Liu X

    2013-09-01

    Full Text Available Xiong Liu,1 Kaizhi Tang,1 Stacey Harper,2 Bryan Harper,2 Jeffery A Steevens,3 Roger Xu1 1Intelligent Automation, Inc., Rockville, MD, USA; 2Department of Environmental and Molecular Toxicology, School of Chemical, Biological, and Environmental Engineering, Oregon State University, Corvallis, OR, USA; 3ERDC Environmental Laboratory, Vicksburg, MS, USA Background: Predictive modeling of the biological effects of nanomaterials is critical for industry and policymakers to assess the potential hazards resulting from the application of engineered nanomaterials. Methods: We generated an experimental dataset on the toxic effects experienced by embryonic zebrafish due to exposure to nanomaterials. Several nanomaterials were studied, such as metal nanoparticles, dendrimers, metal oxides, and polymeric materials. The embryonic zebrafish metric (EZ Metric) was used as a screening-level measurement representative of adverse effects. Using the dataset, we developed a data mining approach to model the toxic endpoints and the overall biological impact of nanomaterials. Data mining techniques, such as numerical prediction, can assist analysts in developing risk assessment models for nanomaterials. Results: We found several important attributes that contribute to the 24 hours post-fertilization (hpf) mortality, such as dosage concentration, shell composition, and surface charge. These findings concur with previous studies on nanomaterial toxicity using embryonic zebrafish. We conducted case studies on modeling the overall effect/impact of nanomaterials and specific toxic endpoints such as mortality, delayed development, and morphological malformations. The results show that we can achieve high prediction accuracy for certain biological effects, such as 24 hpf mortality, 120 hpf mortality, and 120 hpf heart malformation.
The results also show that the weighting scheme for individual biological effects has a significant influence on modeling the overall impact of

  5. Regional differences in prediction models of lung function in Germany

    Directory of Open Access Journals (Sweden)

    Schäper Christoph

    2010-04-01

    Full Text Available Abstract Background Little is known about the influencing potential of specific characteristics on lung function in different populations. The aim of this analysis was to determine whether lung function determinants differ between subpopulations within Germany and whether prediction equations developed for one subpopulation are also adequate for another subpopulation. Methods Within three studies (KORA C, SHIP-I, ECRHS-I) in different areas of Germany, 4059 adults performed lung function tests. The available data consisted of forced expiratory volume in one second (FEV1), forced vital capacity (FVC) and peak expiratory flow rate (PEF). For each study, multivariate regression models were developed to predict lung function, and Bland-Altman plots were established to evaluate the agreement between predicted and measured values. Results The final regression equations for FEV1 and FVC showed adjusted r-square values between 0.65 and 0.75, and for PEF they were between 0.46 and 0.61. In all studies gender, age, height and pack-years were significant determinants, each with a similar effect size. Regarding other predictors there were some, although not statistically significant, differences between the studies. Bland-Altman plots indicated that the regression models for each individual study adequately predict medium (i.e. normal, but not extremely high or low) lung function values in the whole study population. Conclusions Simple models with gender, age and height explain a substantial part of lung function variance, whereas further determinants add less than 5% to the total explained r-squared, at least for FEV1 and FVC. Thus, for different adult subpopulations of Germany, one simple model for each lung function measure is still sufficient.

  6. Nucleosynthesis Predictions and High-Precision Deuterium Measurements

    Directory of Open Access Journals (Sweden)

    Signe Riemer-Sørensen

    2017-05-01

    Full Text Available Two new high-precision measurements of the deuterium abundance from absorbers along the line of sight to the quasar PKS1937–1009 were presented. The absorbers have lower neutral hydrogen column densities (log N(HI) ≈ 18, with N(HI) in cm^-2) than for previous high-precision measurements, boding well for further extensions of the sample due to the plenitude of low column density absorbers. The total high-precision sample now consists of 12 measurements with a weighted average deuterium abundance of D/H = (2.55 ± 0.02) × 10^-5. The sample does not favour a dipole similar to the one detected for the fine structure constant. The increased precision also calls for improved nucleosynthesis predictions. For that purpose we have updated the public AlterBBN code including new reactions, updated nuclear reaction rates, and the possibility of adding new physics such as dark matter. The standard Big Bang Nucleosynthesis prediction of D/H = (2.456 ± 0.057) × 10^-5 is consistent with the observed value within 1.7 standard deviations.
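
    A weighted average like the one quoted is the inverse-variance mean of the individual D/H measurements; a minimal sketch (the values used in the test below are illustrative, not the actual 12-measurement sample):

```python
def inverse_variance_mean(values, errors):
    """Inverse-variance weighted mean and its 1-sigma uncertainty:
    each measurement is weighted by 1/error^2."""
    weights = [1.0 / e ** 2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, wsum ** -0.5
```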

  7. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions, and thus they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by the co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  8. IMPORTANCE OF KINETIC MEASURES IN TRAJECTORY PREDICTION WITH OPTIMAL CONTROL

    Directory of Open Access Journals (Sweden)

    Ömer GÜNDOĞDU

    2001-02-01

    Full Text Available A two-dimensional sagittally symmetric human-body model was established to simulate an optimal trajectory for manual material handling tasks. Nonlinear control techniques and genetic algorithms were utilized in the optimizations to explore optimal lifting patterns. The simulation results were then compared with the experimental data. Since kinetic measures such as joint reactions and moments are vital parameters in injury determination, the importance of comparing kinetic rather than kinematic measures was emphasized.

  9. Measurements and prediction of inhaled air quality with personalized ventilation

    DEFF Research Database (Denmark)

    Cermak, Radim; Majer, M.; Melikov, Arsen Krikor

    2002-01-01

    This paper examines the performance of five different air terminal devices for personalized ventilation in relation to the quality of air inhaled by a breathing thermal manikin in a climate chamber. The personalized air was supplied either isothermally or non-isothermally (6 deg.C cooler than the room air) at flow rates ranging from less than 5 L/s up to 23 L/s. The air quality assessment was based on temperature measurements of the inhaled air and on the portion of the personalized air inhaled. The percentage of dissatisfied with the air quality was predicted. The results suggest...

  10. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and a fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and the numerical prediction model is often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than with a complex numerical forecasting model, which would occupy large computation resources, be time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  11. RFI modeling and prediction approach for SATOP applications: RFI prediction models

    Science.gov (United States)

    Nguyen, Tien M.; Tran, Hien T.; Wang, Zhonghai; Coons, Amanda; Nguyen, Charles C.; Lane, Steven A.; Pham, Khanh D.; Chen, Genshe; Wang, Gang

    2016-05-01

    This paper describes a technical approach for the development of RFI prediction models using carrier synchronization loop when calculating Bit or Carrier SNR degradation due to interferences for (i) detecting narrow-band and wideband RFI signals, and (ii) estimating and predicting the behavior of the RFI signals. The paper presents analytical and simulation models and provides both analytical and simulation results on the performance of USB (Unified S-Band) waveforms in the presence of narrow-band and wideband RFI signals. The models presented in this paper will allow the future USB command systems to detect the RFI presence, estimate the RFI characteristics and predict the RFI behavior in real-time for accurate assessment of the impacts of RFI on the command Bit Error Rate (BER) performance. The command BER degradation model presented in this paper also allows the ground system operator to estimate the optimum transmitted SNR to maintain a required command BER level in the presence of both friendly and un-friendly RFI sources.

  12. Comparison of the Beckmann model with bidirectional reflectance measurements.

    Science.gov (United States)

    Smith, T. F.; Hering, R. G.

    1973-01-01

    The Beckmann model is compared with recently reported bidirectional reflectance measurements. Comparisons revealed that monochromatic specular and bidirectional reflectance measurements are not adequately described by corresponding results evaluated from the model using mechanically acquired surface roughness parameters (rms height and rms slope). Significant improvement between measurements and predictions of the model is observed when optically acquired surface roughness parameters are used. Specular reflectance measurements for normal to intermediate polar angles of incidence are adequately represented by the model provided values of optical roughness multiplied by cosine of polar angle of incidence are less than 27 times average optical rms slope.
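
    The product mentioned above, optical roughness times the cosine of the incidence angle, is exactly what enters the classical Beckmann specular term; a sketch of that textbook expression (our illustration, not the authors' implementation; sigma and wavelength must share units):

```python
import math

def specular_attenuation(sigma, wavelength, theta_rad):
    """Classical Beckmann specular reflectance factor for a rough surface:
    rho_s / rho_0 = exp(-(4*pi*(sigma/lambda)*cos(theta))^2), where sigma
    is the rms roughness height and theta the polar angle of incidence."""
    g = (4.0 * math.pi * sigma * math.cos(theta_rad) / wavelength) ** 2
    return math.exp(-g)
```

    As the abstract's trend suggests, the factor decays rapidly with increasing optical roughness and recovers toward unity at grazing incidence, where cos(theta) is small.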

  13. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to unders

  14. STUDY OF RED TIDE PREDICTION MODEL FOR THE CHANGJIANG ESTUARY

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Based on field data from red tide water quality monitoring at the Changjiang River mouth and the Hutoudu mariculture area in Zhejiang Province (May to August 1995, and May to September 1996), this paper presents an effective model for short-term prediction of red tide in the Changjiang Estuary. The measured parameters include: depth, temperature, color, diaphaneity, density, DO, COD and nutrients (PO4-P, NO2-N, NO3-N, NH4-N). The model was checked against field-test data and compared with other related models. The model Z = SAL - 3.95 DO - 2.974 pH - 5.421 PO4-P is suitable for application to the Shengsi aquaculture area near the Changjiang Estuary.
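
    The reported discriminant can be evaluated directly from the four measured quantities (a one-line sketch; the abstract does not give the decision threshold used to declare a red tide):

```python
def red_tide_index(sal, do, ph, po4p):
    """Evaluate the paper's discriminant Z = SAL - 3.95*DO - 2.974*pH
    - 5.421*PO4-P from salinity, dissolved oxygen, pH and phosphate."""
    return sal - 3.95 * do - 2.974 * ph - 5.421 * po4p
```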

  15. Mutual information model for link prediction in heterogeneous complex networks

    Science.gov (United States)

    Shakibian, Hadi; Moghadam Charkari, Nasrollah

    2017-01-01

    Recently, a number of meta-path based similarity indices like PathSim, HeteSim, and random walk have been proposed for link prediction in heterogeneous complex networks. However, these indices suffer from two major drawbacks. Firstly, they primarily depend on the connectivity degrees of node pairs without considering the further information provided by the given meta-path. Secondly, most of them require a single, usually symmetric, meta-path to be chosen in advance. Hence, employing a set of different meta-paths is not straightforward. To tackle these problems, we propose a mutual information model for link prediction in heterogeneous complex networks. The proposed model, called the Meta-path based Mutual Information Index (MMI), introduces meta-path based link entropy to estimate the link likelihood and can be applied over a set of available meta-paths. This estimation measures the amount of information carried through the paths instead of measuring the amount of connectivity between the node pairs. The experimental results on a bibliography network show that MMI obtains high prediction accuracy compared with other popular similarity indices. PMID:28344326
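
    The idea of measuring information carried through meta-paths, rather than raw connectivity, can be illustrated with a toy self-information estimate (our simplification for illustration only, not the paper's MMI formula):

```python
import math

def path_self_information(path_ends, target):
    """Toy path-based link information: -log2 of the fraction of meta-path
    instances starting at node u (their endpoints listed in path_ends)
    that terminate at target. A smaller value means the paths carry more
    evidence for a u-target link."""
    p = path_ends.count(target) / len(path_ends)
    return -math.log2(p) if p > 0 else float("inf")
```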

  16. Prediction of children's reading skills using behavioral, functional, and structural neuroimaging measures.

    Science.gov (United States)

    Hoeft, Fumiko; Ueno, Takefumi; Reiss, Allan L; Meyler, Ann; Whitfield-Gabrieli, Susan; Glover, Gary H; Keller, Timothy A; Kobayashi, Nobuhisa; Mazaika, Paul; Jo, Booil; Just, Marcel Adam; Gabrieli, John D E

    2007-06-01

    The ability to decode letters into language sounds is essential for reading success, and accurate identification of children at high risk for decoding impairment is critical for reducing the frequency and severity of reading impairment. We examined the utility of behavioral (standardized tests) and functional and structural neuroimaging measures taken with children at the beginning of a school year for predicting their decoding ability at the end of that school year. Specific patterns of brain activation during phonological processing and morphology, as revealed by voxel-based morphometry (VBM) of gray and white matter densities, predicted later decoding ability. Further, a model combining behavioral and neuroimaging measures predicted decoding outcome significantly better than either behavioral or neuroimaging models alone. Results were validated using cross-validation methods. These findings suggest that neuroimaging methods may be useful in enhancing the early identification of children at risk for poor decoding and reading skills. Copyright (c) 2007 APA, all rights reserved.

  17. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

    Science.gov (United States)

    Minh, Vu Trieu; Katushin, Dmitri; Antonov, Maksim; Veinthal, Renno

    2017-03-01

    This paper presents statistical analyses of rock engineering properties and the measured penetration rate of a tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four statistical regression models (two linear and two nonlinear) are built to predict the ROP of the TBM. Finally, a fuzzy logic model is developed as an alternative method and compared with the four statistical regression models. Results show that the fuzzy logic model provides better estimates and can be applied to predict TBM performance. The coefficient of determination (R2) of the fuzzy logic model reaches 0.714, the highest of all models, against 0.667 for the runner-up, the multiple-variable nonlinear regression model.

  18. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. According to the fact that there are lots of settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
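
For context, the classical GM(1,1) model that the NGM(1,1,k,c) generalizes can be fitted in a few lines: accumulate the series, regress against the background value, and solve the whitenization equation. The sketch below uses illustrative data, not the paper's case studies; note how closely it reproduces a homogeneous exponential sequence, the "white exponential law" property the NGM extends to nonhomogeneous sequences.

```python
import math

def gm11_fit(x0):
    """Least-squares fit of the classical GM(1,1) grey model: accumulate the
    series, then regress x0(k) on the background value
    z1(k) = (x1(k) + x1(k-1)) / 2."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    m = len(z)
    szz = sum(zi * zi for zi in z)
    sz = sum(z)
    sy = sum(y)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det    # development coefficient
    b = (szz * sy - sz * szy) / det  # grey input
    return a, b

def gm11_predict(x0, a, b, k):
    """Predicted original-series value at index k (0-based) from the
    whitenization solution x1(t) = (x0(0) - b/a) * exp(-a*t) + b/a."""
    if k == 0:
        return x0[0]
    x1 = lambda t: (x0[0] - b / a) * math.exp(-a * t) + b / a
    return x1(k) - x1(k - 1)
```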

  19. Natural selection at work: an accelerated evolutionary computing approach to predictive model selection

    Directory of Open Access Journals (Sweden)

    Olcay Akman

    2010-07-01

    Full Text Available We implement genetic algorithm based predictive model building as an alternative to the traditional stepwise regression. We then employ the Information Complexity Measure (ICOMP) as a measure of model fitness instead of the commonly used R-square measure. Furthermore, we propose some modifications to the genetic algorithm to increase the overall efficiency.
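
As a rough illustration of the approach (not the authors' implementation), the sketch below evolves feature-subset bitmasks with a small genetic algorithm. An AIC-like penalized fit stands in for ICOMP, which additionally penalizes the complexity of the parameter covariance structure.

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rss(cols, X, y):
    """Residual sum of squares of an OLS fit on the chosen columns plus intercept."""
    Z = [[1.0] + [row[c] for c in cols] for row in X]
    k = len(Z[0])
    A = [[sum(z[i] * z[j] for z in Z) for j in range(k)] for i in range(k)]
    rhs = [sum(z[i] * yi for z, yi in zip(Z, y)) for i in range(k)]
    beta = solve(A, rhs)
    return sum((yi - sum(bc * zc for bc, zc in zip(beta, z))) ** 2
               for z, yi in zip(Z, y))

def ga_select(X, y, n_feat, pop=20, gens=30, seed=1):
    """Evolve feature subsets; fitness is an AIC-like score (a stand-in for ICOMP)."""
    rng = random.Random(seed)
    n = len(y)

    def score(mask):
        cols = [i for i, m in enumerate(mask) if m]
        if not cols:
            return float("inf")
        return n * math.log(rss(cols, X, y) / n + 1e-12) + 2 * (len(cols) + 1)

    popn = [[rng.random() < 0.5 for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=score)
        nxt = popn[:4]                          # elitism: keep the best four
        while len(nxt) < pop:
            p1, p2 = rng.sample(popn[:10], 2)   # parents from the fitter half
            cut = rng.randrange(1, n_feat)
            child = p1[:cut] + p2[cut:]         # one-point crossover
            if rng.random() < 0.3:              # point mutation
                i = rng.randrange(n_feat)
                child[i] = not child[i]
            nxt.append(child)
        popn = nxt
    best = min(popn, key=score)
    return [i for i, m in enumerate(best) if m]
```

On synthetic data where only two of five candidate regressors matter, the search reliably keeps the informative pair.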

  20. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
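
The Bayesian resolution of the under-determined inverse problem can be illustrated on a deliberately tiny discrete example. The candidate configurations, detector responses, and noise level below are hypothetical; real holdup work replaces the linear forward map with a transport-code simulation.

```python
import math

# Hypothetical example: three candidate holdup configurations (mass units in
# two pipe segments) and a linear detector response standing in for the
# transport-code forward model used in practice.
configs = {"A": (1.0, 0.0), "B": (0.5, 0.5), "C": (0.0, 1.0)}
response = [(4.0, 1.0),   # detector 1 sensitivity to each segment
            (1.0, 4.0)]   # detector 2

def forward(masses):
    """Predicted detector count rates for a given source configuration."""
    return [sum(r * m for r, m in zip(row, masses)) for row in response]

def posterior(measured, sigma=0.3, prior=None):
    """Bayes' theorem over the discrete configuration set with a Gaussian
    measurement model; returns the normalized plausibility of each candidate."""
    prior = prior or {k: 1.0 / len(configs) for k in configs}
    weight = {}
    for name, masses in configs.items():
        pred = forward(masses)
        loglik = sum(-(d - p) ** 2 / (2 * sigma ** 2)
                     for d, p in zip(measured, pred))
        weight[name] = prior[name] * math.exp(loglik)
    total = sum(weight.values())
    return {k: w / total for k, w in weight.items()}
```

Each candidate receives a probability rather than a single "answer", which is exactly how the approach handles multiple solutions to the under-determined problem.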

  1. COGNITIVE MODELS OF PREDICTION THE DEVELOPMENT OF A DIVERSIFIED CORPORATION

    Directory of Open Access Journals (Sweden)

    Baranovskaya T. P.

    2016-10-01

    Full Text Available The application of classical forecasting methods applied to a diversified corporation faces some certain difficulties, due to its economic nature. Unlike other businesses, diversified corporations are characterized by multidimensional arrays of data with a high degree of distortion and fragmentation of information due to the cumulative effect of the incompleteness and distortion of accounting information from the enterprises in it. Under these conditions, the applied methods and tools must have high resolution and work effectively with large databases with incomplete information, ensure the correct common comparable quantitative processing of the heterogeneous nature of the factors measured in different units. It is therefore necessary to select or develop some methods that can work with complex poorly formalized tasks. This fact substantiates the relevance of the problem of developing models, methods and tools for solving the problem of forecasting the development of diversified corporations. This is the subject of this work, which makes it relevant. The work aims to: 1 analyze the forecasting methods to justify the choice of system-cognitive analysis as one of the effective methods for the prediction of semi-structured tasks; 2 to adapt and develop the method of systemic-cognitive analysis for forecasting of dynamics of development of the corporation subject to the scenario approach; 3 to develop predictive model scenarios of changes in basic economic indicators of development of the corporation and to assess their credibility; 4 determine the analytical form of the dependence between past and future scenarios of various economic indicators; 5 develop analytical models weighing predictable scenarios, taking into account all prediction results with positive levels of similarity, to increase the level of reliability of forecasts; 6 to develop a calculation procedure to assess the strength of influence on the corporation (sensitivity of its

  2. Measurement and Prediction of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams

    Science.gov (United States)

    Davis, Brian; Turner, Travis L.; Seelecke, Stefan

    2005-01-01

    Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons within the beam structures. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical analysis/design tool. The experimental investigation is refined by a more thorough test procedure and incorporation of higher fidelity measurements such as infrared thermography and projection moire interferometry. The numerical results are produced by a recently commercialized version of the constitutive model as implemented in ABAQUS and are refined by incorporation of additional measured parameters such as geometric imperfection. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. The results demonstrate the effectiveness of SMAHC structures in controlling static and dynamic responses by adaptive stiffening. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.

  3. Infiltration under snow cover: Modeling approaches and predictive uncertainty

    Science.gov (United States)

    Meeks, Jessica; Moeck, Christian; Brunner, Philip; Hunkeler, Daniel

    2017-03-01

    Groundwater recharge from snowmelt represents a temporal redistribution of precipitation. This is extremely important because the rate and timing of snowpack drainage has substantial consequences for aquifer recharge patterns, which in turn affect groundwater availability throughout the rest of the year. The modeling methods developed to estimate drainage from a snowpack, which typically rely on temporally-dense point measurements or temporally-limited spatially-dispersed calibration data, range in complexity from the simple degree-day method to more complex and physically-based energy balance approaches. While this gamut of snowmelt models is routinely used to aid in water resource management, a comparison of the models' predictive uncertainties had not previously been made. Therefore, we established a snowmelt model calibration dataset that is both temporally dense and represents the integrated snowmelt infiltration signal for the Vers Chez le Brandt research catchment, which functions as a rather unique natural lysimeter. We then evaluated the uncertainty associated with the degree-day, a modified degree-day and energy balance snowmelt model predictions using the null-space Monte Carlo approach. All three melt models underestimate total snowpack drainage, underestimate the rate of early and midwinter drainage and overestimate spring snowmelt rates. The actual rate of snowpack water loss is more constant over the course of the entire winter season than the snowmelt models would imply, indicating that mid-winter melt can contribute as significantly as springtime snowmelt to groundwater recharge in low alpine settings. Further, actual groundwater recharge could be between 2 and 31% greater than snowmelt models suggest, over the total winter season. This study shows that snowmelt model predictions can have considerable uncertainty, which may be reduced by the inclusion of more data that allows for the use of more complex approaches such as the energy balance
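
The degree-day method at the simple end of that spectrum is a one-line temperature index model. A minimal sketch, with an illustrative melt factor that is not calibrated to the Vers Chez le Brandt site:

```python
def degree_day_melt(daily_temps_c, ddf=3.0, t_base=0.0):
    """Daily snowmelt (mm/day): melt = DDF * max(T - T_base, 0).

    DDF is the degree-day factor in mm per degC per day; 3.0 is a
    plausible but site-specific value that would normally be calibrated."""
    return [ddf * max(t - t_base, 0.0) for t in daily_temps_c]
```

Energy balance models replace this single temperature proxy with explicit radiation, turbulent-flux, and cold-content terms, which is why they need far more input data.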

  4. Real time remaining useful life prediction based on nonlinear Wiener based degradation processes with measurement errors

    Institute of Scientific and Technical Information of China (English)

    唐圣金; 郭晓松; 于传强; 周志杰; 周召发; 张邦成

    2014-01-01

    Real time remaining useful life (RUL) prediction based on condition monitoring is an essential part of condition based maintenance (CBM). Current methods for real time RUL prediction of nonlinear degradation processes do not consider measurement error, and their forecasting uncertainty is large. Therefore, an approximate analytical RUL distribution in closed form for a nonlinear Wiener based degradation process with measurement errors was proposed. The maximum likelihood estimation approach was used to estimate the unknown fixed parameters in the proposed model. When newly observed data are available, the random parameter is updated by the Bayesian method to adapt the estimation to the item's individual characteristics and reduce the uncertainty of the estimation. The simulation results show that considering measurement errors in the degradation process can significantly improve the accuracy of real time RUL prediction.
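
For the simplest member of this model family, a linear Wiener degradation process with no measurement error and fixed parameters, the RUL is the first passage time to a failure threshold and follows an inverse Gaussian law. A sketch with illustrative numbers (the paper's nonlinear, error-corrupted case is considerably more involved):

```python
import math
import random

def rul_pdf(t, x, w, mu, sigma):
    """Inverse-Gaussian density of the first passage time of
    X(t) = x + mu*t + sigma*B(t) through the threshold w (t > 0)."""
    d = w - x
    return d / (sigma * math.sqrt(2 * math.pi * t ** 3)) * math.exp(
        -(d - mu * t) ** 2 / (2 * sigma ** 2 * t))

def mean_rul(x, w, mu):
    """Mean first passage time (mean RUL) of the drifted Wiener process."""
    return (w - x) / mu

def simulate_fpt(x, w, mu, sigma, rng, dt=0.01):
    """Euler simulation of one first passage time, for cross-checking."""
    t, level = 0.0, x
    while level < w:
        level += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        t += dt
    return t
```

Monte Carlo first-passage times agree with the analytical mean, which is the kind of check the simulation study in the abstract performs on the full model.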

  5. Measurement and prediction of broadband noise from large horizontal axis wind turbine generators

    Science.gov (United States)

    Grosveld, F. W.; Shepherd, K. P.; Hubbard, H. H.

    1995-01-01

    A method is presented for predicting the broadband noise spectra of large wind turbine generators. It includes contributions from such noise sources as the inflow turbulence to the rotor, the interactions between the turbulent boundary layers on the blade surfaces with their trailing edges and the wake due to a blunt trailing edge. The method is partly empirical and is based on acoustic measurements of large wind turbines and airfoil models. Spectra are predicted for several large machines including the proposed MOD-5B. Measured data are presented for the MOD-2, the WTS-4, the MOD-OA, and the U.S. Windpower Inc. machines. Good agreement is shown between the predicted and measured far field noise spectra.

  6. Nonconvex model predictive control for commercial refrigeration

    Science.gov (United States)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system, which consists of several cooling units that share a common compressor and is used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in around five iterations or fewer. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more important, we see that the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
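
A much-reduced sketch of the receding-horizon idea: a scalar zone model with made-up coefficients, a soft temperature constraint, and brute-force enumeration standing in for the paper's sequential convex QP. Only the first input of the best plan is applied, then the horizon slides forward.

```python
import itertools

def mpc_step(T, prices, horizon=4, a=0.9, b=-0.5, load=1.0,
             levels=(0.0, 0.5, 1.0, 1.5, 2.0), t_max=5.0, penalty=100.0):
    """One receding-horizon step for the toy zone model
    T[k+1] = a*T[k] + b*u[k] + load.

    Enumerates all cooling plans over the horizon, charges energy at the
    time-varying price plus a soft penalty for exceeding t_max, and returns
    only the first input of the cheapest plan (the MPC recipe)."""
    best_cost, best_u = float("inf"), 0.0
    for plan in itertools.product(levels, repeat=horizon):
        temp, cost = T, 0.0
        for u, price in zip(plan, prices):
            temp = a * temp + b * u + load
            cost += price * u + penalty * max(0.0, temp - t_max)
        if cost < best_cost:
            best_cost, best_u = cost, plan[0]
    return best_u
```

With electricity cheap now and expensive later, the controller pre-cools; with flat prices it applies only the cooling needed to hold the constraint. This is the price-responsive behaviour the abstract highlights.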

  7. Leptogenesis in minimal predictive seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Björkeroth, Fredrik [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom); Anda, Francisco J. de [Departamento de Física, CUCEI, Universidad de Guadalajara,Guadalajara (Mexico); Varzielas, Ivo de Medeiros; King, Stephen F. [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom)

    2015-10-15

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the “atmospheric” and “solar” neutrino masses with Yukawa couplings to (ν{sub e},ν{sub μ},ν{sub τ}) proportional to (0,1,1) and (1,n,n−2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A{sub 4} vacuum alignment provides the required Yukawa structures with n=3, while a ℤ{sub 9} symmetry fixes the relative phase to be a ninth root of unity.

  8. QSPR Models for Octane Number Prediction

    Directory of Open Access Journals (Sweden)

    Jabir H. Al-Fahemi

    2014-01-01

    Full Text Available Quantitative structure-property relationship (QSPR) modelling is performed as a means to predict the octane number of hydrocarbons by correlating the property to parameters calculated from molecular structure; such parameters are molecular mass M, hydration energy EH, boiling point BP, octanol/water distribution coefficient logP, molar refractivity MR, critical pressure CP, critical volume CV, and critical temperature CT. Principal component analysis (PCA) and the multiple linear regression technique (MLR) were performed to examine the relationship between multiple variables of the above parameters and the octane number of hydrocarbons. The results of PCA explain the interrelationships between octane number and the different variables. Correlation coefficients were calculated using M.S. Excel to examine the relationship between multiple variables of the above parameters and the octane number of hydrocarbons. The data set was split into a training set of 40 hydrocarbons and a validation set of 25 hydrocarbons. The linear relationship between the selected descriptors and the octane number has a coefficient of determination (R2=0.932), statistical significance (F=53.21), and standard error (s=7.7). The obtained QSPR model was applied to the validation set of octane numbers for hydrocarbons, giving RCV2=0.942 and s=6.328.
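
The MLR step can be reproduced in miniature. The sketch below fits just two descriptors by closed-form normal equations (a toy stand-in for the paper's 8-descriptor fit done in standard statistics software), and computes the training R2 the abstract reports.

```python
def fit_mlr2(x1, x2, y):
    """Closed-form OLS for y ~ b0 + b1*x1 + b2*x2: center the data, solve the
    2x2 normal equations for the slopes, then recover the intercept."""
    n = len(y)
    mx1, mx2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    a = [u - mx1 for u in x1]
    b = [v - mx2 for v in x2]
    c = [w - my for w in y]
    s11 = sum(u * u for u in a)
    s22 = sum(v * v for v in b)
    s12 = sum(u * v for u, v in zip(a, b))
    s1y = sum(u * w for u, w in zip(a, c))
    s2y = sum(v * w for v, w in zip(b, c))
    det = s11 * s22 - s12 * s12
    b1 = (s1y * s22 - s12 * s2y) / det
    b2 = (s11 * s2y - s12 * s1y) / det
    return my - b1 * mx1 - b2 * mx2, b1, b2

def r_squared(x1, x2, y, coef):
    """Coefficient of determination of the fitted model on (x1, x2, y)."""
    b0, b1, b2 = coef
    my = sum(y) / len(y)
    ss_res = sum((yi - (b0 + b1 * u + b2 * v)) ** 2
                 for u, v, yi in zip(x1, x2, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```

Evaluating the same R2 on a held-out validation set, as the abstract does, is what guards against overfitting the training hydrocarbons.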

  9. Learning Predictive Movement Models From Fabric-Mounted Wearable Sensors.

    Science.gov (United States)

    Michael, Brendan; Howard, Matthew

    2016-12-01

    The measurement and analysis of human movement for applications in clinical diagnostics or rehabilitation is often performed in a laboratory setting using static motion capture devices. A growing interest in analyzing movement in everyday environments (such as the home) has prompted the development of "wearable sensors", with the most current wearable sensors being those embedded into clothing. A major issue however with the use of these fabric-embedded sensors is the undesired effect of fabric motion artefacts corrupting movement signals. In this paper, a nonparametric method is presented for learning body movements, viewing the undesired motion as stochastic perturbations to the sensed motion, and using orthogonal regression techniques to form predictive models of the wearer's motion that eliminate these errors in the learning process. Experiments in this paper show that standard nonparametric learning techniques underperform in this fabric motion context and that improved prediction accuracy can be made by using orthogonal regression techniques. Modelling this motion artefact problem as a stochastic learning problem shows an average 77% decrease in prediction error in a body pose task using fabric-embedded sensors, compared to a kinematic model.
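
The contrast between ordinary and orthogonal regression is easy to demonstrate: when the inputs as well as the outputs carry noise (the fabric-motion situation), the OLS slope is attenuated toward zero while the orthogonal (total least squares) slope is not. This is an illustrative simulation of that effect, not the paper's estimator.

```python
import math
import random

def ols_slope(x, y):
    """Ordinary least squares slope (errors assumed only in y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def tls_slope(x, y):
    """Orthogonal-regression (total least squares) slope, minimizing
    perpendicular distances; closed form from the scatter moments.
    Assumes comparable error variance on both axes."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

# True line y = 2x with equal Gaussian noise on both axes, mimicking sensed
# motion corrupted by fabric-motion perturbations on the inputs too.
rng = random.Random(5)
truth = [4.0 * i / 199 for i in range(200)]
x_obs = [t + rng.gauss(0, 0.3) for t in truth]
y_obs = [2.0 * t + rng.gauss(0, 0.3) for t in truth]
```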

  10. External model validation of binary clinical risk prediction models in cardiovascular and thoracic surgery.

    Science.gov (United States)

    Hickey, Graeme L; Blackstone, Eugene H

    2016-08-01

    Clinical risk-prediction models serve an important role in healthcare. They are used for clinical decision-making and measuring the performance of healthcare providers. To establish confidence in a model, external model validation is imperative. When designing such an external model validation study, thought must be given to patient selection, risk factor and outcome definitions, missing data, and the transparent reporting of the analysis. In addition, there are a number of statistical methods available for external model validation. Execution of a rigorous external validation study rests in proper study design, application of suitable statistical methods, and transparent reporting.
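
Two of the standard statistical ingredients of such a validation study, discrimination (the c-statistic) and calibration-in-the-large, are small computations. A sketch:

```python
def auc(scores, labels):
    """Concordance (c-statistic): probability that a randomly chosen event
    case is scored above a randomly chosen non-event case (ties count 1/2)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def oe_ratio(risks, labels):
    """Calibration-in-the-large: observed events divided by expected events
    (the sum of predicted risks); values near 1 indicate good calibration."""
    return sum(labels) / sum(risks)
```

A model can discriminate well yet be badly calibrated in a new population, which is one reason external validation reports both quantities.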

  11. Predictive modeling of terrestrial radiation exposure from geologic materials

    Science.gov (United States)

    Haber, Daniel A.

    Aerial gamma ray surveys are an important tool for national security, scientific, and industrial interests in determining the locations of both anthropogenic and natural sources of radioactivity. There is a relationship between radioactivity and geology, and in the past this relationship has been used to predict geology from an aerial survey. The purpose of this project is to develop a method to predict the radiologic exposure rate of the geologic materials in an area by creating a model using geologic data, images from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), geochemical data, and pre-existing low spatial resolution aerial surveys from the National Uranium Resource Evaluation (NURE) Survey. Using these data, geospatial areas homogeneous in K, U, and Th, referred to as background radiation units, are defined and the gamma ray exposure rate is predicted. The prediction is compared to data collected via detailed aerial survey by our partner National Security Technologies, LLC (NSTec), allowing for the refinement of the technique. High resolution radiation exposure rate models have been developed for two study areas in Southern Nevada that include the alluvium on the western shore of Lake Mohave, and Government Wash north of Lake Mead; both of these areas are arid with little soil moisture and vegetation. We determined that by using geologic units to define radiation background units of exposed bedrock and ASTER visualizations to subdivide radiation background units of alluvium, regions of homogeneous geochemistry can be defined, allowing the exposure rate to be predicted. Soil and rock samples have been collected at Government Wash and Lake Mohave as well as a third site near Cameron, Arizona. K, U, and Th concentrations of these samples have been determined using inductively coupled plasma mass spectrometry (ICP-MS) and laboratory counting using radiation detection equipment. In addition, many sample locations also have

  12. Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms

    Science.gov (United States)

    Mirkovic, Djordje; Stepanian, Phillip M.; Kelly, Jeffrey F.; Chilson, Phillip B.

    2016-10-01

    The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification.

  13. Predictive Models for Photovoltaic Electricity Production in Hot Weather Conditions

    Directory of Open Access Journals (Sweden)

    Jabar H. Yousif

    2017-07-01

    Full Text Available The process of finding a correct forecast equation for photovoltaic electricity production from renewable sources is an important matter, since knowing the factors affecting the increase in the proportion of renewable energy production and reducing the cost of the product has economic and scientific benefits. This paper proposes a mathematical model for forecasting energy production in photovoltaic (PV) panels based on a self-organizing feature map (SOFM) model. The proposed model is compared with other models, including the multi-layer perceptron (MLP) and support vector machine (SVM) models. Moreover, a mathematical model based on a polynomial function for fitting the desired output is proposed. Different practical measurement methods are used to validate the findings of the proposed neural and mathematical models, such as mean square error (MSE), mean absolute error (MAE), correlation (R), and coefficient of determination (R2). The proposed SOFM model achieved a final MSE of 0.0007 in the training phase and 0.0005 in the cross-validation phase. In contrast, the SVM model resulted in a small MSE value equal to 0.0058, while the MLP model achieved a final MSE of 0.026 with a correlation coefficient of 0.9989, which indicates a strong relationship between input and output variables. The proposed SOFM model closely fits the desired results based on the R2 value, which is equal to 0.9555. Finally, the comparison of MAE results for the three models shows that the SOFM model achieved the best result of 0.36156, whereas the SVM and MLP models yielded 4.53761 and 3.63927, respectively. A small MAE value indicates that the output of the SOFM model closely fits the actual results and predicts the desired output.
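
The four validation measures used in that comparison are one-line definitions; a sketch:

```python
import math

def mse(actual, pred):
    """Mean square error."""
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def mae(actual, pred):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def pearson_r(actual, pred):
    """Pearson correlation between actual and predicted values."""
    n = len(actual)
    ma, mp = sum(actual) / n, sum(pred) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, pred))
    va = sum((a - ma) ** 2 for a in actual)
    vp = sum((p - mp) ** 2 for p in pred)
    return cov / math.sqrt(va * vp)

def r_squared(actual, pred):
    """Coefficient of determination: 1 - SS_res/SS_tot (can be negative
    for a model worse than predicting the mean)."""
    ma = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, pred))
    ss_tot = sum((a - ma) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot
```

Note that R and R2 measure different things: a biased forecast can have perfect correlation yet a low, even negative, R2, which is why the abstract reports both.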

  14. Jet transport noise - A comparison of predicted and measured noise for ILS and two-segment approaches

    Science.gov (United States)

    White, K. C.; Bourquin, K. R.

    1974-01-01

    Centerline noise levels measured during standard ILS and two-segment approaches in DC-8-61 aircraft were compared with the noise predicted for these procedures using an existing noise prediction technique. The measured data are considered to be in good agreement with the predicted data. Ninety-EPNdB sideline locations were calculated from flight data obtained during two-segment approaches and were compared with predicted 90 EPNdB contours that were computed using three different models for excess ground attenuation and a contour with no correction for ground attenuation. The contour not corrected for ground attenuation was in better agreement with the measured data.

  15. Seismoelectric fluid/porous-medium interface response model and measurements

    NARCIS (Netherlands)

    Schakel, M.D.; Smeulders, D.M.J.; Slob, E.C.; Heller, H.K.J.

    2011-01-01

    Coupled seismic and electromagnetic (EM) wave effects in fluid-saturated porous media have been measured for decades. However, direct comparisons between theoretical seismoelectric wavefields and measurements are scarce. A seismoelectric full-waveform numerical model is developed, which predicts both th

  16. Predictability in models of the atmospheric circulation.

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error are. The

  17. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  18. A neural network model for short term river flow prediction

    Science.gov (United States)

    Teschl, R.; Randeu, W. L.

    2006-07-01

    This paper presents a model using rain gauge and weather radar data to predict the runoff of a small alpine catchment in Austria. The gapless spatial coverage of the radar is important to detect small convective shower cells, but managing such a huge amount of data is a demanding task for an artificial neural network. The method described here uses statistical analysis to reduce the amount of data and find an appropriate input vector. Based on this analysis, radar measurements (pixels) representing areas requiring approximately the same time to dewater are grouped.

  19. A neural network model for short term river flow prediction

    Directory of Open Access Journals (Sweden)

    R. Teschl

    2006-01-01

    Full Text Available This paper presents a model using rain gauge and weather radar data to predict the runoff of a small alpine catchment in Austria. The gapless spatial coverage of the radar is important to detect small convective shower cells, but managing such a huge amount of data is a demanding task for an artificial neural network. The method described here uses statistical analysis to reduce the amount of data and find an appropriate input vector. Based on this analysis, radar measurements (pixels) representing areas requiring approximately the same time to dewater are grouped.

  20. Purely optical navigation with model-based state prediction

    Science.gov (United States)

    Sendobry, Alexander; Graber, Thorsten; Klingauf, Uwe

    2010-10-01

    State-of-the-art Inertial Navigation Systems (INS) based on Micro-Electro-Mechanical Systems (MEMS) lack precision, especially in GPS-denied environments such as urban canyons or in purely indoor missions. The proposed Optical Navigation System (ONS) provides bias-free ego-motion estimates using triple-redundant sensor information. In combination with a model-based state prediction, our system is able to estimate the velocity, position and attitude of an arbitrary aircraft. Simulating a high-performance flow-field estimator, the algorithm can compete with conventional low-cost INS. Because measured velocities are used instead of accelerations, the system's state drift is less pronounced than that of an INS.

  1. One versus Two Breast Density Measures to Predict 5- and 10-Year Breast Cancer Risk.

    Science.gov (United States)

    Kerlikowske, Karla; Gard, Charlotte C; Sprague, Brian L; Tice, Jeffrey A; Miglioretti, Diana L

    2015-06-01

    One measure of Breast Imaging Reporting and Data System (BI-RADS) breast density improves 5-year breast cancer risk prediction, but the value of sequential measures is unknown. We determined whether two BI-RADS density measures improve the predictive accuracy of the Breast Cancer Surveillance Consortium 5-year risk model compared with one measure. We included 722,654 women of ages 35 to 74 years with two mammograms with BI-RADS density measures on average 1.8 years apart; 13,715 developed invasive breast cancer. We used Cox regression to estimate the relative hazards of breast cancer for age, race/ethnicity, family history of breast cancer, history of breast biopsy, and one or two density measures. We developed a risk prediction model by combining these estimates with 2000-2010 Surveillance, Epidemiology, and End Results incidence and 2010 vital statistics for competing risk of death. The two-measure density model had marginally greater discriminatory accuracy than the one-measure model (AUC, 0.640 vs. 0.635). Overall, 18.6% of women (134,404 of 722,654) decreased density categories; among women whose density decreased from heterogeneously or extremely dense to a lower density category and who had at least one other risk factor, 15.4% (20,741 of 134,404) had a clinically meaningful increase in 5-year risk. Thus, a second density measure modestly improves discriminatory accuracy and improves risk classification for women with risk factors and a decrease in density. A two-density model should be considered for women whose density decreases when calculating breast cancer risk. ©2015 American Association for Cancer Research.
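The AUC comparison above (0.640 vs. 0.635) can be reproduced in principle with the rank-statistic form of the AUC. The sketch below uses made-up risk scores, not the study's data, to show how a slightly higher AUC reflects slightly better separation of cases from non-cases.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen case scores higher than a
    randomly chosen control (ties count half)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = 0.0
    for p in pos:
        wins += np.sum(p > neg) + 0.5 * np.sum(p == neg)
    return wins / (len(pos) * len(neg))

# Invented 5-year risk scores for 3 cases (1) and 4 non-cases (0)
labels = [1, 1, 1, 0, 0, 0, 0]
one_measure = [0.06, 0.03, 0.02, 0.04, 0.02, 0.01, 0.01]
two_measure = [0.06, 0.04, 0.02, 0.04, 0.02, 0.01, 0.01]
```

With these numbers the two-measure scores rank the cases slightly higher, so their AUC is slightly larger, mirroring the 0.640 vs. 0.635 pattern in the abstract.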

  2. Predicting aquifer response time for application in catchment modeling.

    Science.gov (United States)

    Walker, Glen R; Gilfedder, Mat; Dawes, Warrick R; Rassam, David W

    2015-01-01

    It is well established that changes in catchment land use can lead to significant impacts on water resources. Where land-use changes increase evapotranspiration there is a resultant decrease in groundwater recharge, which in turn decreases groundwater discharge to streams. The response time of changes in groundwater discharge to a change in recharge is a key aspect of predicting impacts of land-use change on catchment water yield. Predicting these impacts across the large catchments relevant to water resource planning can require the estimation of groundwater response times from hundreds of aquifers. At this scale, detailed site-specific measured data are often absent, and available spatial data are limited. While numerical models can be applied, there is little advantage if there are no detailed data to parameterize them. Simple analytical methods are useful in this situation, as they allow the variability in groundwater response to be incorporated into catchment hydrological models, with minimal modeling overhead. This paper describes an analytical model which has been developed to capture some of the features of real, sloping aquifer systems. The derived groundwater response timescale can be used to parameterize a groundwater discharge function, allowing groundwater response to be predicted in relation to different broad catchment characteristics at a level of complexity which matches the available data. The results from the analytical model are compared to published field data and numerical model results, and provide an approach with broad application to inform water resource planning in other large, data-scarce catchments. © 2014, Commonwealth of Australia. Groundwater © 2014, National Ground Water Association.
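The abstract does not give the analytical model itself; as a hedged illustration of the response-time idea, the sketch below uses the classical linearized result for a 1-D homogeneous aquifer draining to a fixed-head stream (slowest-mode decay time tau = 4SL²/(π²T)), which differs from the paper's sloping-aquifer model. The parameter values are invented.

```python
import math

def response_time(S, T, L):
    """Slowest-mode response time (s) of a 1-D homogeneous aquifer of
    length L (m), storativity S (-) and transmissivity T (m^2/s),
    draining to a fixed-head stream: tau = 4 S L^2 / (pi^2 T).
    (Textbook linearized solution, not the paper's sloping model.)"""
    return 4.0 * S * L**2 / (math.pi**2 * T)

def discharge_fraction(t, tau):
    """Fraction of the eventual change in groundwater discharge realized
    a time t after a step change in recharge (single-mode approximation)."""
    return 1.0 - math.exp(-t / tau)

tau = response_time(S=0.05, T=5e-4, L=500.0)  # ~1.0e7 s (about 117 days)
```

A timescale like this is what the discharge function in a catchment model would be parameterized with: after one response time, about 63% of the eventual change in discharge has occurred.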

  3. A predictive standard model for heavy electron systems

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yifeng [Los Alamos National Laboratory; Curro, N J [UC DAVIS; Fisk, Z [UC DAVIS; Pines, D [UC DAVIS

    2010-01-01

    We propose a predictive standard model for heavy electron systems based on a detailed phenomenological two-fluid description of existing experimental data. It leads to a new phase diagram that replaces the Doniach picture, describes the emergent anomalous scaling behavior of the heavy electron (Kondo) liquid measured below the lattice coherence temperature, T*, seen by many different experimental probes, that marks the onset of collective hybridization, and enables one to obtain important information on quantum criticality and the superconducting/antiferromagnetic states at low temperatures. Because T* ≈ J²ρ/2, the nearest-neighbor RKKY interaction, a knowledge of the single-ion Kondo coupling, J, to the background conduction electron density of states, ρ, makes it possible to predict Kondo liquid behavior, and to estimate its maximum superconducting transition temperature in both existing and newly discovered heavy electron families.
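The relation T* ≈ J²ρ/2 quoted above is simple enough to sketch directly; the numbers below are illustrative only, not taken from the paper.

```python
def coherence_temperature(J, rho):
    """Two-fluid estimate of the lattice coherence temperature:
    T* ~ J^2 * rho / 2, with J the single-ion Kondo coupling and rho
    the conduction-electron density of states at the Fermi level.
    Units must be consistent (e.g. J in meV, rho in states/meV)."""
    return 0.5 * J**2 * rho

# Illustrative numbers only (not from the paper):
T_star = coherence_temperature(J=10.0, rho=1.0)  # -> 50.0, in the same energy units as J
```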

  4. Modeling dune response using measured and equilibrium bathymetric profiles

    Science.gov (United States)

    Fauver, Laura A.; Thompson, David M.; Sallenger, Asbury H.

    2007-01-01

    Coastal engineers typically use numerical models such as SBEACH to predict coastal change due to extreme storms. SBEACH model inputs include pre-storm profiles, wave heights and periods, and water levels. This study focuses on the sensitivity of SBEACH to the details of pre-storm bathymetry. The SBEACH model is tested with two initial conditions for bathymetry, including (1) measured bathymetry from lidar, and (2) calculated equilibrium profiles. Results show that longshore variability in the predicted erosion signal is greater over measured bathymetric profiles, due to longshore variations in initial surf zone bathymetry. Additionally, patterns in predicted erosion can be partially explained by the configuration of the inner surf zone from the shoreline to the trough, with surf zone slope accounting for 67% of the variability in predicted erosion volumes.

  5. Mixing height computation from a numerical weather prediction model

    Energy Technology Data Exchange (ETDEWEB)

    Jericevic, A. [Croatian Meteorological and Hydrological Service, Zagreb (Croatia); Grisogono, B. [Univ. of Zagreb, Zagreb (Croatia). Andrija Mohorovicic Geophysical Inst., Faculty of Science

    2004-07-01

    Dispersion models require hourly values of the mixing height, H, which indicates the existence of turbulent mixing. The aim of this study was to investigate the model's ability and characteristics in the prediction of H. The ALADIN limited-area numerical weather prediction (NWP) model for short-range 48-hour forecasts was used. The bulk Richardson number (RiB) method was applied to determine the height of the atmospheric boundary layer at the grid point nearest to Zagreb, Croatia. This location was selected because radio soundings were available there, allowing verification of the model. A critical bulk Richardson number RiBc = 0.3 was used. The modelled and measured values of H for 219 days at 12 UTC were compared, and a correlation coefficient of 0.62 was obtained. This indicates that ALADIN can be used for the calculation of H in the convective boundary layer. For the stable boundary layer (SBL), the model systematically underestimated H. Results showed that RiBc evidently increases with increasing stability. Decoupling from the surface was detected in the very stable boundary layer, a consequence of the easing of the flow that causes RiB to become very large. Verification of the practical usage of the RiB method for H calculations from an NWP model was performed, and the necessity of including other stability parameters (e.g., surface roughness length) was demonstrated. Since the ALADIN model is in operational use in many European countries, this study should help others in pre-processing NWP data for input to dispersion models. (orig.)
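The bulk Richardson number method described above can be sketched as follows. The profile values are idealized and the formula is the standard surface-based RiB definition, which may differ in detail from ALADIN's implementation.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def mixing_height(z, theta_v, u, v, ri_crit=0.3):
    """Estimate the boundary-layer height as the lowest level where the
    bulk Richardson number, computed from the surface up, first reaches
    ri_crit (0.3 in the study above).

    z       : heights above ground (m), z[0] near the surface
    theta_v : virtual potential temperature profile (K)
    u, v    : horizontal wind components (m/s)
    """
    for k in range(1, len(z)):
        shear2 = u[k]**2 + v[k]**2
        if shear2 == 0:
            continue  # avoid division by zero in calm layers
        ri_b = G * (theta_v[k] - theta_v[0]) * (z[k] - z[0]) / (theta_v[0] * shear2)
        if ri_b >= ri_crit:
            return z[k]
    return z[-1]  # RiB never reached ri_crit within the profile

# Idealized convective profile: weakly unstable below 1000 m, capped by an inversion
z = np.array([10., 200., 500., 1000., 1500.])
theta_v = np.array([300.0, 299.8, 299.9, 300.1, 303.0])
u = np.array([2., 4., 5., 6., 7.])
v = np.zeros(5)
h = mixing_height(z, theta_v, u, v)  # -> 1500.0 m, at the capping inversion
```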

  6. Multi-center MRI prediction models : Predicting sex and illness course in first episode psychosis patients

    NARCIS (Netherlands)

    Nieuwenhuis, Mireille; Schnack, Hugo G.; van Haren, Neeltje E.; Kahn, René S.; Lappin, Julia; Dazzan, Paola; Morgan, Craig; Reinders, Antje A.; Gutierrez-Tordesillas, Diana; Roiz-Santiañez, Roberto; Crespo-Facorro, Benedicto; Schaufelberger, Maristela S.; Rosa, Pedro G.; Zanetti, Marcus V.; Busatto, Geraldo F.; McGorry, Patrick D.; Velakoulis, Dennis; Pantelis, Christos; Wood, Stephen J.; Mourao-Miranda, Janaina

    2017-01-01

    Structural Magnetic Resonance Imaging (MRI) studies have attempted to use brain measures obtained at the first-episode of psychosis to predict subsequent outcome, with inconsistent results. Thus, there is a real need to validate the utility of brain measures in the prediction of outcome using large

  7. Comparison of mixed layer models predictions with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Faggian, P.; Riva, G.M. [CISE Spa, Divisione Ambiente, Segrate (Italy); Brusasca, G. [ENEL Spa, CRAM, Milano (Italy)

    1997-10-01

    The temporal evolution of the PBL vertical structure over a North Italian rural site, situated within relatively large agricultural fields on almost flat terrain, was investigated during the period 22-28 June 1993 from both an experimental and a modelling point of view. In particular, results for a sunny day (June 22) and a cloudy day (June 25) are presented in this paper. Three schemes for estimating mixing layer depth that use standard meteorological observations were compared: the Holzworth (1967), Carson (1973) and Gryning-Batchvarova (1990) models. To estimate their degree of accuracy, model outputs were analyzed against radio-sounding meteorological profiles and atmospheric stability classification criteria. In addition, the mixed layer depth predictions were compared with estimates obtained from a simple box model, whose input requires hourly measurements of air concentration and ground flux of ²²²Rn. (LN)

  8. A Predictive Model of Cell Traction Forces Based on Cell Geometry

    OpenAIRE

    Lemmon, Christopher A.; Romer, Lewis H

    2010-01-01

    Recent work has indicated that the shape and size of a cell can influence how a cell spreads, develops focal adhesions, and exerts forces on the substrate. However, it is unclear how cell shape regulates these events. Here we present a computational model that uses cell shape to predict the magnitude and direction of forces generated by cells. The predicted results are compared to experimentally measured traction forces, and show that the model can predict traction force direction, relative m...

  9. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  10. Levels of naturally occurring gamma radiation measured in British homes and their prediction in particular residences

    Energy Technology Data Exchange (ETDEWEB)

    Kendall, G.M. [University of Oxford, Cancer Epidemiology Unit, Oxford (United Kingdom); Wakeford, R. [University of Manchester, Centre for Occupational and Environmental Health, Institute of Population Health, Manchester (United Kingdom); Athanson, M. [University of Oxford, Bodleian Library, Oxford (United Kingdom); Vincent, T.J. [University of Oxford, Childhood Cancer Research Group, Oxford (United Kingdom); Carter, E.J. [University of Worcester, Earth Heritage Trust, Geological Records Centre, Henwick Grove, Worcester (United Kingdom); McColl, N.P. [Public Health England, Centre for Radiation, Chemical and Environmental Hazards, Chilton, Didcot, Oxon (United Kingdom); Little, M.P. [National Cancer Institute, DHHS, NIH, Radiation Epidemiology Branch, Division of Cancer Epidemiology and Genetics, Bethesda, MD (United States)

    2016-03-15

    Gamma radiation from natural sources (including directly ionising cosmic rays) is an important component of background radiation. In the present paper, indoor measurements of naturally occurring gamma rays that were undertaken as part of the UK Childhood Cancer Study are summarised, and it is shown that these are broadly compatible with an earlier UK National Survey. The distribution of indoor gamma-ray dose rates in Great Britain is approximately normal with mean 96 nGy/h and standard deviation 23 nGy/h. Directly ionising cosmic rays contribute about one-third of the total. The expanded dataset allows a more detailed description than previously of indoor gamma-ray exposures and in particular their geographical variation. Various strategies for predicting indoor natural background gamma-ray dose rates were explored. In the first of these, a geostatistical model was fitted, which assumes an underlying geologically determined spatial variation, superimposed on which is a Gaussian stochastic process with Matern correlation structure that models the observed tendency of dose rates in neighbouring houses to correlate. In the second approach, a number of dose-rate interpolation measures were first derived, based on averages over geologically or administratively defined areas or using distance-weighted averages of measurements at nearest-neighbour points. Linear regression was then used to derive an optimal linear combination of these interpolation measures. The predictive performances of the two models were compared via cross-validation, using a randomly selected 70 % of the data to fit the models and the remaining 30 % to test them. The mean square error (MSE) of the linear-regression model was lower than that of the Gaussian-Matern model (MSE 378 and 411, respectively). The predictive performance of the two candidate models was also evaluated via simulation; the OLS model performs significantly better than the Gaussian-Matern model. (orig.)
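The second (linear-regression) strategy above can be sketched with synthetic data: derive candidate interpolation measures, fit an OLS combination on 70% of the points, and score the out-of-sample MSE on the remaining 30%. The predictors, coefficients and noise level below are invented for illustration; they are not the survey data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's setting: each house has a few
# interpolation measures (area averages, nearest-neighbour averages, ...)
# serving as predictors of its indoor gamma dose rate (nGy/h).
n = 1000
X = rng.normal(size=(n, 3))
true_w = np.array([15.0, 5.0, 2.0])
y = 96.0 + X @ true_w + rng.normal(scale=20.0, size=n)

# 70% of the data to fit the model, the remaining 30% to test it, as in the study.
idx = rng.permutation(n)
fit, test = idx[:700], idx[700:]

A = np.column_stack([np.ones(len(fit)), X[fit]])
coef, *_ = np.linalg.lstsq(A, y[fit], rcond=None)   # OLS: optimal linear combination

A_test = np.column_stack([np.ones(len(test)), X[test]])
mse = np.mean((y[test] - A_test @ coef) ** 2)       # out-of-sample MSE
```

Comparing this MSE across candidate models (here OLS vs. a geostatistical alternative) is exactly the cross-validation criterion the abstract reports.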

  11. Distributional Analysis for Model Predictive Deferrable Load Control

    OpenAIRE

    Chen, Niangjun; Gan, Lingwen; Low, Steven H.; Wierman, Adam

    2014-01-01

    Deferrable load control is essential for handling the uncertainties associated with the increasing penetration of renewable generation. Model predictive control has emerged as an effective approach for deferrable load control, and has received considerable attention. In particular, previous work has analyzed the average-case performance of model predictive deferrable load control. However, to this point, distributional analysis of model predictive deferrable load control has been elusive. In ...

  12. Connectivity network measures predict volumetric atrophy in mild cognitive impairment.

    Science.gov (United States)

    Nir, Talia M; Jahanshad, Neda; Toga, Arthur W; Bernstein, Matt A; Jack, Clifford R; Weiner, Michael W; Thompson, Paul M

    2015-01-01

    Alzheimer's disease (AD) is characterized by cortical atrophy and disrupted anatomic connectivity, and leads to abnormal interactions between neural systems. Diffusion-weighted imaging (DWI) and graph theory can be used to evaluate major brain networks and detect signs of a breakdown in network connectivity. In a longitudinal study using both DWI and standard magnetic resonance imaging (MRI), we assessed baseline white-matter connectivity patterns in 30 subjects with mild cognitive impairment (MCI, mean age 71.8 ± 7.5 years, 18 males and 12 females) from the Alzheimer's Disease Neuroimaging Initiative. Using both standard MRI-based cortical parcellations and whole-brain tractography, we computed baseline connectivity maps from which we calculated global "small-world" architecture measures, including mean clustering coefficient and characteristic path length. We evaluated whether these baseline network measures predicted future volumetric brain atrophy in MCI subjects, who are at risk for developing AD, as determined by 3-dimensional Jacobian "expansion factor maps" between baseline and 6-month follow-up anatomic scans. This study suggests that DWI-based network measures may be a novel predictor of AD progression.
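The two small-world measures named above, mean clustering coefficient and characteristic path length, can be computed directly from a binary connectivity matrix. The sketch below uses a toy four-node network, not tractography data.

```python
import numpy as np
from collections import deque

def clustering_coefficient(adj):
    """Mean local clustering coefficient of an undirected, unweighted graph."""
    n = len(adj)
    cc = []
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k < 2:
            cc.append(0.0)          # convention: nodes with < 2 neighbours get 0
            continue
        links = adj[np.ix_(nbrs, nbrs)].sum() / 2   # edges among the neighbours
        cc.append(links / (k * (k - 1) / 2))
    return float(np.mean(cc))

def characteristic_path_length(adj):
    """Mean shortest-path length over all connected node pairs (via BFS)."""
    n = len(adj)
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in np.flatnonzero(adj[u]):
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(d for node, d in dist.items() if node != s)
        pairs += len(dist) - 1
    return total / pairs

# Toy 4-node network: a triangle (0-1-2) with a pendant node 3
adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]])
```

For real connectomes these global measures would be computed per subject from the parcellation-based connectivity matrix and then entered as predictors of subsequent atrophy.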

  13. Predicting the Probability of Lightning Occurrence with Generalized Additive Models

    Science.gov (United States)

    Fabsic, Peter; Mayr, Georg; Simon, Thorsten; Zeileis, Achim

    2017-04-01

    This study investigates the predictability of lightning in complex terrain. The main objective is to estimate the probability of lightning occurrence in the Alpine region during summertime afternoons (12-18 UTC) at a spatial resolution of 64 × 64 km2. Lightning observations are obtained from the ALDIS lightning detection network. The probability of lightning occurrence is estimated using generalized additive models (GAM). GAMs provide a flexible modelling framework to estimate the relationship between covariates and the observations. The covariates, besides spatial and temporal effects, include numerous meteorological fields from the ECMWF ensemble system. The optimal model is chosen based on a forward selection procedure with out-of-sample mean squared error as a performance criterion. Our investigation shows that convective precipitation and mid-layer stability are the most influential meteorological predictors. Both exhibit intuitive, non-linear trends: higher values of convective precipitation indicate higher probability of lightning, and large values of the mid-layer stability measure imply low lightning potential. The performance of the model was evaluated against a climatology model containing both spatial and temporal effects. Taking the climatology model as a reference forecast, our model attains a Brier Skill Score of approximately 46%. The model's performance can be further enhanced by incorporating the information about lightning activity from the previous time step, which yields a Brier Skill Score of 48%. These scores show that the method is able to extract valuable information from the ensemble to produce reliable spatial forecasts of the lightning potential in the Alps.
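The Brier Skill Score reported above compares the model's Brier score against that of the climatology reference; a minimal sketch with invented forecasts:

```python
import numpy as np

def brier_score(p, o):
    """Mean squared error of probability forecasts p against binary
    outcomes o (1 = lightning observed)."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return np.mean((p - o) ** 2)

def brier_skill_score(p_model, p_ref, o):
    """BSS = 1 - BS_model / BS_ref; positive values mean the model
    beats the reference (here: a climatology forecast)."""
    return 1.0 - brier_score(p_model, o) / brier_score(p_ref, o)

o     = [1, 0, 0, 1, 0, 0, 0, 1]                      # lightning yes/no
clim  = [0.375] * 8                                   # climatological base rate
model = [0.8, 0.1, 0.2, 0.7, 0.1, 0.3, 0.2, 0.6]      # invented GAM-style forecasts
bss = brier_skill_score(model, clim, o)               # > 0: model adds skill
```

The abstract's 46% and 48% correspond to BSS values of 0.46 and 0.48 computed this way against the spatio-temporal climatology model.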

  14. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  15. Study of indoor radon distribution using measurements and CFD modeling.

    Science.gov (United States)

    Chauhan, Neetika; Chauhan, R P; Joshi, M; Agarwal, T K; Aggarwal, Praveen; Sahoo, B K

    2014-10-01

    Measurement and/or prediction of indoor radon (²²²Rn) concentration are important due to the impact of radon on indoor air quality and the consequent inhalation hazard. In recent times, computational fluid dynamics (CFD) based modeling has become a cost-effective replacement for experimental methods for the prediction and visualization of indoor pollutant distribution. The aim of this study is to implement CFD based modeling for studying indoor radon gas distribution. This study focuses on the comparison of experimentally measured and CFD-predicted spatial distributions of radon concentration for a model test room. The key inputs for simulation, viz. radon exhalation rate and ventilation rate, were measured as part of this study. Validation experiments were performed by measuring radon concentration at different locations of the test room using active (continuous radon monitor) and passive (pin-hole dosimeters) techniques. Model predictions were found to match the measurement results reasonably well. The validated model can be used to understand and study factors affecting indoor radon distribution in more realistic indoor environments.

  16. On hydrological model complexity, its geometrical interpretations and prediction uncertainty

    NARCIS (Netherlands)

    Arkesteijn, E.C.M.M.; Pande, S.

    2013-01-01

    Knowledge of hydrological model complexity can aid selection of an optimal prediction model out of a set of available models. Optimal model selection is formalized as selection of the least complex model out of a subset of models that have lower empirical risk. This may be considered equivalent to

  17. Prediction of absorption coefficients by pulsed laser induced photoacoustic measurements.

    Science.gov (United States)

    Priya, Mallika; Satish Rao, B S; Ray, Satadru; Mahato, K K

    2014-06-05

    In the current study, a pulsed laser induced photoacoustic spectroscopy setup was designed and developed, aimed at application in clinical diagnostics. The setup was optimized with carbon black samples in water and with various tryptophan concentrations at 281 nm excitation. The sensitivity of the setup was estimated by determining the minimum detectable concentration of tryptophan in water at the same excitation, found to be 0.035 mM. Photoacoustic experiments were also performed with various tryptophan concentrations at 281 nm excitation to predict their optical absorption coefficients and to compare the outcomes with the spectrophotometrically determined absorption coefficients for the same samples. Absorption coefficients for a few serum samples, obtained from healthy female volunteers, were also determined through photoacoustic and spectrophotometric measurements at the same excitation; the two showed good agreement, supporting the clinical potential of the technique.

  18. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In the accounting and finance domains, bankruptcy prediction is of great utility for all economic stakeholders. Accurate assessment of business failure, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we assume a probabilistic point of view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing them against Support Vector Machines (SVM) and Logistic Regression (LR). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical…

  19. Identifying Spatially Variable Sensitivity of Model Predictions and Calibrations

    Science.gov (United States)

    McKenna, S. A.; Hart, D. B.

    2005-12-01

    Stochastic inverse modeling provides an ensemble of stochastic property fields, each calibrated to measured steady-state and transient head data. These calibrated fields are used as input for predictions of other processes (e.g., contaminant transport, advective travel time). Use of the entire ensemble of fields transfers spatial uncertainty in hydraulic properties to uncertainty in the predicted performance measures. A sampling-based sensitivity coefficient is proposed to determine the sensitivity of the performance measures to the uncertain values of hydraulic properties at every cell in the model domain. The basis of this sensitivity coefficient is the Spearman rank correlation coefficient. Sampling-based sensitivity coefficients are demonstrated using a recent set of transmissivity (T) fields created through a stochastic inverse calibration process for the Culebra dolomite in the vicinity of the WIPP site in southeastern New Mexico. The stochastic inverse models were created using a unique approach to condition a geologically-based conceptual model of T to measured T values via a multiGaussian residual field. This field is calibrated to both steady-state and transient head data collected over an 11 year period. Maps of these sensitivity coefficients provide a means of identifying the locations in the study area to which both the value of the model calibration objective function and the predicted travel times to a regulatory boundary are most sensitive to the T and head values. These locations can be targeted for deployment of additional long-term monitoring resources. Comparison of areas where the calibration objective function and the travel time have high sensitivity shows that these are not necessarily coincident with regions of high uncertainty. The sampling-based sensitivity coefficients are compared to analytically derived sensitivity coefficients at the 99 pilot point locations. Results of the sensitivity mapping exercise are being used in combination
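The proposed sensitivity coefficient is the Spearman rank correlation between the uncertain property value at a cell (across the ensemble of calibrated fields) and the resulting performance measure. A minimal sketch with a synthetic ensemble follows; the monotone transmissivity-to-travel-time relation below is invented for illustration.

```python
import numpy as np

def rankdata(x):
    """Ranks of the values in x, 1-based, with ties given average ranks."""
    x = np.asarray(x, float)
    order = np.argsort(x, kind="mergesort")
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):          # average the ranks of tied values
        mask = x == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    return float(np.corrcoef(rx, ry)[0, 1])

# Ensemble of 50 calibrated fields: log-T at one grid cell vs. predicted travel time
rng = np.random.default_rng(1)
logT = rng.normal(size=50)                                    # log-transmissivity at the cell
travel_time = np.exp(-logT) + rng.normal(scale=0.1, size=50)  # higher T -> faster travel
s = spearman(logT, travel_time)                               # strongly negative sensitivity
```

Repeating this for every cell yields the sensitivity maps described above, flagging the cells where additional monitoring would most constrain the prediction.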

  20. Moving Towards Dynamic Ocean Management: How Well Do Modeled Ocean Products Predict Species Distributions?

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2016-02-01

    Full Text Available Species distribution models are now widely used in conservation and management to predict suitable habitat for protected marine species. The primary sources of dynamic habitat data have been in situ and remotely sensed oceanic variables (both are considered “measured data”), but now ocean models can provide historical estimates and forecast predictions of relevant habitat variables such as temperature, salinity, and mixed layer depth. To assess the performance of modeled ocean data in species distribution models, we present a case study for cetaceans that compares models based on output from a data assimilative implementation of the Regional Ocean Modeling System (ROMS) to those based on measured data. Specifically, we used seven years of cetacean line-transect survey data collected between 1991 and 2009 to develop predictive habitat-based models of cetacean density for 11 species in the California Current Ecosystem. Two different generalized additive models were compared: one built with a full suite of ROMS output and another built with a full suite of measured data. Model performance was assessed using the percentage of explained deviance, root mean squared error (RMSE), observed to predicted density ratios, and visual inspection of predicted and observed distributions. Predicted distribution patterns were similar for models using ROMS output and measured data, and showed good concordance between observed sightings and model predictions. Quantitative measures of predictive ability were also similar between model types, and RMSE values were almost identical. The overall demonstrated success of the ROMS-based models opens new opportunities for dynamic species management and biodiversity monitoring because ROMS output is available in near real time and can be forecast.

  1. Predicting the effects of measures to reduce eutrophication in surface water in rural areas - a case study

    NARCIS (Netherlands)

    Hendriks, R.F.A.; Kolk, van der J.W.H.

    1995-01-01

    The effectiveness of measures to reduce nutrient concentrations in surface water was predicted by a combination of a nutrient leaching model for groundwater and a nutrient simulation model for surface water. Scenarios were formulated based on several measures. Different combinations of drainage

  2. Experimental and Modeling Studies on the Prediction of Gas Hydrate Formation

    Directory of Open Access Journals (Sweden)

    Jian-Yi Liu

    2015-01-01

    Full Text Available On the basis of kinetic model analysis and kinetic observation of the hydrate formation process, a new prediction model of gas hydrate formation is proposed. Analysis of the model shows that gas hydrate formation depends not only on gas composition and free water content but also on temperature and pressure. In a comparison experiment, the predictions of the new gas hydrate crystallization kinetics method were close to the measured results, indicating that the method can accurately capture hydrate crystallization.

  3. Prediction of peptide bonding affinity: kernel methods for nonlinear modeling

    CERN Document Server

    Bergeron, Charles; Sundling, C Matthew; Krein, Michael; Katt, Bill; Sukumar, Nagamani; Breneman, Curt M; Bennett, Kristin P

    2011-01-01

    This paper presents regression models obtained from a process of blind prediction of peptide binding affinity from provided descriptors for several distinct datasets as part of the 2006 Comparative Evaluation of Prediction Algorithms (COEPRA) contest. This paper finds that kernel partial least squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS, and that the incorporation of transferable atom equivalent features improves predictive capability.

  4. Integrated hydro-bacterial modelling for predicting bathing water quality

    Science.gov (United States)

    Huang, Guoxian; Falconer, Roger A.; Lin, Binliang

    2017-03-01

    In recent years health risks associated with the non-compliance of bathing water quality have received increasing worldwide attention. However, it is particularly challenging to establish the source of any non-compliance, due to the complex nature of the sources of faecal indicator organisms, the fate and delivery processes, and the scarcity of field-measured data in many catchments and estuaries. In the current study an integrated hydro-bacterial model, linking catchment, 1-D river and 2-D estuarine/coastal models, was used to simulate the adsorption-desorption processes of faecal bacteria to and from sediment particles in river, estuarine and coastal waters, respectively. The model was then validated using hydrodynamic, sediment and faecal bacteria concentration data, measured in 2012, in the Ribble river and estuary, and along the Fylde coast, UK. Particular emphasis has been placed on the mechanism of faecal bacteria transport and decay through the deposition and resuspension of suspended sediments. The results showed that by coupling the E. coli concentration with the sediment transport processes, the accuracy of the predicted E. coli levels was improved. A series of scenario runs were then carried out to investigate the impacts of different management scenarios on the E. coli concentration levels at the coastal bathing water sites around Liverpool Bay, UK. The model results show that the level of compliance with the new EU bathing water standards can be improved significantly by extending outfalls and/or reducing urban sources by typically 50%.

  5. Using dynamical uncertainty models estimating uncertainty bounds on power plant performance prediction

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob; Mataji, B.

    2007-01-01

    Predicting the performance of large scale plants can be difficult due to model uncertainties etc., meaning that one can be almost certain that the prediction will diverge from the plant performance with time. In this paper output multiplicative uncertainty models are used as dynamical models of the prediction error. These proposed dynamical uncertainty models result in an upper and lower bound on the predicted performance of the plant. The dynamical uncertainty models are used to estimate the uncertainty of the predicted performance of a coal-fired power plant. The proposed scheme, which uses dynamical models, is applied to two different sets of measured plant data. The computed uncertainty bounds cover the measured plant output, while the nominal prediction is outside these uncertainty bounds for some samples in these examples.
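
    As a toy illustration of output-multiplicative uncertainty bounds: the nominal predictions and measurements below are made up, and a crude static bound stands in for the paper's dynamical uncertainty models:

```python
import numpy as np

# Hypothetical nominal model predictions and measured plant outputs.
y_nom = np.array([100.0, 102.0, 105.0, 103.0])
y_meas = np.array([101.0, 100.5, 107.0, 104.0])

# Output-multiplicative uncertainty: y_meas = (1 + delta) * y_nom.
delta = y_meas / y_nom - 1.0
gamma = np.max(np.abs(delta))   # static bound on |delta|; the paper
                                # uses dynamical (filtered) models instead

y_upper = y_nom * (1.0 + gamma)
y_lower = y_nom * (1.0 - gamma)
print(np.all((y_lower <= y_meas) & (y_meas <= y_upper)))  # → True
```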

  6. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, multivariate nonlinear regression (MNLR), artificial neural network (ANN), and Markov chain (MC), are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of the different models to obtain better accuracy.
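
    Of the three models compared, the Markov chain is the easiest to sketch. The one-year transition matrix below is invented for illustration, not calibrated from the paper's survey data:

```python
import numpy as np

# Hypothetical one-year transition matrix over pavement condition
# states (Good, Fair, Poor); each row sums to 1, and Poor is absorbing.
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0])   # all sections start in Good
for _ in range(5):                   # propagate five years
    state = state @ P

print(state.round(3))                # predicted share in each state
```

    Calibrating P from repeated visual condition surveys is what makes the MC approach workable with limited data.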

  7. Frequency and Angular Resolution for Measuring, Presenting and Predicting Loudspeaker Polar Data

    DEFF Research Database (Denmark)

    Staffeldt, Henrik; Seidel, Felicity

    1996-01-01

    The spherical polar loudspeaker data available today are usually measured with such a coarse resolution that only rough estimates of the performance of sound systems can be predicted by applying these data. Complex, spherical polar data with higher angular and frequency resolutions than used today … measurement principles and systems, in terms of specific levels of accuracy, are also discussed. The presented material consists of research the authors have done for the AES Standards Committee SC-04-03, working group on loudspeaker modeling and measurement, toward a goal set by that working group.

  8. Measurement and Modeling: Infectious Disease Modeling

    NARCIS (Netherlands)

    Kretzschmar, MEE

    2016-01-01

    After some historical remarks about the development of mathematical theory for infectious disease dynamics we introduce a basic mathematical model for the spread of an infection with immunity. The concepts of the model are explained and the model equations are derived from first principles. Using th

  9. Measuring psychosocial variables that predict older persons' oral health behaviour.

    Science.gov (United States)

    Kiyak, H A

    1996-12-01

    The importance of recognising psychosocial characteristics of older people that influence their oral health behaviours, and the potential success of dental procedures, is discussed. Three variables and instruments developed and tested by the author and colleagues are presented. A measure of perceived importance of oral health behaviours has been found to be a significant predictor of dental service utilisation in three studies. Self-efficacy regarding oral health has been found to be lower than self-efficacy regarding general health and medication use among older adults, especially among non-Western ethnic minorities. The significance of self-efficacy for predicting changes in caries and periodontal disease is described. Finally, a measure of expectations regarding specific dental procedures has been used with older people undergoing implant therapy. Studies with this instrument reveal that patients have concerns about the procedure far different from those focused on by dental providers. All three instruments can be used in clinical practice as a means of understanding patients' values, perceived oral health abilities, and expectations from dental care. These instruments can enhance dentist-patient rapport and improve the chances of successful dental outcomes for older patients.

  10. Prediction using patient comparison vs. modeling: a case study for mortality prediction.

    Science.gov (United States)

    Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter

    2016-08-01

    Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.
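
    Approach (2), patient-similarity prediction, reduces to a nearest-neighbour vote over outcomes of similar patients. The two-feature cohort below is synthetic and purely illustrative, not MIMIC-II data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic cohort: two features per patient, binary outcome.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def similarity_predict(x_new, k=15):
    """Outcome frequency among the k most similar patients."""
    dist = np.linalg.norm(X - x_new, axis=1)
    return y[np.argsort(dist)[:k]].mean()

print(similarity_predict(np.array([1.0, 1.0])))
```

    The scaling problem the paper reports is visible here: every prediction requires distances to the whole cohort, whereas a fitted model is evaluated in constant time.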

  11. Prediction of insulin resistance with anthropometric measures: lessons from a large adolescent population

    Directory of Open Access Journals (Sweden)

    Wedin WK

    2012-07-01

    William K Wedin,1 Lizmer Diaz-Gimenez,1 Antonio J Convit1,2; 1Department of Psychiatry, NYU School of Medicine, New York, NY, USA; 2Nathan Kline Institute, Orangeburg, NY, USA. Objective: The aim of this study was to describe the minimum number of anthropometric measures that will optimally predict insulin resistance (IR) and to characterize the utility of these measures among obese and nonobese adolescents. Research design and methods: Six anthropometric measures (selected from three categories: central adiposity, weight, and body composition) were measured in 1298 adolescents attending two New York City public high schools. Body composition was determined by bioelectric impedance analysis (BIA). The homeostatic model assessment of IR (HOMA-IR), based on fasting glucose and insulin concentrations, was used to estimate IR. Stepwise linear regression analyses were performed to predict HOMA-IR from the six selected measures, while controlling for age. Results: The stepwise regression retained both waist circumference (WC) and percentage of body fat (BF%). Notably, BMI was not retained. WC was a stronger predictor of HOMA-IR than BMI was. A regression model using solely WC performed best in the obese II group, while a model using solely BF% performed best in the lean group. Receiver operating characteristic curves showed the WC and BF% model to be more sensitive in detecting IR than BMI, but with less specificity. Conclusion: WC combined with BF% was the best predictor of HOMA-IR. This finding can be attributed partly to the ability of BF% to model HOMA-IR among leaner participants and to the ability of WC to model HOMA-IR among participants who are more obese. BMI was comparatively weak in predicting IR, suggesting that more comprehensive assessments including body composition analysis could increase detection of IR during adolescence, especially among those who are lean yet insulin-resistant. Keywords: BMI, bioelectrical impedance
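
    The HOMA-IR estimate used above has a simple closed form in conventional units; the example values are illustrative, and any cut-off for calling a value "resistant" varies by population:

```python
def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """HOMA-IR = fasting glucose [mg/dL] x fasting insulin [uU/mL] / 405."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

print(round(homa_ir(90.0, 10.0), 2))   # → 2.22
```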

  12. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.;

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization problem. Moreover, to reduce the computation time and improve the controller's performance, a fuzzy predictive filter is introduced. With the purpose of testing the developed EMPC, a simulation is carried out controlling the temperature levels of an intelligent office building (PowerFlexHouse), with and without fuzzy filtering.

  13. Advancing viral RNA structure prediction: measuring the thermodynamics of pyrimidine-rich internal loops.

    Science.gov (United States)

    Phan, Andy; Mailey, Katherine; Sakai, Jessica; Gu, Xiaobo; Schroeder, Susan J

    2017-02-17

    Accurate thermodynamic parameters improve RNA structure predictions and thus accelerate understanding of RNA function and the identification of RNA drug binding sites. Many viral RNA structures, such as internal ribosome entry sites, have internal loops and bulges that are potential drug target sites. Current models used to predict internal loops are biased towards small, symmetric purine loops, and thus poorly predict asymmetric, pyrimidine-rich loops with more than 6 nucleotides that occur frequently in viral RNA. This paper presents new thermodynamic data for 40 pyrimidine loops, many of which can form UU or protonated CC base pairs. Protonated cytosine and uracil base pairs stabilize asymmetric internal loops. Accurate prediction rules are presented that account for all thermodynamic measurements of RNA asymmetric internal loops. New loop initiation terms for loops with more than 6 nucleotides are presented that do not follow previous assumptions that increasing asymmetry destabilizes loops. Since the last 2004 update, 126 new loops with asymmetry or sizes greater than 2×2 have been measured (Mathews 2004). These new measurements significantly deepen and diversify the thermodynamic database for RNA. These results will help better predict internal loops that are larger, pyrimidine-rich, and occur within viral structures such as internal ribosome entry sites.

  14. Predicting biological system objectives de novo from internal state measurements

    Directory of Open Access Journals (Sweden)

    Maranas Costas D

    2008-01-01

    Background: Optimization theory has been applied to complex biological systems to interrogate network properties and to develop and refine metabolic engineering strategies. For example, methods are emerging to engineer cells to optimally produce byproducts of commercial value, such as bioethanol, as well as molecular compounds for disease therapy. Flux balance analysis (FBA) is an optimization framework that aids in this interrogation by generating predictions of optimal flux distributions in cellular networks. Critical features of FBA are the definition of a biologically relevant objective function (e.g., maximizing the rate of synthesis of biomass, a unit of measurement of cellular growth) and the subsequent application of linear programming (LP) to identify fluxes through a reaction network. Despite the success of FBA, a central remaining challenge is the definition of a network objective with biological meaning. Results: We present a novel method called Biological Objective Solution Search (BOSS) for the inference of an objective function of a biological system from its underlying network stoichiometry as well as experimentally measured state variables. Specifically, BOSS identifies a system objective by defining a putative stoichiometric "objective reaction", adding this reaction to the existing set of stoichiometric constraints arising from known interactions within a network, and maximizing the putative objective reaction via LP, all the while minimizing the difference between the resultant in silico flux distribution and available experimental (e.g., isotopomer) flux data. This new approach allows for discovery of objectives with previously unknown stoichiometry, thus extending the biological relevance beyond earlier methods. We verify our approach on the well-characterized central metabolic network of Saccharomyces cerevisiae. Conclusion: We illustrate how BOSS offers insight into the functional organization of biochemical networks.
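
    FBA's core LP, maximize an objective flux subject to steady-state stoichiometry S v = 0, fits in a few lines. The three-reaction toy network below is invented for illustration and requires SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake (v1): -> A, conversion (v2): A -> B,
# objective "biomass" drain (v3): B -> . Steady state: S @ v = 0.
S = np.array([[1.0, -1.0, 0.0],    # metabolite A balance
              [0.0, 1.0, -1.0]])   # metabolite B balance
c = [0.0, 0.0, -1.0]               # linprog minimizes, so negate v3
bounds = [(0.0, 10.0), (0.0, None), (0.0, None)]  # uptake capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)   # optimal flux vector: all uptake funnelled to v3 = 10
```

    BOSS adds one more putative "objective reaction" column to S and fits its stoichiometry against measured fluxes; the LP machinery stays the same.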

  15. Intelligent predictive model of ventilating capacity of imperial smelt furnace

    Institute of Scientific and Technical Information of China (English)

    唐朝晖; 胡燕瑜; 桂卫华; 吴敏

    2003-01-01

    In order to know the ventilating capacity of the imperial smelt furnace (ISF) and increase the output of lead, an intelligent modeling method based on gray theory and artificial neural networks (ANN) is proposed, in which the weight values in the integrated model can be adjusted automatically. An intelligent predictive model of the ventilating capacity of the ISF is established and analyzed by the method. The simulation results and industrial applications demonstrate that the predictive model is close to the real plant: the relative predictive error is 0.72%, which is 50% less than that of the single model, leading to a notable increase in the output of lead.
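
    The automatic weight adjustment between sub-models can be caricatured with inverse-error weighting. The predictions below are fabricated for illustration; the actual integrated model combines a grey-theory sub-model with an ANN:

```python
import numpy as np

y_true = np.array([10.0, 11.0, 12.0, 13.0])
pred_grey = np.array([10.5, 11.2, 12.8, 13.5])   # sub-model 1 output
pred_ann = np.array([9.8, 11.1, 11.9, 12.9])     # sub-model 2 output

# Weight each sub-model by the inverse of its recent mean squared error,
# so the better-performing sub-model dominates the combination.
mse = np.array([np.mean((y_true - p) ** 2) for p in (pred_grey, pred_ann)])
w = (1.0 / mse) / np.sum(1.0 / mse)
combined = w @ np.vstack([pred_grey, pred_ann])
print(w.round(3))
```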

  16. Adaptation of Predictive Models to PDA Hand-Held Devices

    Directory of Open Access Journals (Sweden)

    Lin, Edward J

    2008-01-01

    Prediction models using multiple logistic regression are appearing with increasing frequency in the medical literature. Problems associated with these models include the complexity of computations when applied in their pure form, and lack of availability at the bedside. Personal digital assistant (PDA) hand-held devices equipped with spreadsheet software offer the clinician a readily available and easily applied means of applying predictive models at the bedside. The purposes of this article are to briefly review regression as a means of creating predictive models and to describe a method of choosing and adapting logistic regression models to emergency department (ED) clinical practice.
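
    Applied at the bedside, a fitted logistic model is just one arithmetic line, which is why it fits in a spreadsheet cell. The coefficients below are hypothetical, not from any published ED rule:

```python
import math

# Hypothetical coefficients: intercept, age (years), systolic BP (mmHg).
b0, b_age, b_sbp = -6.0, 0.06, -0.01

def admit_probability(age: float, sbp: float) -> float:
    """Logistic model: p = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2)))."""
    logit = b0 + b_age * age + b_sbp * sbp
    return 1.0 / (1.0 + math.exp(-logit))

print(round(admit_probability(70, 110), 3))
```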

  17. A model to predict the power output from wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risø National Lab., Roskilde (Denmark)]

    1997-12-31

    This paper will describe a model that can predict the power output from wind farms. To give examples of input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WAsP (Wind Atlas Analysis and Application Program). The actual wind farm production is calculated using the Risø PARK model. Because of the preliminary nature of the results, they will not be given here; however, similar results from Europe will be given.
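
    The final step of such a chain, from site wind forecast to farm output, can be sketched as power-curve interpolation plus a park efficiency factor. The curve and factors below are invented; the actual model derives the site correction from WAsP matrices and wake losses from the PARK model:

```python
import numpy as np

# Hypothetical turbine power curve: wind speed [m/s] -> output [kW].
curve_speed = np.array([3.0, 5.0, 10.0, 15.0, 25.0])
curve_power = np.array([0.0, 100.0, 900.0, 1500.0, 1500.0])

def farm_output_kw(forecast_speed, n_turbines=20, park_efficiency=0.92):
    """Interpolate the power curve, then scale by farm size and wake losses."""
    p_turbine = np.interp(forecast_speed, curve_speed, curve_power)
    return n_turbines * park_efficiency * p_turbine

print(farm_output_kw(10.0))   # 20 turbines x 0.92 x 900 kW
```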

  18. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of new technologies

  20. A Predictive Model for Yeast Cell Polarization in Pheromone Gradients.

    Science.gov (United States)

    Muller, Nicolas; Piel, Matthieu; Calvez, Vincent; Voituriez, Raphaël; Gonçalves-Sá, Joana; Guo, Chin-Lin; Jiang, Xingyu; Murray, Andrew; Meunier, Nicolas

    2016-04-01

    Budding yeast cells exist in two mating types, a and α, which use peptide pheromones to communicate with each other during mating. Mating depends on the ability of cells to polarize up pheromone gradients, but cells also respond to spatially uniform fields of pheromone by polarizing along a single axis. We used quantitative measurements of the response of a cells to α-factor to produce a predictive model of yeast polarization towards a pheromone gradient. We found that cells make a sharp transition between budding cycles and mating induced polarization and that they detect pheromone gradients accurately only over a narrow range of pheromone concentrations corresponding to this transition. We fit all the parameters of the mathematical model by using quantitative data on spontaneous polarization in uniform pheromone concentration. Once these parameters have been computed, and without any further fit, our model quantitatively predicts the yeast cell response to pheromone gradient providing an important step toward understanding how cells communicate with each other.

  1. Balancing model complexity and measurements in hydrology

    Science.gov (United States)

    Van De Giesen, N.; Schoups, G.; Weijs, S. V.

    2012-12-01

    The Data Processing Inequality implies that hydrological modeling can only reduce, and never increase, the amount of information available in the original data used to formulate and calibrate hydrological models: I(X;Z(Y)) ≤ I(X;Y). Still, hydrologists around the world seem quite content building models for "their" watersheds to move our discipline forward. Hydrological models tend to have a hybrid character with respect to underlying physics. Most models make use of some well established physical principles, such as mass and energy balances. One could argue that such principles are based on many observations, and therefore add data. These physical principles, however, are applied to hydrological models that often contain concepts that have no direct counterpart in the observable physical universe, such as "buckets" or "reservoirs" that fill up and empty out over time. These not-so-physical concepts are more like the Artificial Neural Networks and Support Vector Machines of the Artificial Intelligence (AI) community. Within AI, one quickly came to the realization that by increasing model complexity, one could basically fit any dataset but that complexity should be controlled in order to be able to predict unseen events. The more data are available to train or calibrate the model, the more complex it can be. Many complexity control approaches exist in AI, with Solomonoff inductive inference being one of the first formal approaches, the Akaike Information Criterion the most popular, and Statistical Learning Theory arguably being the most comprehensive practical approach. In hydrology, complexity control has hardly been used so far. There are a number of reasons for that lack of interest, the more valid ones of which will be presented during the presentation. For starters, there are no readily available complexity measures for our models. Second, some unrealistic simplifications of the underlying complex physics tend to have a smoothing effect on possible model
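
    The Akaike Information Criterion mentioned above has a convenient least-squares form, AIC = n ln(RSS/n) + 2k, which is one way to penalize exactly the kind of bucket-adding complexity described here. A minimal sketch with made-up residual sums of squares, not a fitted hydrological model:

```python
import numpy as np

def aic_least_squares(rss: float, n: int, k: int) -> float:
    """AIC for a Gaussian-error model fitted by least squares."""
    return n * np.log(rss / n) + 2 * k

# Two candidate models for the same n = 50 observations: the extra
# parameters of the second must buy enough fit to justify themselves.
print(aic_least_squares(rss=50.0, n=50, k=2))   # → 4.0
print(aic_least_squares(rss=46.0, n=50, k=6))   # larger, so rejected
```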

  2. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
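
    The two criteria can be sketched numerically. Observations and ensemble values below are made up, and a plain squared-bias-plus-variance split stands in for the paper's random-effects ANOVA:

```python
import numpy as np

# Hindcast comparison for a fixed model: MSEP over (obs - pred).
obs = np.array([5.1, 4.8, 6.0, 5.5])
pred = np.array([4.9, 5.0, 5.7, 5.6])
msep_fixed = np.mean((obs - pred) ** 2)

# Averaging over model uncertainty: an ensemble of model variants
# (hypothetical values) gives a squared-bias term plus a variance term.
ensemble = np.array([[4.8, 5.1, 5.6, 5.4],
                     [5.0, 4.9, 5.9, 5.8],
                     [4.7, 5.2, 5.5, 5.5]])
bias2 = np.mean((obs - ensemble.mean(axis=0)) ** 2)
model_var = np.mean(ensemble.var(axis=0))
msep_uncertain = bias2 + model_var
print(round(msep_fixed, 4), round(msep_uncertain, 4))
```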

  3. Link prediction measures considering different neighbors’ effects and application in social networks

    Science.gov (United States)

    Luo, Peng; Wu, Chong; Li, Yongli

    Link prediction measures have attracted particular attention in the field of mathematical physics. In this paper, we consider the different effects of neighbors in link prediction and focus on four situations: considering only the individual's own effects; considering the effects of the individual, neighbors and neighbors' neighbors; considering effects out to fourth-order neighbors (neighbors of neighbors, and so on); and considering the effects of all network participants. According to these four situations, we present link prediction models that also take the effects of social characteristics into consideration. An artificial network is adopted to illustrate the parameter estimation based on logistic regression. Furthermore, we compare our methods with some other link prediction methods (LPMs) to examine the validity of the proposed models in online social networks. The results show the superiority of our proposed link prediction methods. In the application part, our models are applied to study social network evolution and to recommend friends and cooperators in social networks.
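
    The simplest neighbour-based situation, scoring a candidate link by shared neighbours only, looks like this on a toy network (the graph is invented, and the paper's regression-weighted variants build on scores of this kind):

```python
# Toy friendship network as adjacency sets.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c"},
}

def common_neighbors(u: str, v: str) -> int:
    """Classic common-neighbours link prediction score."""
    return len(graph[u] & graph[v])

print(common_neighbors("a", "d"))   # → 2: a and d share b and c
```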

  4. Predicting Career Advancement with Structural Equation Modelling

    Science.gov (United States)

    Heimler, Ronald; Rosenberg, Stuart; Morote, Elsa-Sofia

    2012-01-01

    Purpose: The purpose of this paper is to use the authors' prior findings concerning basic employability skills in order to determine which skills best predict career advancement potential. Design/methodology/approach: Utilizing survey responses of human resource managers, the employability skills showing the largest relationships to career…

  6. Modeling and prediction of surgical procedure times

    NARCIS (Netherlands)

    P.S. Stepaniak (Pieter); C. Heij (Christiaan); G. de Vries (Guus)

    2009-01-01

    textabstractAccurate prediction of medical operation times is of crucial importance for cost efficient operation room planning in hospitals. This paper investigates the possible dependence of procedure times on surgeon factors like age, experience, gender, and team composition. The effect of these f

  7. A critical review of statistical calibration/prediction models handling data inconsistency and model inadequacy

    CERN Document Server

    Pernot, Pascal

    2016-01-01

    Inference of physical parameters from reference data is a well-studied problem with many intricacies (inconsistent sets of data due to experimental systematic errors, approximate physical models, ...). The complexity is further increased when the inferred parameters are used to make predictions (virtual measurements), because parameter uncertainty has to be estimated in addition to the parameters' best values. The literature is rich in statistical models for the calibration/prediction problem, each having benefits and limitations. We review and evaluate standard and state-of-the-art statistical models in a common Bayesian framework, and test them on synthetic and real datasets of temperature-dependent viscosity for the calibration of the Lennard-Jones parameters of a Chapman-Enskog model.

  8. Active diagnosis of hybrid systems - A model predictive approach

    OpenAIRE

    2009-01-01

    A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both the normal and faulty models of the system; then at each time step an optimization problem is solved with the objective of maximizing the difference between the predicted normal and faulty outputs, constrained by tolerable performance requirements. As in standard model predictive control, the first element of the optimal input is applied to the system and the whole procedure is repeated …
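
    The core idea, pick the admissible input that best separates the normal and faulty predictions, can be sketched with one-step linear output models. The gains and input set are invented for illustration:

```python
import numpy as np

# One-step output predictions y = g * u for hypothetical gains.
g_normal, g_faulty = 1.0, 0.6
inputs = np.linspace(-1.0, 1.0, 21)      # tolerable input set

# Choose the input maximizing |y_normal - y_faulty|, mirroring the
# optimization step of the active diagnosis scheme.
separation = np.abs((g_normal - g_faulty) * inputs)
u_star = inputs[np.argmax(separation)]
print(u_star, separation.max())
```

    In the receding-horizon version, only this first input is applied before the prediction and optimization are redone at the next step.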

  9. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of models.

  13. Hybrid Wavelet-Postfix-GP Model for Rainfall Prediction of Anand Region of India

    Directory of Open Access Journals (Sweden)

    Vipul K. Dabhi

    2014-01-01

    An accurate prediction of rainfall is crucial for the national economy and management of water resources. The variability of rainfall in both time and space makes rainfall prediction a challenging task. The present work investigates the applicability of a hybrid wavelet-postfix-GP model for daily rainfall prediction of the Anand region using meteorological variables. Wavelet analysis is used as a data preprocessing technique to remove the stochastic (noise) component from the original time series of each meteorological variable. Postfix-GP, a GP variant, and ANN are then employed to develop models for rainfall using the newly generated subseries of meteorological variables. The developed models are then used for rainfall prediction. The out-of-sample prediction performance of the Postfix-GP and ANN models is compared using statistical measures. The results are comparable and suggest that Postfix-GP could be explored as an alternative tool for rainfall prediction.
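
    One level of the discrete wavelet decomposition used for such preprocessing can be written with the Haar filter in a few lines. The series below is fabricated, and the study's choice of wavelet family is not specified here:

```python
import numpy as np

def haar_step(x):
    """One level of the Haar DWT: (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

series = np.array([2.0, 2.0, 8.0, 8.0, 1.0, 1.0, 0.0, 0.0])
approx, detail = haar_step(series)
print(detail)   # pairwise-constant data has zero detail (no "noise")
```

    The smooth approximation subseries, rather than the raw series, is then fed to the Postfix-GP or ANN model.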

  14. Efficient Control of Nonlinear Noise-Corrupted Systems Using a Novel Model Predictive Control Framework

    OpenAIRE

    Weissel, Florian; Huber, Marco F.; Hanebeck, Uwe D.

    2007-01-01

    Model identification and measurement acquisition are always to some degree uncertain. Therefore, a framework for Nonlinear Model Predictive Control (NMPC) is proposed that explicitly considers the noise influence on nonlinear dynamic systems with continuous state spaces and a finite set of control inputs, in order to significantly increase the control quality. Integral parts of NMPC are the prediction of system states over a finite horizon as well as the problem-specific modeling of reward functions …
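
    With a finite input set and a short horizon, the NMPC optimization can even be done by brute-force enumeration. The scalar dynamics, input set and target below are all invented, and the noise handling of the framework is omitted:

```python
import itertools

inputs = (-1.0, 0.0, 1.0)          # finite set of control inputs
horizon = 3
x0, target = 0.0, 2.0

def simulate(x, u_seq):
    """Roll hypothetical linear dynamics x <- 0.8 x + u over a sequence."""
    for u in u_seq:
        x = 0.8 * x + u
    return x

best = min(itertools.product(inputs, repeat=horizon),
           key=lambda seq: abs(simulate(x0, seq) - target))
print(best[0])   # only the first input is applied (receding horizon)
```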

  15. A heat transport benchmark problem for predicting the impact of measurements on experimental facility design

    Energy Technology Data Exchange (ETDEWEB)

    Cacuci, Dan Gabriel, E-mail: cacuci@cec.sc.edu

    2016-04-15

    Highlights: • Predictive Modeling of Coupled Multi-Physics Systems (PM-CMPS) methodology is used. • Impact of measurements for reducing predicted uncertainties is highlighted. • Presented thermal-hydraulics benchmark illustrates generally applicable concepts. - Abstract: This work presents the application of the “Predictive Modeling of Coupled Multi-Physics Systems” (PM-CMPS) methodology conceived by Cacuci (2014) to a “test-section benchmark” problem in order to quantify the impact of measurements for reducing the uncertainties in the conceptual design of a proposed experimental facility aimed at investigating the thermal-hydraulics characteristics expected in the conceptual design of the G4M reactor (GEN4ENERGY, 2012). This “test-section benchmark” simulates the conditions experienced by the hottest rod within the conceptual design of the facility's test section, modeling the steady-state conduction in a rod heated internally by a cosine-shaped heat source, as typically encountered in nuclear reactors, and cooled by forced convection to a surrounding coolant flowing along the rod. The PM-CMPS methodology constructs a prior distribution using all of the available computational and experimental information, by relying on the maximum entropy principle to maximize the impact of all available information and minimize the impact of ignorance. The PM-CMPS methodology then constructs the posterior distribution using Bayes’ theorem, and subsequently evaluates it via saddle-point methods to obtain explicit formulas for the predicted optimal temperature distributions and predicted optimal values for the thermal-hydraulics model parameters that characterize the test-section benchmark. In addition, the PM-CMPS methodology also yields reduced uncertainties for both the model parameters and responses. As a general rule, it is important to measure a quantity consistently with, and more accurately than, the information extant prior to the measurement. For …
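
    The benchmark's central message, that a measurement more accurate than the prior shrinks both the uncertainty and the predicted value's bias, is the scalar Gaussian (precision-weighted) update; the numbers below are illustrative only:

```python
# Precision-weighted combination of a model prediction and a measurement.
prior_mean, prior_var = 600.0, 25.0    # e.g. predicted rod temperature [K]
meas_mean, meas_var = 590.0, 4.0       # a more accurate measurement

post_var = 1.0 / (1.0 / prior_var + 1.0 / meas_var)
post_mean = post_var * (prior_mean / prior_var + meas_mean / meas_var)

print(round(post_mean, 1), round(post_var, 2))   # pulled toward the measurement
```

    The posterior variance is smaller than either input variance, illustrating why measuring more accurately than the prior information is worthwhile.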

  16. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the records needed for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and analyse how their predictions improved over three steps, as information was added prior to each step. The modellers predicted the catchment's hydrological response in its initial phase without access to the observed records. They used conceptually different physically based models, and their modelling experience differed widely. Hence, they faced two problems: (i) simulating discharge for an ungauged catchment and (ii) using models that had been developed for catchments not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than is usually available for a priori predictions of ungauged catchments; they obtained no information on stream flow, soil moisture, or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data, being charged pro forma with the costs of obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  17. Predictions and measurements of isothermal flowfields in axisymmetric combustor geometries. Ph.D. Thesis. Final Report

    Science.gov (United States)

    Rhodes, D. L.; Lilley, D. G.

    1985-01-01

    Numerical predictions, flow visualization experiments and time-mean velocity measurements were obtained for six basic nonreacting flowfields (with inlet swirl vane angles of 0 (swirler removed), 45 and 70 degrees and sidewall expansion angles of 90 and 45 degrees) in an idealized axisymmetric combustor geometry. A flowfield prediction computer program was developed which solves the appropriate finite difference equations, including a conventional two-equation k-epsilon eddy viscosity turbulence model. The wall functions employed were derived from previous swirling flow measurements, and a stairstep approximation was employed to represent the sloping wall at the inlet to the test chamber. Recirculation region boundaries were sketched from the entire flow visualization photograph collection. Tufts, smoke, and neutrally buoyant helium-filled soap bubbles were employed as flow tracers. A five-hole pitot probe was utilized to measure the axial, radial, and swirl time-mean velocity components.

  18. Predicting dynamic knee joint load with clinical measures in people with medial knee osteoarthritis.

    Science.gov (United States)

    Hunt, Michael A; Bennell, Kim L

    2011-08-01

    Knee joint loading, as measured by the knee adduction moment (KAM), has been implicated in the pathogenesis of knee osteoarthritis (OA). Given that the KAM can only currently be accurately measured in the laboratory setting with sophisticated and expensive equipment, its utility in the clinical setting is limited. This study aimed to determine the ability of a combination of four clinical measures to predict KAM values. Three-dimensional motion analysis was used to calculate the peak KAM at a self-selected walking speed in 47 consecutive individuals with medial compartment knee OA and varus malalignment. Clinical predictors included: body mass; tibial angle measured using an inclinometer; walking speed; and visually observed trunk lean toward the affected limb during the stance phase of walking. Multiple linear regression was performed to predict KAM magnitudes using the four clinical measures. A regression model including body mass (41% explained variance), tibial angle (17% explained variance), and walking speed (9% explained variance) explained a total of 67% of variance in the peak KAM. Our study demonstrates that a set of measures easily obtained in the clinical setting (body mass, tibial alignment, and walking speed) can help predict the KAM in people with medial knee OA. Identifying those patients who are more likely to experience high medial knee loads could assist clinicians in deciding whether load-modifying interventions may be appropriate for patients, whilst repeated assessment of joint load could provide a mechanism to monitor disease progression or success of treatment.
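The modeling approach reported in this record (predicting peak KAM from a handful of clinical measures via multiple linear regression) can be sketched as follows. The data below are synthetic and the coefficients are invented for illustration; they are not the study's measurements or fitted values.

```python
import numpy as np

# Illustrative multiple linear regression in the spirit of the study:
# predict peak KAM from body mass, tibial angle and walking speed.
rng = np.random.default_rng(0)
n = 47                                 # same sample size as the study
mass = rng.normal(80, 12, n)           # body mass, kg
tibial_angle = rng.normal(5, 2, n)     # varus tibial angle, degrees
speed = rng.normal(1.2, 0.15, n)       # self-selected walking speed, m/s

# Hypothetical "true" relationship plus measurement noise.
kam = 0.04 * mass + 0.15 * tibial_angle - 1.0 * speed + rng.normal(0, 0.3, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([mass, tibial_angle, speed, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, kam, rcond=None)

# Explained variance (R^2) of the fitted model.
pred = X @ coef
ss_res = np.sum((kam - pred) ** 2)
ss_tot = np.sum((kam - kam.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(coef, r2)
```

In practice the per-predictor explained-variance figures quoted in the abstract would come from hierarchical entry of the predictors, but the pooled R-squared computed here captures the same idea of a clinically convenient proxy for laboratory-measured load.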

  19. Uncertainties in Predicting Rice Yield by Current Crop Models Under a Wide Range of Climatic Conditions

    Science.gov (United States)

    Li, Tao; Hasegawa, Toshihiro; Yin, Xinyou; Zhu, Yan; Boote, Kenneth; Adam, Myriam; Bregaglio, Simone; Buis, Samuel; Confalonieri, Roberto; Fumoto, Tamon; Gaydon, Donald; Marcaida, Manuel, III; Nakagawa, Hiroshi; Oriol, Philippe; Ruane, Alex C.; Ruget, Francoise; Singh, Balwinder; Singh, Upendra; Tang, Liang; Tao, Fulu; Wilkens, Paul; Yoshida, Hiroe; Zhang, Zhao; Bouman, Bas

    2014-01-01

    Predicting rice (Oryza sativa) productivity under future climates is important for global food security. Ecophysiological crop models in combination with climate model outputs are commonly used in yield prediction, but uncertainties associated with crop models remain largely unquantified. We evaluated 13 rice models against multi-year experimental yield data at four sites with diverse climatic conditions in Asia and examined whether different modeling approaches to major physiological processes contribute to the uncertainty of predictions relative to field-measured yields and to the uncertainty of sensitivity to changes in temperature and CO2 concentration [CO2]. We also examined whether use of an ensemble of crop models can reduce these uncertainties. Individual models did not consistently reproduce both experimental and regional yields well, and uncertainty was larger at the warmest and coolest sites. The variation in yield projections was larger among crop models than the variation resulting from 16 global climate model-based scenarios. However, the mean of the predictions of all crop models reproduced the experimental data, with an uncertainty of less than 10 percent of measured yields. Using an ensemble of eight models calibrated only for phenology, or of five models calibrated in detail, resulted in an uncertainty equivalent to that of measured yield in well-controlled agronomic field experiments. Sensitivity analysis indicates the need to improve accuracy in predicting both biomass and harvest index in response to increasing [CO2] and temperature.
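The ensemble-averaging effect reported in this record can be sketched with invented numbers: individual model predictions scatter widely around the observed yield, yet their mean lands within the roughly 10 percent error band the abstract describes. The yields below are hypothetical values in t/ha, not the study's data.

```python
# Hypothetical multi-model ensemble: eight crop-model yield predictions
# for one site-year, compared against an observed yield.
observed = 8.2
predictions = [7.1, 9.0, 8.6, 7.8, 8.9, 6.9, 8.4, 9.3]

ensemble_mean = sum(predictions) / len(predictions)
individual_errors = [abs(p - observed) / observed for p in predictions]
ensemble_error = abs(ensemble_mean - observed) / observed

# Some individual models miss by well over 10%, but over- and
# under-predictions partially cancel in the ensemble mean.
print(ensemble_mean, ensemble_error)
```

This cancellation of roughly unbiased model errors is the usual rationale for multi-model ensembles; it fails, of course, if the models share a systematic bias.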

  20. Improvement of NCEP Numerical Weather Prediction with Use of Satellite Land Measurements

    Science.gov (United States)

    Zheng, W.; Ek, M. B.; Wei, H.; Meng, J.; Dong, J.; Wu, Y.; Zhan, X.; Liu, J.; Jiang, Z.; Vargas, M.

    2014-12-01

    Over the past two decades, satellite measurements have been used increasingly in weather and climate prediction systems and have enabled considerable progress in numerical weather and climate prediction. However, satellite measurements are utilized far less over land than over ocean, owing to the high inhomogeneity of the land surface and the large variability of surface emissivity in time and space. In this presentation, we will discuss efforts to apply satellite land observations in the National Centers for Environmental Prediction (NCEP) operational Global Forecast System (GFS) in order to improve global numerical weather prediction (NWP). Our study focuses on the use of satellite data sets such as vegetation type and green vegetation fraction, assimilation of satellite products such as soil moisture retrievals, and direct radiance assimilation. Global soil moisture data products can be used to initialize soil moisture state variables in numerical weather, climate and hydrological forecast models. A global Soil Moisture Operational Product System (SMOPS) has been developed at NOAA-NESDIS to continuously provide global soil moisture data products to meet NOAA-NCEP's soil moisture data needs. The impact of the soil moisture data products on numerical weather forecasts is assessed using the NCEP GFS, in which the Ensemble Kalman Filter (EnKF) data assimilation algorithm has been implemented. For radiance assimilation, satellite radiance measurements in various spectral channels are assimilated through the JCSDA Community Radiative Transfer Model (CRTM) within the NCEP Gridpoint Statistical Interpolation (GSI) system, which requires the CRTM to calculate model brightness temperature (Tb) from model atmosphere profiles and surface parameters. In particular, for surface-sensitive channels (window channels), Tb depends largely on surface parameters such as land surface skin temperature, soil