WorldWideScience

Sample records for models correctly predict

  1. Robust recurrent neural network modeling for software fault detection and correction prediction

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Q.P. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: g0305835@nus.edu.sg; Xie, M. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: mxie@nus.edu.sg; Ng, S.H. [Quality and Innovation Research Centre, Department of Industrial and Systems Engineering, National University of Singapore, Singapore 119260 (Singapore)]. E-mail: isensh@nus.edu.sg; Levitin, G. [Israel Electric Corporation, Reliability and Equipment Department, R and D Division, Haifa 31000 (Israel)]. E-mail: levitin@iec.co.il

    2007-03-15

    Software fault detection and fault correction processes are related but distinct, and they should be studied together. A practical approach is to model fault detection with software reliability growth models and to treat fault correction as a delayed version of the detection process. Artificial neural network models, by contrast, are data-driven and can model the two processes jointly without such assumptions. In particular, feedforward backpropagation networks have shown advantages over analytical models in predicting fault counts. In this paper, recurrent neural networks are applied to model the two processes together. Within this framework, a systematic network configuration approach is developed with a genetic algorithm guided by prediction performance. To provide robust predictions, an extra factor characterizing the dispersion across prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are made on a real data set.

  2. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    Genotyping-by-sequencing (GBSeq) is becoming a cost-effective genotyping platform for species without available SNP arrays. GBSeq sequences short reads from restriction sites covering a limited part of the genome (e.g., 5-10%) with low sequencing depth per individual (e.g., 5-10X per sample)… In the current work we show how the correction for measurement error in GBSeq can also be applied in whole-genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction, when accounting for the measurement error in GBSeq data…

  3. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    Science.gov (United States)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations of the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.
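
    As an illustration of the two-step fit described above, here is a minimal numpy sketch (synthetic data; the rotation convention, noise levels, and variable names are hypothetical, not the authors' code): the model weight is estimated first from the force equations at several "wind-off" attitudes, and that estimate then feeds a second least squares fit for the center of gravity coordinates.

```python
import numpy as np

def rot_body(pitch, roll):
    """Rotation taking the gravity direction into model body axes (sketch)."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]])
    return Rx @ Ry

rng = np.random.default_rng(0)
W_true, r_true = 250.0, np.array([0.12, -0.01, 0.03])   # N, m (made up)
attitudes = [(p, r) for p in np.radians([-10, 0, 10, 20])
                    for r in np.radians([-15, 0, 15])]

F, M, A = [], [], []
for pitch, roll in attitudes:
    g_body = rot_body(pitch, roll) @ np.array([0.0, 0.0, -1.0])
    f = W_true * g_body + rng.normal(0, 0.5, 3)          # measured forces
    m = np.cross(r_true, f) + rng.normal(0, 0.02, 3)     # measured moments
    F.append(f); M.append(m); A.append(g_body)
F, M, A = map(np.asarray, (F, M, A))

# Fit 1: model weight from the force equations alone (F_i = W * a_i).
W_hat = A.ravel() @ F.ravel() / (A.ravel() @ A.ravel())

# Fit 2: CG coordinates, using W_hat to reconstruct the weight forces.
rows, rhs = [], []
for a, m in zip(A, M):
    fx, fy, fz = W_hat * a
    rows.append([[0, fz, -fy], [-fz, 0, fx], [fy, -fx, 0]])  # m = r x f
    rhs.append(m)
r_hat, *_ = np.linalg.lstsq(np.vstack(rows), np.hstack(rhs), rcond=None)
print(W_hat, r_hat)
```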

  4. Multivariate Bias Correction Procedures for Improving Water Quality Predictions using Mechanistic Models

    Science.gov (United States)

    Libera, D.; Arumugam, S.

    2015-12-01

    Water quality observations are usually not available on a continuous basis because of cost and labor requirements, so calibrating and validating a mechanistic model is often difficult. Further, any model predictions inherently have bias (i.e., under/over-estimation) and require techniques that preserve the long-term mean monthly attributes. This study suggests and compares two multivariate bias-correction techniques to improve the performance of the SWAT model in predicting daily streamflow and TN loads across the southeast, based on split-sample validation. The first approach is a dimension reduction technique, canonical correlation analysis, that regresses the observed multivariate attributes on the SWAT-simulated values. The second approach, importance weighting, comes from signal processing: a weight based on the ratio of the observed and model densities is applied to the model data to shift the mean, variance, and cross-correlation towards the observed values. These procedures were applied to 3 watersheds chosen from the Water Quality Network in the Southeast Region, specifically watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of these two approaches is also compared with independent estimates from the USGS LOADEST model. Uncertainties in the bias-corrected estimates due to limited water quality observations are also discussed.
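
    The first of the two techniques can be sketched in a few lines with scikit-learn's CCA, which, like PLS, supports regression-style prediction of the observed attributes from the simulated ones. Everything below (array shapes, the synthetic bias, the 50/50 calibration split) is a hypothetical illustration, not the study's setup.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical arrays: rows are months, columns are (streamflow, TN load).
rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 1.0, size=(120, 2))             # observed attributes
sim = 0.8 * obs + rng.normal(0, 0.3, size=(120, 2))  # biased SWAT output

cca = CCA(n_components=2)
cca.fit(sim[:60], obs[:60])          # calibrate on the first half
corrected = cca.predict(sim[60:])    # bias-correct the validation half

bias_before = np.mean(sim[60:] - obs[60:], axis=0)
bias_after = np.mean(corrected - obs[60:], axis=0)
print(bias_before, bias_after)
```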

  5. Integrating a calibrated groundwater flow model with error-correcting data-driven models to improve predictions

    Science.gov (United States)

    Demissie, Yonas K.; Valocchi, Albert J.; Minsker, Barbara S.; Bailey, Barbara A.

    2009-01-01

    Physically-based groundwater models (PBMs), such as MODFLOW, contain numerous parameters which are usually estimated using statistically-based methods that assume the underlying error is white noise. However, because of the practical difficulties of representing all the natural subsurface complexity, numerical simulations are often prone to large uncertainties that can result in both random and systematic model error. The systematic errors can be attributed to conceptual, parameter, and measurement uncertainty, and most often it can be difficult to determine their physical cause. In this paper, we have developed a framework to handle systematic error in physically-based groundwater flow model applications that uses error-correcting data-driven models (DDMs) in a complementary fashion. The data-driven models are separately developed to predict the MODFLOW head prediction errors, which are subsequently used to update the head predictions at existing and proposed observation wells. The framework is evaluated using a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study includes structural, parameter, and measurement uncertainties. In terms of bias and prediction uncertainty range, the complementary modeling framework has shown substantial improvements (up to 64% reduction in RMSE and prediction error ranges) over the original MODFLOW model, in both the calibration and the verification periods. Moreover, the spatial and temporal correlations of the prediction errors are significantly reduced, thus resulting in reduced local biases and structures in the model prediction errors.
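
    A minimal sketch of the complementary framework, assuming a generic regression model stands in for the paper's data-driven models and synthetic data stands in for MODFLOW output: the DDM is trained on head prediction errors, and its predicted error is added back to the physically-based prediction.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical setup: h_modflow are simulated heads, h_obs observed heads at
# monitoring wells, and X holds auxiliary inputs (coordinates, time, nearby
# simulated heads) used to predict the systematic error.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(500, 4))
h_obs = 10 + 3 * X[:, 0] + np.sin(6 * X[:, 1])
h_modflow = h_obs - (0.8 * X[:, 0] - 0.4 * X[:, 1] ** 2)  # systematic error

err = h_obs - h_modflow                     # target: head prediction error
ddm = RandomForestRegressor(n_estimators=200, random_state=0)
ddm.fit(X[:400], err[:400])                 # train on calibration wells

# Complementary prediction: physically-based head + predicted error.
h_corrected = h_modflow[400:] + ddm.predict(X[400:])
rmse_before = np.sqrt(np.mean((h_modflow[400:] - h_obs[400:]) ** 2))
rmse_after = np.sqrt(np.mean((h_corrected - h_obs[400:]) ** 2))
print(rmse_before, rmse_after)
```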

  6. Renormalon Model Predictions for Power-Corrections to Flavour Singlet Deep Inelastic Structure Functions

    CERN Document Server

    Stein, E; Mankiewicz, L; Schäfer, A

    1998-01-01

    We analyze power corrections to flavour singlet deep inelastic scattering structure functions in the framework of the infrared renormalon model. Our calculations, together with previous results for the non-singlet contribution, allow us to model the x-dependence of higher twist corrections to F_2, F_L and g_1 in the whole x domain.

  7. Antibody modeling using the prediction of immunoglobulin structure (PIGS) web server [corrected].

    Science.gov (United States)

    Marcatili, Paolo; Olimpieri, Pier Paolo; Chailyan, Anna; Tramontano, Anna

    2014-12-01

    Antibodies (or immunoglobulins) are crucial for defending organisms from pathogens, but they are also key players in many medical, diagnostic and biotechnological applications. The ability to predict their structure and the specific residues involved in antigen recognition has several useful applications in all of these areas. Over the years, we have developed or collaborated in developing a strategy that enables researchers to predict the 3D structure of antibodies with a very satisfactory accuracy. The strategy is completely automated and extremely fast, requiring only a few minutes (∼10 min on average) to build a structural model of an antibody. It is based on the concept of canonical structures of antibody loops and on our understanding of the way light and heavy chains pack together.

  8. Semi-empirical correction of ab initio harmonic properties by scaling factors: a validated uncertainty model for calibration and prediction

    CERN Document Server

    Pernot, Pascal

    2010-01-01

    Bayesian Model Calibration is used to revisit the problem of scaling factor calibration for semi-empirical correction of ab initio harmonic properties (e.g. vibrational frequencies and zero-point energies). Particular attention is devoted to the evaluation of scaling factor uncertainty, and to its effect on the accuracy of scaled properties. We argue that in most cases of interest the standard calibration model is not statistically valid, in the sense that it is not able to fit experimental calibration data within their uncertainty limits. This impairs any attempt to use the results of the standard model for uncertainty analysis and/or uncertainty propagation. We propose to include a stochastic term in the calibration model to account for model inadequacy. This new model is validated in the Bayesian Model Calibration framework. We provide explicit formulae for prediction uncertainty in typical limit cases: large and small calibration sets of data with negligible measurement uncertainty, and datasets with la…

  9. BANKRUPTCY PREDICTION MODEL WITH ZETAc OPTIMAL CUT-OFF SCORE TO CORRECT TYPE I ERRORS

    Directory of Open Access Journals (Sweden)

    Mohamad Iwan

    2005-06-01

    This research has successfully attained the following results: (1) type I error is in fact 59.83 times more costly than type II error, (2) 22 ratios distinguish between the bankrupt and non-bankrupt groups, (3) 2 financial ratios proved to be effective in predicting bankruptcy, (4) prediction using the ZETAc optimal cut-off score predicts more companies filing for bankruptcy within one year than prediction using the Hair et al. optimum cutting score, and (5) although prediction using the Hair et al. optimum cutting score is more accurate, prediction using the ZETAc optimal cut-off score proved able to minimize the cost incurred from classification errors.

  10. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    Science.gov (United States)

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts.
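
    The following sketch shows plain (non-spatial) SIMEX for an attenuated regression slope; the paper's spatial SIMEX would add spatially structured noise instead of i.i.d. noise, but the simulate-then-extrapolate logic is the same. Data, noise levels, and the quadratic extrapolant are illustrative assumptions.

```python
import numpy as np

# Toy SIMEX for a linear health model y = b0 + b1*x + noise, where only an
# error-prone exposure w = x + u is available (u ~ N(0, s2_u), s2_u known).
rng = np.random.default_rng(3)
n, b0, b1, s2_u = 2000, 1.0, 0.5, 0.6
x = rng.normal(0, 1, n)
w = x + rng.normal(0, np.sqrt(s2_u), n)
y = b0 + b1 * x + rng.normal(0, 0.5, n)

def slope(wv, yv):
    return np.polyfit(wv, yv, 1)[0]

lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # added-noise multipliers
B = 50
est = []
for lam in lams:
    reps = [slope(w + rng.normal(0, np.sqrt(lam * s2_u), n), y)
            for _ in range(B)]
    est.append(np.mean(reps))

# Quadratic extrapolation of the slope back to lambda = -1 (no error).
coef = np.polyfit(lams, est, 2)
b1_simex = np.polyval(coef, -1.0)
print(slope(w, y), b1_simex)   # naive (attenuated) vs. SIMEX-corrected
```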

  11. Plateletpheresis efficiency and mathematical correction of software-derived platelet yield prediction: A linear regression and ROC modeling approach.

    Science.gov (United States)

    Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David

    2017-10-01

    Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed cell processor accuracy of predicted platelet (PLT) yields with the goal of a better prediction of DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software-predicted yield and actual PLT yield were statistically evaluated. Software prediction was optimized by linear regression analysis, and its optimal cut-off to obtain a DP was assessed by receiver operating characteristic (ROC) curve modeling. Donors were men on 271 (89.7%) occasions and women on 31 (10.3%). Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486). The software underestimated the actual yield; linear regression analysis accurately corrected this underestimation, and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
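
    A sketch of the two statistical steps named in the abstract, with entirely synthetic numbers (the 6x10^11 double-product threshold and all data are assumptions for illustration): a linear regression maps software-predicted yield to actual yield, and a ROC curve with the Youden index picks the cut-off for calling a double product.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import roc_curve

# Hypothetical data: software-predicted PLT yield vs. actual yield (x1e11).
rng = np.random.default_rng(4)
pred = rng.normal(6.5, 1.2, 302)
actual = 1.05 * pred + 0.4 + rng.normal(0, 0.5, 302)   # underestimation

# Step 1: regression-correct the software prediction.
reg = LinearRegression().fit(pred.reshape(-1, 1), actual)
corrected = reg.predict(pred.reshape(-1, 1))

# Step 2: ROC analysis for a double-product collection, e.g. >= 6e11 PLT.
is_dp = (actual >= 6.0).astype(int)
fpr, tpr, thr = roc_curve(is_dp, corrected)
best = np.argmax(tpr - fpr)                 # Youden index
print(reg.coef_[0], reg.intercept_, thr[best])
```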

  12. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, treating a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit state based on an idealized mechanical model, to be adapted to the original limit state by the model correction factor. Reliable approximations are obtained by iterative use of gradient information on the original limit state function, analogously to previous response surface approaches. However, the strength of the model correction factor method is that, in its simpler form not using gradient information on the original limit state function, or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods…

  13. Methods to correct and compute confidence and prediction intervals of models neglecting sub-parameterization heterogeneity - From the ideal toward practice

    Science.gov (United States)

    Christensen, Steen

    2017-02-01

    This paper derives and tests methods to correct regression-based confidence and prediction intervals for groundwater models that neglect sub-parameterization heterogeneity within the hydraulic property fields of the groundwater system. Several levels of knowledge and uncertainty about the system are considered. It is shown by a two-dimensional groundwater flow example that when reliable probabilistic models are available for the property fields, the corrected confidence and prediction intervals are nearly accurate; when the probabilistic models must be suggested from subjective judgment, the corrected confidence intervals are likely to be much more accurate than their uncorrected counterparts; when no probabilistic information is available then conservative bound values can be used to correct the intervals but they are likely to be very wide. The paper also shows how confidence and prediction intervals can be computed and corrected when the weights applied to the data are estimated as part of the regression. It is demonstrated that in this case it cannot be guaranteed that applying the conservative bound values will lead to conservative confidence and prediction intervals. Finally, it is demonstrated by the two-dimensional flow example that the accuracy of the corrected confidence and prediction intervals deteriorates for very large covariance of the log-transmissivity field, and particularly when the weight matrix differs from the inverse total error covariance matrix. It is argued that such deterioration is less likely to happen for three-dimensional groundwater flow systems.

  14. Innovation in prediction planning for anterior open bite correction.

    Science.gov (United States)

    Almuzian, Mohammed; Almukhtar, Anas; O'Neil, Michael; Benington, Philip; Al Anezi, Thamer; Ayoub, Ashraf

    2015-05-01

    This study applies recent advances in 3D virtual imaging to the prediction planning of dentofacial deformities. Stereo-photogrammetry has been used to create virtual and physical models, which are creatively combined in planning the surgical correction of anterior open bite. The application of these novel methods is demonstrated through the surgical correction of a case.

  15. A predictive model of suitability for minimally invasive parathyroid surgery in the treatment of primary hyperparathyroidism [corrected].

    LENUS (Irish Health Repository)

    Kavanagh, Dara O

    2012-05-01

    Improved preoperative localizing studies have facilitated minimally invasive approaches in the treatment of primary hyperparathyroidism (PHPT). Success depends on the ability to reliably select patients who have PHPT due to single-gland disease. We propose a model encompassing preoperative clinical, biochemical, and imaging studies to predict a patient's suitability for minimally invasive surgery.

  16. Characterizing bias correction uncertainty in wheat yield predictions

    Science.gov (United States)

    Ortiz, Andrea Monica; Jones, Julie; Freckleton, Robert; Scaife, Adam

    2017-04-01

    Farming systems are under increased pressure due to current and future climate change, variability and extremes. Research on the impacts of climate change on crop production typically relies on the output of complex Global and Regional Climate Models, which are used as input to crop impact models. Yield predictions from these top-down approaches can have high uncertainty for several reasons, including diverse model construction and parameterization, future emissions scenarios, and inherent or response uncertainty. These uncertainties propagate down each step of the 'cascade of uncertainty' that flows from climate input to impact predictions, leading to yield predictions that may be too complex for their intended use in practical adaptation options. In addition to uncertainty from impact models, uncertainty can also stem from the intermediate steps that are used in impact studies to adjust climate model simulations to become more realistic when compared to observations, or to correct the spatial or temporal resolution of climate simulations, which are often not directly applicable as input into impact models. These important steps of bias correction or calibration also add uncertainty to final yield predictions, given the various approaches that exist to correct climate model simulations. In order to address how much uncertainty the choice of bias correction method can add to yield predictions, we use several evaluation runs from Regional Climate Models from the Coordinated Regional Downscaling Experiment over Europe (EURO-CORDEX) at different resolutions, together with different bias correction methods (linear and variance scaling, power transformation, quantile-quantile mapping), as input to a statistical crop model for wheat, a staple European food crop. The objective of our work is to compare the resulting simulation-driven hindcast wheat yields with climate-observation-driven wheat yield hindcasts from the UK and Germany in order to determine ranges of yield…
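
    Of the bias correction methods listed, quantile-quantile mapping is the easiest to show compactly. The sketch below (synthetic "observed" and "simulated" series, 100 quantile nodes, all assumptions) builds the empirical transfer function on a calibration period and applies it to new simulations.

```python
import numpy as np

def quantile_map(sim_cal, obs_cal, sim_new, n_q=100):
    """Empirical quantile-quantile mapping bias correction (sketch)."""
    q = np.linspace(0.01, 0.99, n_q)
    sim_q = np.quantile(sim_cal, q)
    obs_q = np.quantile(obs_cal, q)
    # Map each new simulated value to its quantile in the calibration
    # simulation, then read off the observed value at that quantile.
    return np.interp(sim_new, sim_q, obs_q)

rng = np.random.default_rng(5)
obs_cal = rng.gamma(4.0, 2.0, 3000)               # observed daily values
sim_cal = 0.8 * rng.gamma(4.0, 2.0, 3000) + 3.0   # biased RCM output
sim_new = 0.8 * rng.gamma(4.0, 2.0, 1000) + 3.0

corrected = quantile_map(sim_cal, obs_cal, sim_new)
print(np.mean(sim_new), np.mean(corrected), np.mean(obs_cal))
```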

  17. Statistical corrections to numerical predictions. IV. [of weather

    Science.gov (United States)

    Schemm, Jae-Kyung; Faller, Alan J.

    1986-01-01

    The National Meteorological Center Barotropic-Mesh Model has been used to test a statistical correction procedure, designated as M-II, that was developed in Schemm et al. (1981). In the present application, statistical corrections at 12 h resulted in significant reductions of the mean-square errors of both vorticity and the Laplacian of thickness. Predictions to 48 h demonstrated the feasibility of applying corrections at every 12 h in extended forecasts. In addition to these improvements, however, the statistical corrections resulted in a shift of error from smaller to larger-scale motions, improving the smallest scales dramatically but deteriorating the largest scales. This effect is shown to be a consequence of randomization of the residual errors by the regression equations and can be corrected by spatially high-pass filtering the field of corrections before they are applied.
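
    The final remedy described above can be sketched with a Gaussian smoother: subtracting a heavily smoothed copy of the gridded correction field leaves only its small-scale part, which is what gets applied. The field, grid size, and filter width below are arbitrary illustrations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Remove the large-scale part of a gridded correction field before applying
# it, keeping only the small scales the regression actually improves.
rng = np.random.default_rng(6)
corrections = rng.normal(0, 1, (64, 64))           # hypothetical 2-D field
corrections += 3 * np.sin(np.linspace(0, 2 * np.pi, 64))[None, :]  # large scale

low_pass = gaussian_filter(corrections, sigma=8)   # smooth = large scales
high_pass = corrections - low_pass                 # field actually applied
```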

  18. An Empirical Correction Method for Improving off-Axes Response Prediction in Component Type Flight Mechanics Helicopter Models

    Science.gov (United States)

    Mansur, M. Hossein; Tischler, Mark B.

    1997-01-01

    Historically, component-type flight mechanics simulation models of helicopters have been unable to satisfactorily predict the off-axes responses, i.e. the roll response to pitch stick input and the pitch response to roll stick input. In the study presented here, simple first-order low-pass filtering of the elemental lift and drag forces was considered as a means of improving the correlation. The method was applied to a blade-element model of the AH-64 Apache, and responses of the modified model were compared with flight data in hover and forward flight. Results indicate that significant improvement in the off-axes responses can be achieved in hover. In forward flight, however, the best correlation in the longitudinal and lateral off-axes responses required different values of the filter time constant for each axis. A compromise value was selected and was shown to result in good overall improvement in the off-axes responses. The paper describes both the method and the model used for its implementation, and presents results obtained in hover and in forward flight.

  19. A Correction Method Suitable for Dynamical Seasonal Prediction

    Institute of Scientific and Technical Information of China (English)

    CHEN Hong; LIN Zhaohui

    2006-01-01

    Based on the hindcast results of summer rainfall anomalies over China for the period 1981-2000 by the Dynamical Climate Prediction System (IAP-DCP) developed by the Institute of Atmospheric Physics, a correction method that can account for the dependence of the model's systematic biases on SST anomalies is proposed. It is shown that this correction method can improve the hindcast skill of the IAP-DCP for summer rainfall anomalies over China, especially in western China and southeast China, which may imply its potential application to real-time seasonal prediction.

  20. Infinite-degree-corrected stochastic block model

    DEFF Research Database (Denmark)

    Herlau, Tue; Schmidt, Mikkel Nørgaard; Mørup, Morten

    2014-01-01

    In stochastic block models, which are among the most prominent statistical models for cluster analysis of complex networks, clusters are defined as groups of nodes with statistically similar link probabilities within and between groups. A recent extension by Karrer and Newman [Karrer and Newman, Phys. Rev. E 83, 016107 (2011)] incorporates a node degree correction to model degree heterogeneity within each group. Although this demonstrably leads to better performance on several networks, it is not obvious whether modeling node degree is always appropriate or necessary. We formulate the degree-corrected stochastic block model as a nonparametric Bayesian model, incorporating a parameter to control the amount of degree correction that can then be inferred from data. Additionally, our formulation yields principled ways of inferring the number of groups as well as predicting missing links…
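
    For concreteness, a sketch of the Poisson likelihood underlying the degree-corrected model of Karrer and Newman, with expected edge count lam_ij = theta_i * theta_j * omega[g_i, g_j]; the theta normalization and omega values below are illustrative, and making theta constant within groups recovers the uncorrected block model.

```python
import numpy as np
from scipy.special import gammaln

def dcsbm_loglik(A, g, theta, omega):
    """Poisson log-likelihood of a degree-corrected SBM over node pairs.
    A: symmetric integer adjacency matrix, g: group label per node,
    theta: per-node degree parameters, omega: group-to-group rate matrix."""
    lam = np.outer(theta, theta) * omega[np.ix_(g, g)]
    iu = np.triu_indices_from(A, k=1)
    return np.sum(A[iu] * np.log(lam[iu]) - lam[iu] - gammaln(A[iu] + 1))

# Tiny two-group example; theta_i = k_i / kappa_{g_i} is the usual
# MLE-style normalization within groups.
A = np.array([[0, 3, 1, 0],
              [3, 0, 0, 1],
              [1, 0, 0, 2],
              [0, 1, 2, 0]])
g = np.array([0, 0, 1, 1])
deg = A.sum(axis=1)
kappa = np.bincount(g, weights=deg)
theta = deg / kappa[g]
omega = np.array([[6.0, 3.0],
                  [3.0, 4.0]])       # illustrative group-to-group rates
print(dcsbm_loglik(A, g, theta, omega))
```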

  1. An exponential correction to Starobinsky's inflationary model

    CERN Document Server

    Fabris, Júlio C; Piattella, Oliver F

    2016-01-01

    We analyse $f(R)$ theories of gravity from a dynamical system perspective, showing how the $R^2$ correction in Starobinsky's model plays a crucial role from the viewpoint of the inflationary paradigm. Then, we propose a modification of Starobinsky's model by adding an exponential term to the $f(R)$ Lagrangian. We show how this modification could allow one to test the robustness of the model by means of its predictions for the scalar spectral index $n_s$.

  2. Diagnosis and predictive maintenance of diesel engines based on correction and normalization models for oil analysis

    Energy Technology Data Exchange (ETDEWEB)

    Espinoza, Henry [Universidad de Oriente, Puerto la Cruz (Venezuela). Escuela de Ingenieria y Ciencias Aplicadas. Dept. de Mecanica]. E-mail: hespinoz@dino.conicit.ve

    1995-07-01

    A predictive and diagnostic system for diesel engines is presented. The system is based on the correction and normalization of metallic concentrations in the oil. The correction is made using mathematical models that consider the filter effect and oil added. The normalization is accomplished by calculating the equivalent concentration for a fixed-size, metallurgically normalized engine with a constant-capacity crankcase. The system predicts both the time at which a critical wear failure appears and the residual life of the oil. (author)

  3. Matrix Models and Gravitational Corrections

    CERN Document Server

    Dijkgraaf, Robbert; Sinkovics, Annamaria; Temurhan, Mine

    2002-01-01

    We provide evidence of the relation between supersymmetric gauge theories and matrix models beyond the planar limit. We compute gravitational R^2 couplings in gauge theories perturbatively, by summing genus one matrix model diagrams. These diagrams give the leading 1/N^2 corrections in the large N limit of the matrix model and can be related to twist field correlators in a collective conformal field theory. In the case of softly broken SU(N) N=2 super Yang-Mills theories, we find that these exact solutions of the matrix models agree with results obtained by topological field theory methods.

  4. Wall Correction Model for Wind Tunnels with Open Test Section

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Shen, Wen Zhong; Mikkelsen, Robert Flemming

    2004-01-01

    In this paper we present a correction model for wall interference on rotors of wind turbines or propellers in wind tunnels. The model, which is based on a one-dimensional momentum approach, is validated against results from CFD computations using a generalized actuator disc principle. Generally, the corrections from the model are in very good agreement with the CFD computations, demonstrating that one-dimensional momentum theory is a reliable way of predicting corrections for wall interference in wind tunnels with closed as well as open cross sections. Keywords: Wind tunnel correction, momentum theory…

  5. Statistics of predictions with missing higher order corrections

    CERN Document Server

    Berthier, Laure

    2016-01-01

    Effective operators have been used extensively to understand small deviations from the Standard Model in the search for new physics. So far there has been no general method to fit for small parameters when higher order corrections in these parameters are present but unknown. We present a new technique that solves this problem, allowing for an exact p-value calculation under the assumption that higher order theoretical contributions can be treated as Gaussian distributed random variables. The method we propose is general, and may be used in the analysis of any perturbative theoretical prediction, i.e. truncated power series. We illustrate this new method by performing a fit of the Standard Model Effective Field Theory parameters, which include e.g. anomalous gauge and four-fermion couplings.

  6. Correction of placement error in EBL using model based method

    Science.gov (United States)

    Babin, Sergey; Borisov, Sergey; Militsin, Vladimir; Komagata, Tadashi; Wakatsuki, Tetsuro

    2016-10-01

    The main source of placement error in maskmaking using electron beam lithography is charging. DISPLACE software provides a method to correct placement errors for any layout, based on a physical model. The charge of a photomask and multiple discharge mechanisms are simulated to find the charge distribution over the mask. The beam deflection is calculated for each location on the mask, creating data for the placement correction. The software considers the mask layout, EBL system setup, resist, and writing order, as well as other factors such as fogging and proximity effect correction. The output of the software is the data for placement correction. Unknown physical parameters such as fogging can be found from calibration experiments. A test layout on a single calibration mask was used to calibrate the physical parameters used in the correction model. The extracted model parameters were used to verify the correction. As an ultimate test, a sophisticated layout very different from the calibration mask was used for verification. The placement correction results were predicted by DISPLACE, and the mask was fabricated and measured. Good correlation between the measured and predicted values of the correction over the whole mask with the complex pattern confirmed the high accuracy of the charging placement error correction.

  7. PPA BASED PREDICTION-CORRECTION METHODS FOR MONOTONE VARIATIONAL INEQUALITIES

    Institute of Scientific and Technical Information of China (English)

    He Bingsheng; Jiang Jianlin; Qian Maijian; Xu Ya

    2005-01-01

    In this paper we study the proximal point algorithm (PPA) based prediction-correction (PC) methods for monotone variational inequalities. Each iteration of these methods consists of a prediction and a correction. The predictors are produced by inexact PPA steps. The new iterates are then updated by a correction using the PPA formula. We present two profit functions which serve two purposes: First we show that the profit functions are tight lower bounds of the improvements obtained in each iteration. Based on this conclusion we obtain the convergence inexactness restrictions for the prediction step. Second we show that the profit functions are quadratically dependent upon the step lengths, thus the optimal step lengths are obtained in the correction step. In the last part of the paper we compare the strengths of different methods based on their inexactness restrictions.
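
    To make the prediction-correction structure concrete, here is a sketch in which the inexact PPA predictor is approximated by a projected step, so the scheme reduces to the classical extragradient method; this is a stand-in for illustration, not the paper's exact predictor or its optimal step-length rule. The problem instance (nonnegative orthant, affine monotone operator) is assumed.

```python
import numpy as np

# VI(C, F): find u* in C with (u - u*)' F(u*) >= 0 for all u in C,
# here with C the nonnegative orthant and F affine monotone.
def project(u):                      # projection onto C = {u >= 0}
    return np.maximum(u, 0.0)

rng = np.random.default_rng(13)
M = rng.normal(size=(5, 5))
A = M @ M.T + np.eye(5)              # positive definite => F monotone
q = rng.normal(size=5)
F = lambda u: A @ u + q

beta = 0.9 / np.linalg.norm(A, 2)    # step below 1/Lipschitz constant
u = np.ones(5)
for _ in range(500):
    v = project(u - beta * F(u))     # prediction (inexact PPA surrogate)
    u = project(u - beta * F(v))     # correction using the predictor
print(u, np.linalg.norm(u - project(u - F(u))))   # VI residual ~ 0
```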

  8. Hypothesis, Prediction, and Conclusion: Using Nature of Science Terminology Correctly

    Science.gov (United States)

    Eastwell, Peter

    2012-01-01

    This paper defines the terms "hypothesis," "prediction," and "conclusion" and shows how to use the terms correctly in scientific investigations in both the school and science education research contexts. The scientific method, or hypothetico-deductive (HD) approach, is described and it is argued that an understanding of the scientific method,…

  9. Wall correction model for wind tunnels with open test section

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Shen, Wen Zhong; Mikkelsen, Robert Flemming

    2006-01-01

    In the paper we present a correction model for wall interference on rotors of wind turbines or propellers in wind tunnels. The model, which is based on a one-dimensional momentum approach, is validated against results from CFD computations using a generalized actuator disc principle. In the model...... the exchange of axial momentum between the tunnel and the ambient room is represented by a simple formula, derived from actuator disc computations. The correction model is validated against Navier-Stokes computations of the flow about a wind turbine rotor. Generally, the corrections from the model are in very...... good agreement with the CFD computations, demonstrating that one-dimensional momentum theory is a reliable way of predicting corrections for wall interference in wind tunnels with closed as well as open cross sections....

  10. Prediction of LOD Change Based on the LS and AR Model with Edge Effect Corrected

    Institute of Scientific and Technical Information of China (English)

    刘建; 王琪洁; 张昊

    2013-01-01

    Aiming to resolve the edge effect in the process of predicting length of day (LOD) by the least squares and autoregressive (LS+AR) model, we employed a time series analysis model to extrapolate the LOD series and produce a new series. Then, we used the new series to solve the coefficients of the LS model. At last, we used the LS+AR model to predict the LOD series again. By comparing the accuracy of LOD prediction by the edge-effect corrected LS+AR model with that of LS+AR, we conclude that the edge-effect corrected LS+AR model can improve prediction accuracy, especially for medium-term and long-term predictions.
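
    A sketch of the LS+AR idea with the edge-effect correction, using assumed annual and semiannual harmonics in the LS design and statsmodels' AutoReg for the AR part; the synthetic LOD-like data, model orders, and extension length are illustrative, not the paper's configuration.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(7)
t = np.arange(2000.0)                       # days
lod = (1.5 + 1e-4 * t + 0.4 * np.sin(2 * np.pi * t / 365.25)
       + 0.2 * np.sin(2 * np.pi * t / 182.63) + rng.normal(0, 0.05, t.size))

def design(tt):
    w1, w2 = 2 * np.pi / 365.25, 2 * np.pi / 182.63
    return np.column_stack([np.ones_like(tt), tt,
                            np.sin(w1 * tt), np.cos(w1 * tt),
                            np.sin(w2 * tt), np.cos(w2 * tt)])

# Edge-effect correction (the paper's idea): extend the series by a
# preliminary extrapolation, then fit the LS coefficients on the extended
# series so the fit is not distorted at the boundary.
pre = AutoReg(lod, lags=30).fit()
ext = np.concatenate([lod, pre.predict(start=t.size, end=t.size + 99)])
t_ext = np.arange(ext.size, dtype=float)
coef, *_ = np.linalg.lstsq(design(t_ext), ext, rcond=None)

# AR model on the LS residuals, then LS + AR forecast.
resid = lod - design(t) @ coef
ar = AutoReg(resid, lags=30).fit()
t_fut = np.arange(t.size, t.size + 100, dtype=float)
forecast = design(t_fut) @ coef + ar.predict(start=t.size, end=t.size + 99)
```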

  11. ARMA Prediction of SBAS Ephemeris and Clock Corrections for Low Earth Orbiting Satellites

    Directory of Open Access Journals (Sweden)

    Jeongrae Kim

    2015-01-01

    For low earth orbit (LEO) satellite GPS receivers, space-based augmentation system (SBAS) ephemeris/clock corrections can be applied to improve positioning accuracy in real time. The SBAS correction is only available within its service area, and prediction of the SBAS corrections during the outage period can extend the coverage area. Two time series forecasting models, autoregressive moving average (ARMA) and autoregressive (AR), are proposed to predict the corrections outside the service area. A simulated GPS satellite visibility condition is applied to the WAAS correction data, and the prediction accuracy degradation over time is investigated. Prediction results using the SBAS rate-of-change information are compared, and the ARMA method yields better accuracy than the rate method. The error reductions of the ephemeris and clock by the ARMA method over the rate method are 37.8% and 38.5%, respectively. The AR method shows a slightly better orbit accuracy than the rate method, but its clock accuracy is even worse than the rate method. If the SBAS correction is sufficiently accurate compared with the required ephemeris accuracy of a real-time navigation filter, then the predicted SBAS correction may improve orbit determination accuracy.
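
    A compact illustration of forecasting a correction series through an outage and comparing against a rate-of-change extrapolation, using statsmodels' ARIMA as the ARMA fitter. The synthetic series and the ARMA(2,1) order are assumptions, not the paper's configuration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic AR(2)-like clock-correction series; the last 100 samples play
# the role of the outage period to be predicted.
rng = np.random.default_rng(8)
n = 600
e = rng.normal(0, 0.02, n)
corr = np.zeros(n)
for k in range(2, n):
    corr[k] = 1.6 * corr[k - 1] - 0.65 * corr[k - 2] + e[k]

model = ARIMA(corr[:500], order=(2, 0, 1)).fit()
pred = model.forecast(steps=100)            # ARMA prediction in the outage
rate = corr[499] + (corr[499] - corr[498]) * np.arange(1, 101)  # rate method
print(np.mean((pred - corr[500:]) ** 2), np.mean((rate - corr[500:]) ** 2))
```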

  12. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    … only present in melanoma patients and thus were strongly associated with melanoma. The percentage of correctly classified subjects in the LR model was 74.9%, sensitivity 71%, specificity 78.7%, and AUC 0.805. For the ADT, the percentage of correctly classified instances was 71.9%, sensitivity 71.9%, specificity 79.4%, and AUC 0.808. Conclusion. Application of different models for risk assessment and prediction of melanoma should provide an efficient and standardized tool in the hands of clinicians. The presented models offer effective discrimination of individuals at high risk, transparent decision making, and real-time implementation suitable for clinical practice. Continuous growth of the melanoma database would allow further adjustments and enhancements in model accuracy, as well as offering a possibility for successful application of more advanced data mining algorithms.

  13. Causal MRI reconstruction via Kalman prediction and compressed sensing correction.

    Science.gov (United States)

    Majumdar, Angshul

    2017-02-04

    This technical note addresses the problem of causal online reconstruction of dynamic MRI: given the reconstructed frames up to the previous time instant, we reconstruct the frame at the current instant. Our work follows a prediction-correction framework. Given the previous frames, the current frame is predicted by a Kalman estimate. The difference between the estimate and the current frame is then corrected using the k-space samples of the current frame; this reconstruction assumes that the difference is sparse. The method is compared against prior Kalman-filtering-based and compressed-sensing-based techniques. Experimental results show that the proposed method is more accurate than these and considerably faster.
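
    A sketch of the correction step under one simplifying assumption (the frame difference is sparse in the image domain; the note may use another sparsifying transform), solved by plain ISTA on the undersampled k-space residual. The Kalman prediction step is taken as given, and y is the zero-filled masked k-space of the current frame.

```python
import numpy as np

def cs_correct(x_pred, y, mask, lam=0.02, n_iter=200):
    """ISTA sketch: recover a sparse difference image d from undersampled
    k-space y of the current frame, given the Kalman prediction x_pred."""
    Fx = np.fft.fft2(x_pred, norm="ortho")
    r = (y - Fx) * mask                      # residual k-space measurements
    d = np.zeros_like(x_pred, dtype=complex)
    for _ in range(n_iter):
        # gradient of 0.5 * || mask*FFT(d) - r ||^2
        grad = np.fft.ifft2((np.fft.fft2(d, norm="ortho") * mask - r) * mask,
                            norm="ortho")
        z = d - grad                         # step size 1 (orthonormal FFT)
        mag = np.abs(z)
        d = np.where(mag > lam, (1 - lam / np.maximum(mag, 1e-12)) * z, 0)
    return (x_pred + d).real                 # corrected current frame
```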

  14. A Kalman Filter Based Correction Model for Short-Term Wind Power Prediction

    Institute of Scientific and Technical Information of China (English)

    赵攀; 戴义平; 夏俊荣; 盛迎新

    2011-01-01

    A Kalman filter based correction model for short-term wind power prediction was proposed to address the limit on prediction accuracy imposed by systematic errors in the meteorological parameters produced by the numerical weather prediction (NWP) model. The NWP wind speed data were corrected dynamically using the Kalman filter algorithm, and an improved NWP set for wind power prediction was formed by combining the corrected wind speed data with the other meteorological data. The original and the corrected neural network prediction models were trained on the raw and the improved NWP sets, respectively. Comparison between simulated and measured data over the same time interval shows that the Kalman-corrected wind speed series closely tracks the observed wind speed, with smaller mean error and mean absolute error; the root mean square error of the power prediction decreases from 17.73% to 11.32%, a clear improvement in prediction accuracy.
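
    The dynamic correction can be illustrated with a scalar Kalman filter that tracks the NWP wind-speed bias as a random walk; this is a common formulation for the task, though the paper's exact state model and noise settings are assumptions here.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200
true_bias = 1.5 + 0.5 * np.sin(np.arange(n) / 20)  # slowly varying NWP bias
nwp = rng.uniform(4, 12, n)                        # NWP forecast wind speed
obs = nwp + true_bias + rng.normal(0, 0.4, n)      # measured wind speed

q, r = 0.01, 0.16        # process / measurement noise variances (assumed)
b, p = 0.0, 1.0          # initial bias estimate and its variance
corrected = np.empty(n)
for k in range(n):
    p += q                                # predict: bias is a random walk
    gain = p / (p + r)
    b += gain * ((obs[k] - nwp[k]) - b)   # update with the observed error
    p *= 1 - gain
    corrected[k] = nwp[k] + b             # corrected speed (in real time,
                                          # applied to the next forecast)
print(np.mean(np.abs(nwp - obs)), np.mean(np.abs(corrected - obs)))
```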

  15. Health beliefs affect the correct replacement of daily disposable contact lenses: Predicting compliance with the Health Belief Model and the Theory of Planned Behaviour.

    Science.gov (United States)

    Livi, Stefano; Zeri, Fabrizio; Baroni, Rossella

    2017-02-01

    To assess the compliance of Daily Disposable Contact Lens (DDCL) wearers with replacing lenses at the manufacturer-recommended replacement frequency, and to evaluate the ability of two different health behavioural theories, the Health Belief Model (HBM) and the Theory of Planned Behaviour (TPB), to predict compliance. A multi-centre survey was conducted using a questionnaire completed anonymously by contact lens wearers during the purchase of DDCLs. Three hundred and fifty-four questionnaires were returned. The survey comprised 58.5% females and 41.5% males (mean age 34±12 years). Twenty-three percent of respondents were non-compliant with the manufacturer-recommended replacement frequency (re-using DDCLs at least once). The main reason for re-using DDCLs was "to save money" (35%). Prediction of compliance behaviour (past behaviour or future intentions) on the basis of the two theories was investigated through logistic regression analysis: both TPB factors (subjective norms and perceived behavioural control) were significant, and the HBM analysis identified […] (for both past behaviour and future intentions) and perceived benefit (only for past behaviour) as significant factors. Results of the models show that the involvement of persons socially close to the wearers (subjective norms) and the improvement of the procedure of behavioural control of daily replacement (behavioural control) are of paramount importance in improving compliance. With reference to the HBM, it is important to warn DDCL wearers of the severity of contact-lens-related eye infection, and to underline the possibility of its prevention. Copyright © 2016 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  16. Bias-correction in vector autoregressive models

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    2014-01-01

    We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find…
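
    In one dimension the analytical VAR bias formula reduces to the classic AR(1) result, which makes the idea easy to check by simulation; the sketch below compares the naive OLS estimate with the corrected one. The formula and settings are standard textbook choices, not taken from the paper.

```python
import numpy as np

# First-order analytical bias for AR(1): E[rho_hat] ~ rho - (1 + 3*rho)/T,
# giving the corrected estimator rho_corr = rho_hat + (1 + 3*rho_hat)/T.
rng = np.random.default_rng(10)
rho, T, reps = 0.9, 50, 5000
naive, corrected = [], []
for _ in range(reps):
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + rng.normal()
    r_hat = x[1:] @ x[:-1] / (x[:-1] @ x[:-1])   # OLS slope
    naive.append(r_hat)
    corrected.append(r_hat + (1 + 3 * r_hat) / T)

print(np.mean(naive) - rho, np.mean(corrected) - rho)  # bias before/after
```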

  17. Model correction factor method for system analysis

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Johannesen, Johannes M.

    2000-01-01

    The Model Correction Factor Method is an intelligent response surface method based on simplified modeling. MCFM is aimed at reliability analysis in the case of a limit state defined by an elaborate model. Herein it is demonstrated that the method is applicable for elaborate limit state surfaces on which several locally most central points exist without there being a simple geometric definition of the corresponding failure modes, such as is the case for collapse mechanisms in rigid-plastic hinge models for frame structures. Taking as simplified idealized model a model of similarity with the elaborate model … surface than existing in the idealized model…

  18. Predictor-based error correction method in short-term climate prediction

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In terms of the basic idea of combining dynamical and statistical methods in short-term climate prediction, a new prediction method of predictor-based error correction (PREC) is put forward in order to effectively use statistical experience in dynamical prediction. Analyses show that the PREC can reasonably utilize the significant correlations between predictors and model prediction errors and correct prediction errors by establishing a statistical prediction model. Besides, the PREC is further applied to cross-validation experiments of dynamical seasonal prediction on the operational atmosphere-ocean coupled general circulation model of the China Meteorological Administration/National Climate Center, selecting the sea surface temperature index in the Niño3 region as the physical predictor that represents the prevailing ENSO-cycle mode of interannual variability in the climate system. It is shown from the prediction results of summer mean circulation and total precipitation that the PREC can improve predictive skill to some extent. Thus the PREC provides a new approach for improving short-term climate prediction.
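
    A toy version of the PREC idea, with hypothetical numbers throughout: hindcast errors of a dynamical model are regressed on the Niño3 SST index, and the fitted statistical error model then corrects a new dynamical forecast.

```python
import numpy as np

rng = np.random.default_rng(11)
nino3 = rng.normal(0, 1, 40)                       # predictor, 40 years
truth = 5 + 0.8 * nino3 + rng.normal(0, 0.3, 40)   # observed rainfall anomaly
dyn = truth - (0.5 * nino3 + 0.2) + rng.normal(0, 0.2, 40)  # model hindcasts

err = truth - dyn                                  # hindcast errors
a, b = np.polyfit(nino3, err, 1)                   # statistical error model

nino3_new = 1.2                                    # predictor for a new year
dyn_new = 4.9                                      # new dynamical forecast
prec_forecast = dyn_new + (a * nino3_new + b)      # error-corrected forecast
```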

  19. G-corrected holographic dark energy model

    CERN Document Server

    Malekjani, M

    2013-01-01

    Here we investigate the holographic dark energy model in the framework of FRW cosmology where the Newtonian gravitational constant,$G$, is varying with cosmic time. Using the complementary astronomical data which support the time dependency of $G$, the evolutionary treatment of EoS parameter and energy density of dark energy model are calculated in the presence of time variation of $G$. It has been shown that in this case, the phantom regime can be achieved at the present time. We also calculate the evolution of $G$- corrected deceleration parameter for holographic dark energy model and show that the dependency of $G$ on the comic time can influence on the transition epoch from decelerated expansion to the accelerated phase. Finally we perform the statefinder analysis for $G$- corrected holographic model and show that this model has a shorter distance from the observational point in $s-r$ plane compare with original holographic dark energy model.

  20. Evaluation of multiple protein docking structures using correctly predicted pairwise subunits

    Directory of Open Access Journals (Sweden)

    Esquivel-Rodríguez Juan

    2012-03-01

    Background: Many functionally important proteins in a cell form complexes with multiple chains. Therefore, computational prediction of multiple protein complexes is an important task in bioinformatics. In the development of multiple protein docking methods, it is important to establish a metric for evaluating prediction results in a reasonable and practical fashion. However, since few works have been done on developing methods for multiple protein docking, there is no study that investigates how accurate structural models of multiple protein complexes should be to allow scientists to gain biological insights. Methods: We generated a series of predicted models (decoys) of various accuracies by our multiple protein docking pipeline, Multi-LZerD, for three multi-chain complexes with 3, 4, and 6 chains. We analyzed the decoys in terms of the number of correctly predicted pair conformations in the decoys. Results and conclusion: We found that pairs of chains with the correct mutual orientation exist even in decoys with a large overall root mean square deviation (RMSD) to the native. Therefore, in addition to a global structure similarity measure, such as the global RMSD, the quality of models for multiple chain complexes can be better evaluated by using the local measurement, the number of chain pairs with correct mutual orientation. We termed the fraction of correctly predicted pairs (interface RMSD less than 4.0 Å) as fpair and propose to use it for evaluation of the accuracy of multiple protein docking.
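
    A sketch of how fpair could be computed from per-chain coordinate arrays; the 10 Å contact cutoff defining the interface and the omission of per-pair superposition are simplifying assumptions rather than the paper's exact procedure.

```python
import numpy as np
from itertools import combinations

def interface_rmsd(native_a, native_b, model_a, model_b, cutoff=10.0):
    """RMSD over interface atoms of one chain pair (coordinates assumed
    pre-aligned; a full implementation would superpose each pair first)."""
    d = np.linalg.norm(native_a[:, None, :] - native_b[None, :, :], axis=2)
    ia = np.unique(np.nonzero(d < cutoff)[0])      # chain-A interface atoms
    ib = np.unique(np.nonzero(d < cutoff)[1])      # chain-B interface atoms
    nat = np.vstack([native_a[ia], native_b[ib]])
    mod = np.vstack([model_a[ia], model_b[ib]])
    return np.sqrt(np.mean(np.sum((nat - mod) ** 2, axis=1)))

def fpair(native, model, thresh=4.0):
    """Fraction of chain pairs whose interface RMSD is below thresh (A).
    `native` and `model` map chain id -> (N, 3) coordinate array."""
    pairs = list(combinations(sorted(native), 2))
    ok = sum(interface_rmsd(native[a], native[b], model[a], model[b]) < thresh
             for a, b in pairs)
    return ok / len(pairs)
```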

  1. Analogue correction method of errors and its application to numerical weather prediction

    Institute of Scientific and Technical Information of China (English)

    Gao Li; Ren Hong-Li; Li Jian-Ping; Chou Ji-Fan

    2006-01-01

    In this paper, an analogue correction method of errors (ACE) based on a complicated atmospheric model is further developed and applied to numerical weather prediction (NWP). The analysis shows that the ACE can effectively reduce model errors by combining the statistical analogue method with the dynamical model, so that the information in plenty of historical data is utilized in the current complicated NWP model. Furthermore, in the ACE, the differences of the similarities between different historical analogues and the current initial state are considered as the weights for estimating model errors. The results of daily, dekad, and monthly prediction experiments on a complicated T63 atmospheric model show that the performance of the ACE, which corrects model errors based on the estimated errors of 4 historical analogue predictions, is not only better than that of the scheme that only introduces the correction of the errors of every single analogue prediction, but is also better than that of the T63 model.

  2. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  3. Signatures of Planck corrections in a spiralling axion inflation model

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, John [Dept. of Physics, University of Lancaster,Lancaster LA1 4YB (United Kingdom)

    2015-05-08

    The minimal sub-Planckian axion inflation model accounts for a large tensor-to-scalar ratio via a spiralling trajectory in the field space of a complex field Φ. Here we consider how the predictions of the model are modified by Planck scale-suppressed corrections. In the absence of Planck corrections the model is equivalent to a φ^(4/3) chaotic inflation model. Planck corrections become important when the dimensionless coupling ξ of |Φ|^2 to the topological charge density of the strongly-coupled gauge sector, F F̃, satisfies ξ ∼ 1. For values of |Φ| which allow the Planck corrections to be understood via an expansion in powers of |Φ|^2/M_Pl^2, we show that their effect is to produce a significant modification of the tensor-to-scalar ratio from its φ^(4/3) chaotic inflation value without strongly modifying the spectral index. In addition, to leading order in |Φ|^2/M_Pl^2, the Planck modifications of n_s and r satisfy a consistency relation, Δn_s = −Δr/16. Observation of these modifications and their correlation would allow the model to be distinguished from a simple φ^(4/3) chaotic inflation model and would also provide a signature for the influence of leading-order Planck corrections.

  4. Herschel SPIRE FTS telescope model correction

    CERN Document Server

    Hopwood, Rosalind; Polehampton, Edward T; Valtchanov, Ivan; Benielli, Dominique; Imhof, Peter; Lim, Tanya; Lu, Nanyao; Marchili, Nicola; Pearson, Chris P; Swinyard, Bruce M

    2014-01-01

    Emission from the Herschel telescope is the dominant source of radiation for the majority of SPIRE Fourier transform spectrometer (FTS) observations, despite the exceptionally low emissivity of the primary and secondary mirrors. Accurate modelling and removal of the telescope contribution is, therefore, an important and challenging aspect of the FTS calibration and data reduction pipeline. A dust-contaminated telescope model with time-invariant mirror emissivity was adopted before the Herschel launch. However, measured FTS spectra show a clear evolution of the telescope contribution over the mission and a strong need for a correction to the standard telescope model in order to reduce residual background (of up to 7 Jy) in the final data products. Systematic changes in observations of dark sky, taken over the course of the mission, provide a measure of the evolution between observed telescope emission and the telescope model. These dark sky observations have been used to derive a time-dependent correction to the telescope model.

  5. Radiative corrections to the Higgs couplings in the triplet model

    CERN Document Server

    Kikuchi, Mariko

    2013-01-01

    The features of extended Higgs models can appear in the pattern of deviations from the Standard Model (SM) predictions for the coupling constants of the SM-like Higgs boson ($h$). We can thus discriminate between extended Higgs models by precisely measuring the pattern of deviations in the coupling constants of $h$, even when extra bosons are not found directly. In order to compare theoretical predictions to future precision data from the ILC, we must evaluate the predictions with radiative corrections in various extended Higgs models. In this talk, we present a comprehensive study of radiative corrections to the couplings of $h$ in the minimal Higgs triplet model (HTM). First, we define renormalization conditions in the model, and we calculate the Higgs couplings $h\gamma\gamma$, $hWW$, $hZZ$ and $hhh$ at the one-loop level. We then evaluate deviations in the coupling constants of the SM-like Higgs boson from the SM predictions. We find that one-loop contributions to these couplings are su…

  6. Empirical correction of a toy climate model

    CERN Document Server

    Allgaier, Nicholas A; Danforth, Christopher M

    2011-01-01

    Improving the accuracy of forecast models for physical systems such as the atmosphere is a crucial ongoing effort. Errors in state estimation for these often highly nonlinear systems have been the primary focus of recent research, but as that error has been successfully diminished, the role of model error in forecast uncertainty has duly increased. The present study is an investigation of a particular empirical correction procedure that is of special interest because it considers the model a "black box", and therefore can be applied widely with little modification. The procedure involves the comparison of short model forecasts with a reference "truth" system during a training period in order to calculate systematic (1) state-independent model bias and (2) state-dependent error patterns. An estimate of the likelihood of the latter error component is computed from the current state at every timestep of model integration. The effectiveness of this technique is explored in two experiments: (1) a perfect model scenario…

  7. Predictive models in urology.

    Science.gov (United States)

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts, such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. It is still too early to know how this so-called 'machine intelligence' will evolve, and therefore how today's relatively sophisticated predictive models will change in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity, not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.

  8. On INM's Use of Corrected Net Thrust for the Prediction of Jet Aircraft Noise

    Science.gov (United States)

    McAninch, Gerry L.; Shepherd, Kevin P.

    2011-01-01

    The Federal Aviation Administration's (FAA) Integrated Noise Model (INM) employs a prediction methodology that relies on corrected net thrust as the sole correlating parameter between aircraft and engine operating states and aircraft noise. Thus aircraft noise measured for one set of atmospheric and aircraft operating conditions is assumed to be applicable to all other conditions as long as the corrected net thrust remains constant. This hypothesis is investigated under two primary assumptions: (1) the sound field generated by the aircraft is dominated by jet noise, and (2) the sound field generated by the jet flow is adequately described by Lighthill's theory of noise generated by turbulence.

  9. Correction of Solar Radiation Forecast and Photovoltaic Power Prediction in Ningxia by a Localized WRF Model

    Institute of Scientific and Technical Information of China (English)

    孙银川; 白永清; 左河疆

    2012-01-01

    To support solar photovoltaic power generation forecasting services in Ningxia, northwestern China, we combined EOF analysis with the MOS forecasting method, based on localized WRF model products and power generation data provided by a local photovoltaic power station, to statistically correct the model's radiation forecasts, and we established an hourly photovoltaic power prediction model with satisfactory results. Correcting the radiation forecasts with the EOF-MOS method reduces the mean absolute percentage error (MAPE) of the irradiance forecast from about 24% to about 15%, while the MAPE of the predicted power output remains stable at around 22%. The method is of particular reference value for capturing transitional weather trends.
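
    A sketch of the two quoted ingredients, the MAPE score and a simple MOS-style regression (the EOF decomposition of the forecast fields is omitted; names are illustrative):

      import numpy as np

      def mape(observed, predicted):
          # Mean absolute percentage error, in percent.
          observed = np.asarray(observed, float)
          predicted = np.asarray(predicted, float)
          return 100.0 * np.mean(np.abs((predicted - observed) / observed))

      def fit_mos(raw_irradiance_forecast, observed_irradiance):
          # MOS step as a single-predictor linear regression of observations
          # on the raw WRF irradiance forecast.
          x = np.asarray(raw_irradiance_forecast, float)
          A = np.column_stack([x, np.ones_like(x)])
          (slope, intercept), *_ = np.linalg.lstsq(A, observed_irradiance, rcond=None)
          return slope, intercept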

  10. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    ... paper, we will present an introduction to the theory and application of MPC with Matlab codes written to ... model predictive control, linear systems, discrete-time systems, ... and then compute very rapidly for this open-loop con...

  11. A Form-Correcting System of Chinese Characters Using a Model of Correcting Procedures of Calligraphists

    Institute of Scientific and Technical Information of China (English)

    曾建超; Hidehiko Sanada; et al.

    1995-01-01

    A support system for form-correction of Chinese characters is developed based upon the generation model SAM, and its feasibility is evaluated. SAM is excellent as a model for generating Chinese characters, but it is difficult to determine appropriate parameters because calligraphic knowledge is needed. Noticing that the calligraphic knowledge of calligraphists is embodied in their corrective actions, we adopt a strategy of acquiring calligraphic knowledge by monitoring, recording and analyzing the corrective actions of calligraphists, and try to realize an environment under which calligraphists can easily make corrections to character forms and which can record their corrective actions without interfering with them. In this paper, we first construct a model of the correcting procedures of calligraphists, composed of typical correcting procedures acquired by extensively observing their corrective actions and interviewing them, and develop a form-correcting system for brush-written Chinese characters using the model. Secondly, through actual correcting experiments, we demonstrate that parameters within SAM can be easily corrected at the level of character patterns by our system, and show that the system is effective and easy for calligraphists to use, by evaluating the effectiveness of the correcting model, the sufficiency of its functions, and its execution speed.

  12. Do micromagnetic simulations correctly predict hard magnetic hysteresis properties?

    Energy Technology Data Exchange (ETDEWEB)

    Toson, P., E-mail: peter.toson@tuwien.ac.at; Zickler, G.A.; Fidler, J.

    2016-04-01

    Micromagnetic calculations using the finite element technique describe semi-quantitatively the coercivity of novel rare earth permanent magnets in dependence on grain size, grain shape, grain alignment and the composition of grain boundaries and grain boundary junctions, and allow the quantitative prediction of the magnetic hysteresis properties of rare-earth-free magnets based on densely packed elongated Fe and Co nanoparticles, which depend on crystal anisotropy, aspect ratio and packing density. The nucleation of reversed domains preferentially takes place at grain boundary junctions in granular sintered and melt-spun magnets, independently of the grain size. The microstructure and the nanocomposition of the intergranular regions are inhomogeneous and too complex to permit an exact model for micromagnetic simulations and a quantitative prediction. The incoherent magnetization reversal processes near the end surfaces reduce and determine the coercive field values of Co- and Fe-based nanoparticles.

  13. Predicting Correct Body Posture based on Theory of Planned Behavior in Iranian Operating Room Nurses

    National Research Council Canada - National Science Library

    Bahareh Abedi; Rabiollah Farmanbar; Saeed Omidi; Mahdi Jahangir Blourchian

    2015-01-01

    Due to the importance of correct posture for preventing musculoskeletal disorders, the purpose of this study was to evaluate the Theory of Planned Behavior in predicting correct body posture in operating room...

  14. Strategies for Determining Correct Cytochrome P450 Contributions in Hepatic Clearance Predictions: In Vitro-In Vivo Extrapolation as Modelling Approach and Tramadol as Proof-of Concept Compound.

    Science.gov (United States)

    T'jollyn, Huybrecht; Snoeys, Jan; Van Bocxlaer, Jan; De Bock, Lies; Annaert, Pieter; Van Peer, Achiel; Allegaert, Karel; Mannens, Geert; Vermeulen, An; Boussery, Koen

    2017-06-01

    Although the measurement of cytochrome P450 (CYP) contributions in metabolism assays is straightforward, determination of the actual in vivo contributions can be challenging: how representative are in vitro CYP contributions of those in vivo? This article proposes an improved strategy for the determination of in vivo CYP enzyme-specific metabolic contributions, based on in vitro data, using an in vitro-in vivo extrapolation (IVIVE) approach. The approach is exemplified using tramadol as model compound, and CYP2D6 and CYP3A4 as the involved enzymes. Metabolism data for tramadol and for the probe substrates midazolam (CYP3A4) and dextromethorphan (CYP2D6) were gathered in human liver microsomes (HLM) and recombinant human enzyme systems (rhCYP). From these probe substrates, an activity-adjustment factor (AAF) was calculated per CYP enzyme for the determination of correct hepatic clearance contributions. As a reference, tramadol CYP contributions were scaled back from in vivo data (retrograde approach) and compared with the ones derived in vitro. In this view, the AAF is an enzyme-specific factor, calculated from reference probe activity measurements in vitro and in vivo, that allows appropriate scaling of a test drug's in vitro activity to the 'healthy volunteer' population level. Calculation of an AAF thus accounts for any 'experimental' or 'batch-specific' activity difference between in vitro HLM and in vivo derived activity. In this specific HLM batch, an AAF of 0.91 and 1.97 was calculated for CYP3A4 and CYP2D6, respectively. This implies that, in this batch, the in vitro CYP3A4 activity is 1.10-fold higher and the CYP2D6 activity 1.97-fold lower, compared to in vivo derived CYP activities. This study shows that, in cases where the HLM pool does not represent the typical mean population CYP activities, AAF correction of in vitro metabolism data optimizes CYP contributions in the prediction of hepatic clearance. Therefore, in vitro parameters for any test compound
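
    The numbers quoted above are consistent with a simple ratio definition (inferred from the abstract, so treat it as illustrative rather than the paper's exact formula):

      def activity_adjustment_factor(in_vivo_activity, in_vitro_activity):
          # AAF used to rescale a test drug's in vitro CYP activity to the
          # population level. AAF = 0.91 means the in vitro activity is
          # 1/0.91 (about 1.10-fold) higher than in vivo; AAF = 1.97 means
          # it is 1.97-fold lower.
          return in_vivo_activity / in_vitro_activity

      print(1 / 0.91)  # about 1.10, the CYP3A4 example
      print(1 / 1.97)  # about 0.51, i.e., in vitro CYP2D6 activity 1.97-fold lower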

  15. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  16. Multiscale error analysis, correction, and predictive uncertainty estimation in a flood forecasting system

    Science.gov (United States)

    Bogner, K.; Pappenberger, F.

    2011-07-01

    River discharge predictions often show errors that degrade the quality of forecasts. Three different methods of error correction are compared, namely, an autoregressive model with and without exogenous input (ARX and AR, respectively), and a method based on wavelet transforms. For the wavelet method, a Vector-Autoregressive model with exogenous input (VARX) is simultaneously fitted for the different levels of wavelet decomposition; after predicting the next time steps for each scale, a reconstruction formula is applied to transform the predictions in the wavelet domain back to the original time domain. The error correction methods are combined with the Hydrological Uncertainty Processor (HUP) in order to estimate the predictive conditional distribution. For three stations along the Danube catchment, and using output from the European Flood Alert System (EFAS), we demonstrate that the method based on wavelets outperforms simpler methods and uncorrected predictions with respect to mean absolute error, Nash-Sutcliffe efficiency coefficient (and its decomposed performance criteria), informativeness score, and in particular forecast reliability. The wavelet approach efficiently accounts for forecast errors with scale properties of unknown source and statistical structure.
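
    A deliberately simplified sketch of the decompose-model-reconstruct idea (an independent AR(1) per level stands in for the jointly fitted VARX, and the decimated transform glosses over the time alignment the paper handles carefully):

      import numpy as np
      import pywt  # PyWavelets

      def ar1_one_step(x):
          # Least-squares AR(1) coefficient and one-step forecast;
          # assumes a reasonably long coefficient series.
          phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
          return phi * x[-1]

      def wavelet_error_forecast(error_series, wavelet="db4", level=3):
          # Decompose the discharge-error series, forecast each scale
          # separately, then map back to the time domain.
          coeffs = pywt.wavedec(np.asarray(error_series, float), wavelet, level=level)
          advanced = [np.append(c[1:], ar1_one_step(c)) for c in coeffs]
          return pywt.waverec(advanced, wavelet)[-1]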

  17. Nominal model predictive control

    OpenAIRE

    Grüne, Lars

    2013-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled-data feedback controller from the iterative solution of open-loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  18. Nominal Model Predictive Control

    OpenAIRE

    Grüne, Lars

    2014-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled-data feedback controller from the iterative solution of open-loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  19. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. the different numerical weather predictions actually available to the project.

  20. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  1. Numerical weather prediction model tuning via ensemble prediction system

    Science.gov (United States)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in the parameterization schemes of sub-grid-scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on the evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
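
    A sketch of one EPPES-style iteration under simplifying assumptions (Gaussian proposal, importance-type feedback; run_member and log_likelihood are user-supplied stand-ins for the ensemble system and the verification step, and this is not the published estimator):

      import numpy as np

      rng = np.random.default_rng(42)

      def eppes_update(mean, cov, run_member, log_likelihood,
                       n_members=16, gain=0.5):
          # (i) each ensemble member runs with parameters drawn from the proposal
          theta = rng.multivariate_normal(mean, cov, size=n_members)
          # (ii) relative merits against verifying observations
          logw = np.array([log_likelihood(run_member(t)) for t in theta])
          w = np.exp(logw - logw.max())
          w /= w.sum()
          # feed the merits back into the proposal distribution
          new_mean = (1.0 - gain) * mean + gain * (w @ theta)
          centred = theta - new_mean
          new_cov = (1.0 - gain) * cov + gain * (centred.T * w) @ centred
          return new_mean, new_cov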

  2. Prediction modelling for population conviction data

    NARCIS (Netherlands)

    Tollenaar, N.

    2017-01-01

    In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk taxation scale based on these data is discussed. When false positives are weighted as severely as false negatives, 70% of cases can be classified correctly.

  3. "The next Big Five Inventory (BFI-2): Developing and assessing a hierarchical model with 15 facets to enhance bandwidth, fidelity, and predictive power": Correction to Soto and John (2016).

    Science.gov (United States)

    2017-07-01

    Reports an error in "The Next Big Five Inventory (BFI-2): Developing and Assessing a Hierarchical Model With 15 Facets to Enhance Bandwidth, Fidelity, and Predictive Power" by Christopher J. Soto and Oliver P. John (Journal of Personality and Social Psychology, Advanced Online Publication, Apr 7, 2016, np). In the article, all citations to McCrae and Costa (2008), except for the instance in which it appears in the first paragraph of the introduction, should instead appear as McCrae and Costa (2010). The complete citation should read as follows: McCrae, R. R., & Costa, P. T. (2010). NEO Inventories professional manual. Lutz, FL: Psychological Assessment Resources. The attribution to the BFI-2 items that appears in the Table 6 note should read as follows: BFI-2 items adapted from "Conceptualization, Development, and Initial Validation of the Big Five Inventory-2," by C. J. Soto and O. P. John, 2015, Paper presented at the biennial meeting of the Association for Research in Personality. Copyright 2015 by Oliver P. John and Christopher J. Soto. The complete citation in the References list should appear as follows: Soto, C. J., & John, O. P. (2015, June). Conceptualization, development, and initial validation of the Big Five Inventory-2. Paper presented at the biennial meeting of the Association for Research in Personality, St. Louis, MO. Available from http://www.colby.edu/psych/personality-lab/ All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-17156-001.) Three studies were conducted to develop and validate the Big Five Inventory-2 (BFI-2), a major revision of the Big Five Inventory (BFI). Study 1 specified a hierarchical model of personality structure with 15 facet traits nested within the Big Five domains, and developed a preliminary item pool to measure this structure. Study 2 used conceptual and empirical criteria to construct the BFI-2 domain and facet scales from the preliminary item pool

  4. A prediction-correction scheme for microchannel milling using femtosecond laser

    Science.gov (United States)

    Chen, Jianxiong; Zhou, Xiaolong; Lin, Shuwen; Tu, Yiliu

    2017-04-01

    In this paper, a prediction-correction scheme is proposed to measure and regulate online the milling depth of a microchannel, using laser-triggered plasma as an indicator. First, a prediction model with respect to the laser fluence and feedrate is established from several calibration tests using the least-squares fitting method. It is used to change the focal position of the objective so as to track the depth evolution of the newly generated surface. Meanwhile, a scanning path for every milling layer, with an offset in the Z-axis at the beginning and the end of the trajectory, is developed to make the plasma brightness change periodically. The milling depth can then be obtained when the brightness reaches its maximum value. In this way, an online measurement method is presented that estimates the milling depth from the trend of the plasma brightness. Furthermore, a correction model is developed to iteratively adjust the feedrate using the online estimated depth. The microchannel milling process can therefore be monitored and controlled in a closed-loop manner, in order to accurately regulate the milling depth. Finally, online measurement and closed-loop microchannel milling are carried out on a self-developed micro-machining center. The effectiveness and correctness of the proposed method are verified by comparing the estimated depth with actually measured results.
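
    A sketch of the calibration and correction steps (the regressor form d = a*F + b/v + c is an assumption for illustration; the abstract does not give the exact model):

      import numpy as np

      def fit_depth_model(fluence, feedrate, depth):
          # Least-squares calibration of a per-layer depth model.
          X = np.column_stack([np.asarray(fluence, float),
                               1.0 / np.asarray(feedrate, float),
                               np.ones(len(depth))])
          coef, *_ = np.linalg.lstsq(X, depth, rcond=None)
          return coef

      def corrected_feedrate(feedrate, estimated_depth, target_depth):
          # Proportional correction: speed up (mill shallower) when the online
          # estimate overshoots the target depth, slow down when it undershoots.
          return feedrate * estimated_depth / target_depth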

  5. Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions

    Science.gov (United States)

    Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.

    2010-12-01

    Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only the observed temporal variability on a point-by-point basis, not the spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009), but fewer applications to smaller rain-driven river basins in the southeastern US (where preserving the spatial variability of rainfall patterns may be more important) have been reported. In this study a new method was developed to bias-correct GCMs so as to preserve both the long-term temporal mean and variance of the precipitation data and the spatial structure of daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias-corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al., 2002; Wood et al., 2004). In this method, spatial random fields were generated that preserve the observed spatial correlation structure of the historic gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall. The spatiotemporal variability of the spatio-temporally bias-corrected GCMs was evaluated against gridded observations and compared to the original temporally bias-corrected and downscaled CMIP3 data for the

  6. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  7. Bias correcting precipitation forecasts for extended-range skilful seasonal streamflow predictions

    Science.gov (United States)

    Crochemore, Louise; Ramos, Maria-Helena; Pappenberger, Florian

    2016-04-01

    Meteorological centres make sustained efforts to provide seasonal forecasts that are increasingly skilful, which has the potential to also benefit streamflow forecasting. Seasonal streamflow forecasts can help to take anticipatory measures for a range of applications, such as water supply or hydropower reservoir operation and drought risk management. This study assesses the skill of seasonal precipitation and streamflow forecasts in France in order to provide insights into how bias correcting seasonal precipitation forecasts can help maintain the skill of seasonal flow predictions at extended lead times. First, we evaluate the skill of raw (i.e., without bias correction) seasonal precipitation ensemble forecasts for streamflow forecasting in sixteen French catchments. A lumped daily hydrological model is applied at the catchment scale to transform precipitation into streamflow. A reference prediction system based on historic observed precipitation and watershed initial conditions at the time of forecast (i.e., the ESP method) is used as benchmark. In a second step, we apply eight variants of bias correction approaches to the precipitation forecasts prior to generating the flow forecasts. The approaches are based on the linear scaling and distribution mapping methods. The skill of the ensemble forecasts is assessed in terms of reliability, sharpness, accuracy, and overall performance. The results show that, in most catchments, raw seasonal precipitation and streamflow forecasts are often more skilful than the conventional ESP method in terms of sharpness. However, reliability is an attribute that is not significantly improved. Forecast skill is generally improved when applying bias correction. Two bias correction methods showed the best performance for the studied catchments, with each method, however, being more successful in improving specific attributes of forecast quality: the simple linear scaling of monthly values contributed mainly to increase forecast
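
    The simpler of the two methods named above, linear scaling of monthly values, reduces to a sketch like this (names are illustrative):

      import numpy as np

      def linear_scaling_factors(obs, fcst, month_index):
          # One multiplicative factor per calendar month: mean(obs)/mean(fcst).
          obs, fcst = np.asarray(obs, float), np.asarray(fcst, float)
          month_index = np.asarray(month_index)
          return {m: obs[month_index == m].mean() / fcst[month_index == m].mean()
                  for m in np.unique(month_index)}

      def apply_linear_scaling(fcst, month_index, factors):
          # Rescale each daily value by its month's factor before feeding the
          # precipitation forecasts into the hydrological model.
          return np.array([f * factors[m] for f, m in zip(fcst, month_index)])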

  8. Wavelet based error correction and predictive uncertainty of a hydrological forecasting system

    Science.gov (United States)

    Bogner, Konrad; Pappenberger, Florian; Thielen, Jutta; de Roo, Ad

    2010-05-01

    River discharge predictions most often show errors with scaling properties of unknown source and statistical structure that degrade the quality of forecasts. This is especially true for lead-time ranges greater than a few days. Since the European Flood Alert System (EFAS) provides discharge forecasts up to ten days ahead, it is necessary to take these scaling properties into consideration. For example, the range of scales of the error that occurs in springtime, caused by long-lasting snowmelt processes, is by far larger than that of the error appearing during the summer period, caused by convective rain fields of short duration. The wavelet decomposition is an excellent way to provide the detailed model error at different levels in order to estimate the (unobserved) state variables more precisely. A Vector-AutoRegressive model with eXogenous input (VARX) is fitted for the different levels of wavelet decomposition simultaneously and, after predicting the next time steps ahead for each scale, a reconstruction formula is applied to transform the predictions in the wavelet domain back to the original time domain. The Bayesian Uncertainty Processor (BUP) developed by Krzysztofowicz is an efficient method to estimate the full predictive uncertainty, which is derived by integrating the hydrological model uncertainty and the meteorological input uncertainty. A hydrological uncertainty processor has first been applied to the error-corrected discharge series in order to derive the predictive conditional distribution under the hypothesis that there is no input uncertainty. The uncertainty of the forecasted meteorological input forcing the hydrological model is derived from the combination of deterministic weather forecasts and ensemble prediction systems (EPS), and the Input Processor maps this input uncertainty into the output uncertainty under the hypothesis that there is no hydrological uncertainty. The main objective of this Bayesian forecasting system

  9. When your words count: a discriminative model to predict approval of referrals

    Directory of Open Access Journals (Sweden)

    Adol Esquivel

    2009-12-01

    Conclusions: Three iterations of the model correctly predicted at least 75% of the approved referrals in the validation set. A correct prediction of whether or not a referral will be approved can be made in three out of four cases.

  10. Renormalisation Group Corrections to the Littlest Seesaw Model and Maximal Atmospheric Mixing

    CERN Document Server

    King, Stephen F; Zhou, Shun

    2016-01-01

    The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix are successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos, both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example, we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, $\theta_{23}=45^\circ \pm 1^\circ$, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.

  11. Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing

    Energy Technology Data Exchange (ETDEWEB)

    King, Stephen F. [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Zhang, Jue [Center for High Energy Physics, Peking University,Beijing 100871 (China); Zhou, Shun [Center for High Energy Physics, Peking University,Beijing 100871 (China); Institute of High Energy Physics, Chinese Academy of Sciences,Beijing 100049 (China)

    2016-12-06

    The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix are successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos, both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example, we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, $\theta_{23}=45^\circ \pm 1^\circ$, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.

  12. Vermont "Hydrologically Corrected" Digital Elevation Model (VTHYDRODEM)

    Data.gov (United States)

    Vermont Center for Geographic Information — VTHYDRODEM was created to produce a "hydrologically correct" DEM, compliant with the Vermont Hydrography Dataset (VHD) in support of the "flow regime" project whose...

  13. Evaluation of bias correction methods for wave modeling output

    Science.gov (United States)

    Parker, K.; Hill, D. F.

    2017-02-01

    Models that seek to predict environmental variables invariably demonstrate bias when compared to observations. Bias correction (BC) techniques are common in the climate and hydrological modeling communities, but have seen fewer applications to the field of wave modeling. In particular, there has been no investigation of which BC methodology performs best for wave modeling. This paper introduces and compares a subset of BC methods with the goal of clarifying a "best practice" methodology for the application of BC in studies of wave-related processes. Specific attention is paid to comparing parametric vs. empirical methods as well as univariate vs. bivariate methods. The techniques are tested on global WAVEWATCH III historic and future period datasets, with comparison to buoy observations at multiple locations. Both wave height and period are considered in order to investigate BC effects on inter-variable correlation. Results show that all methods perform uniformly in terms of correcting the statistical moments of individual variables, with the exception of a copula-based method underperforming for wave period. When comparing parametric and empirical methods, no difference is found. Between bivariate and univariate methods, results show that bivariate methods greatly improve inter-variable correlations. Of the bivariate methods tested, the copula-based method is found to be less effective at correcting correlation, while a "shuffling" method is unable to handle changes in correlation from the historic to the future period. In summary, this study demonstrates that BC methods are effective when applied to wave model data and that it is essential to employ methods that consider dependence between variables.
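
    As an illustration of the empirical (non-parametric) univariate branch, quantile mapping can be sketched as follows (names illustrative; a bivariate "shuffling" variant would additionally reorder the corrected series to restore the observed rank correlation between height and period):

      import numpy as np

      def empirical_quantile_map(model_hist, obs_hist, model_new):
          # Find each new model value's quantile within the historic model
          # distribution, then read off the observed value at that quantile.
          q = np.interp(model_new, np.sort(np.asarray(model_hist, float)),
                        np.linspace(0.0, 1.0, len(model_hist)))
          return np.quantile(np.asarray(obs_hist, float), q)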

  14. Sequential correction of ensemble regional weather predictions for forecasting reference evapotranspiration

    Science.gov (United States)

    Pelosi, Anna; Medina Gonzalez, Hanoi; Villani, Paolo; D'Urso, Guido; Battista Chirico, Giovanni

    2016-04-01

    This study explores the performance of an adaptive procedure for correcting the ensemble numerical weather outputs applied to the probabilistic forecast of reference evapotranspiration (ETo). This procedure is proposed as an effective forecast correction method when the available dataset is not large enough for the calibration of statistical batch procedures. The numerical weather prediction outputs are those provided by COSMO-LEPS, an ensemble-based Limited Area Model, with 16 members and 7.5 km spatial resolution, with forecast lead-time up to 5 days. ETo forecasts are computed according to the FAO Penman-Monteith (FAO-PM) equation, which requires data of five weather variables: air temperature, relative humidity, solar radiation and wind speed. The performance of the proposed procedure is evaluated at eighteen monitoring stations, located in Campania region (Southern Italy), with two alternative strategies: i) correction applied to the raw ensemble forecasts of the five weather variables prior applying the FAO-PM equation; ii) correction applied to the ensemble output of the ETo forecasts obtained with FAO-PM equation after using the raw ensemble weather forecasts as input. In both cases the suggested post-processing procedure was able to significantly increase the accuracy and reduce the uncertainty of the ETo forecasts.

  15. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  16. IMPACT OF DIFFERENT TOPOGRAPHIC CORRECTIONS ON PREDICTION ACCURACY OF FOLIAGE PROJECTIVE COVER (FPC) IN A TOPOGRAPHICALLY COMPLEX TERRAIN

    Directory of Open Access Journals (Sweden)

    S. Ediriweera

    2012-07-01

    Quantitative retrieval of land surface biological parameters (e.g., foliage projective cover [FPC] and Leaf Area Index) is crucial for forest management, ecosystem modelling, and global change monitoring applications. Currently, remote sensing is a widely adopted method for the rapid estimation of surface biological parameters at a landscape scale. Topographic correction is a necessary pre-processing step in remote sensing applications for topographically complex terrain, and the selection of a suitable topographic correction method for remotely sensed spectral information is still an unresolved problem. The purpose of this study is to assess the impact of topographic corrections on the prediction of FPC in hilly terrain using an established regression model. Five established topographic corrections [C, Minnaert, SCS, SCS+C and the processing scheme for standardised surface reflectance (PSSSR)] were evaluated on Landsat TM5 imagery acquired under low and high sun angles in closed-canopy subtropical rainforest and eucalyptus-dominated open-canopy forest in north-eastern Australia. The effectiveness of the methods at normalizing topographic influence, preserving biophysical spectral information, and internal data variability was assessed by statistical analysis and by comparison with field-collected FPC data. The results of the statistical analyses show that SCS+C and PSSSR perform significantly better than the other corrections, producing less overcorrection on faintly illuminated slopes. The best relationship between FPC and Landsat spectral responses was obtained with PSSSR, which produced the least residual error. The SCS correction method performed poorly in correcting topographic effects when predicting FPC in topographically complex terrain.
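
    For reference, the C correction named above has a standard closed form; this sketch is the textbook version, not code from the paper:

      import numpy as np

      def c_correction(reflectance, cos_i, cos_solar_zenith):
          # rho_corr = rho * (cos(theta_z) + C) / (cos(i) + C), where cos(i)
          # is the local solar incidence angle on the slope and C = b/m comes
          # from the per-band regression rho = m*cos(i) + b.
          reflectance = np.asarray(reflectance, float)
          cos_i = np.asarray(cos_i, float)
          m, b = np.polyfit(cos_i, reflectance, 1)
          C = b / m
          return reflectance * (cos_solar_zenith + C) / (cos_i + C)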

  17. Paralegals in Corrections: A Proposed Model.

    Science.gov (United States)

    McShane, Marilyn D.

    1987-01-01

    Describes the legal assistance program currently offered by the Texas Department of Corrections, which demonstrates the wide range of questions and problems that the paralegal can address. Reviews paralegals' functions in the prison setting and the services they can provide in assisting prisoners to maintain their rights. (Author/ABB)

  18. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  19. Predicting the Sparticle Spectrum from GUTs via SUSY Threshold Corrections with SusyTC

    CERN Document Server

    Antusch, Stefan

    2015-01-01

    Grand Unified Theories (GUTs) can feature predictions for the ratios of quark and lepton Yukawa couplings at high energy, which can be tested with the increasingly precise results for the fermion masses, given at low energies. To perform such tests, the renormalization group (RG) running has to be performed with sufficient accuracy. In supersymmetric (SUSY) theories, the one-loop threshold corrections (TC) are of particular importance and, since they affect the quark-lepton mass relations, link a given GUT flavour model to the sparticle spectrum. To accurately study such predictions, we extend and generalize various formulas in the literature which are needed for a precision analysis of SUSY flavour GUT models. We introduce the new software tool SusyTC, a major extension to the Mathematica package REAP, where these formulas are implemented. SusyTC extends the functionality of REAP by a full inclusion of the (complex) MSSM SUSY sector and a careful calculation of the one-loop SUSY threshold corrections for the...

  20. Model based correction of placement error in EBL and its verification

    Science.gov (United States)

    Babin, Sergey; Borisov, Sergey; Militsin, Vladimir; Komagata, Tadashi; Wakatsuki, Tetsuro

    2016-05-01

    In maskmaking, the main error source contributing to placement error is charging. DISPLACE software corrects the placement error for any layout, based on a physical model. The charge of a photomask and multiple discharge mechanisms are simulated to find the charge distribution over the mask. The beam deflection is calculated for each location on the mask, creating data for the placement correction. The software considers the mask layout, EBL system setup, resist, and writing order, as well as other factors such as fogging and proximity effect corrections. The output of the software is the data for the placement correction. One important step is the calibration of the physical model; a test layout on a single calibration mask was used for this purpose. The extracted model parameters were used to verify the correction. As an ultimate test, a sophisticated layout, very different from the calibration mask, was used for verification. The placement correction results were predicted by DISPLACE, and a good correlation of the measured and predicted values of the correction confirmed the high accuracy of the charging placement error correction.

  1. [Study on temperature correction models for quantitative analysis with near-infrared spectroscopy].

    Science.gov (United States)

    Zhang, Jun; Chen, Hua-cai; Chen, Xing-dan

    2005-06-01

    The effect of environmental temperature on quantitative analysis by near-infrared spectroscopy was studied. The temperature correction model was calibrated with 45 wheat samples at different environmental temperatures, with the temperature as an external variable. The constant-temperature model was calibrated with 45 wheat samples at the same temperature. The predicted results of the two models for the protein content of wheat samples at different temperatures were compared. The results showed that the mean standard error of prediction (SEP) of the temperature correction model was 0.333, whereas the SEP of the constant-temperature (22 degrees C) model increased as the temperature difference grew, reaching 0.602 when the model was used at 4 degrees C. This suggests that the temperature correction model improves the analysis precision.

  2. A two-dimensional matrix correction for off-axis portal dose prediction errors

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, Daniel W. [Department of Physics, State University of New York at Buffalo, Buffalo, New York 14260 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Kumaraswamy, Lalith; Bakhtiari, Mohammad [Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Podgorsak, Matthew B. [Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 and Department of Physiology and Biophysics, State University of New York at Buffalo, Buffalo, New York 14214 (United States)

    2013-05-15

    Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. ['An effective correction algorithm for off-axis portal dosimetry errors,' Med. Phys. 36, 4089-4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone
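
    A sketch of how such a 2D matrix might be built and applied (the mean measured-to-predicted ratio is an illustrative choice; the paper derives its matrix by quantitative comparison of predicted and measured images spanning the detecting surface):

      import numpy as np

      def correction_matrix(measured_images, predicted_images, eps=1e-6):
          # Per-pixel mean ratio over calibration image pairs that span the
          # entire detecting surface.
          ratios = [m / np.maximum(p, eps)
                    for m, p in zip(measured_images, predicted_images)]
          return np.mean(ratios, axis=0)

      def apply_correction(predicted_image, matrix):
          # Element-wise 2D correction of a predicted portal dose image.
          return predicted_image * matrix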

  3. Predictive Models for Music

    OpenAIRE

    Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy

    2008-01-01

    Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...

  4. BDDCS Predictions, Self-Correcting Aspects of BDDCS Assignments, BDDCS Assignment Corrections, and Classification for more than 175 Additional Drugs.

    Science.gov (United States)

    Hosey, Chelsea M; Chan, Rosa; Benet, Leslie Z

    2016-01-01

    The biopharmaceutics drug disposition classification system was developed in 2005 by Wu and Benet as a tool to predict metabolizing enzyme and drug transporter effects on drug disposition. The system was modified from the biopharmaceutics classification system and classifies drugs according to their extent of metabolism and their water solubility. By 2010, Benet et al. had classified over 900 drugs. In this paper, we incorporate more than 175 drugs into the system and amend the classification of 13 drugs. We discuss current and additional applications of BDDCS, which include predicting drug-drug and endogenous substrate interactions, pharmacogenomic effects, food effects, elimination routes, central nervous system exposure, toxicity, and environmental impacts of drugs. When predictions and classes are not aligned, the system detects an error and is able to self-correct, generally indicating a problem with initial class assignment and/or measurements determining such assignments.

  5. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing ... symmetric non-linear error correction are considered. A simulation study shows that the finite-sample properties of the bootstrapped tests are satisfactory, with good size and power properties for reasonable sample sizes.
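
    A generic form consistent with this abstract (the paper's exact specification may differ) is

        \Delta X_t = g\left(\beta' X_{t-1}\right) + \sum_{i=1}^{k-1} \Gamma_i \, \Delta X_{t-i} + \varepsilon_t,

    where \beta' X_{t-1} are the cointegrating relations and the linear choice g(z) = \alpha z recovers the standard, symmetric vector error correction model.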

  6. Predictive modeling for EBPC in EBDW

    Science.gov (United States)

    Zimmermann, Rainer; Schulz, Martin; Hoppe, Wolfgang; Stock, Hans-Jürgen; Demmerle, Wolfgang; Zepka, Alex; Isoyan, Artak; Bomholt, Lars; Manakli, Serdar; Pain, Laurent

    2009-10-01

    We demonstrate a flow for e-beam proximity correction (EBPC) to e-beam direct write (EBDW) wafer manufacturing processes, demonstrating a solution that covers all steps from the generation of a test pattern for (experimental or virtual) measurement data creation, over e-beam model fitting, proximity effect correction (PEC), and verification of the results. We base our approach on a predictive, physical e-beam simulation tool, with the possibility to complement this with experimental data, and the goal of preparing the EBPC methods for the advent of high-volume EBDW tools. As an example, we apply and compare dose correction and geometric correction for low and high electron energies on 1D and 2D test patterns. In particular, we show some results of model-based geometric correction as it is typical for the optical case, but enhanced for the particularities of e-beam technology. The results are used to discuss PEC strategies, with respect to short and long range effects.

  7. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the Department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani...

  8. Holographic p-wave superconductor models with Weyl corrections

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lu [Institute of Physics and Department of Physics, Hunan Normal University, Changsha, Hunan 410081 (China); Key Laboratory of Low Dimensional Quantum Structures and Quantum Control of Ministry of Education, Hunan Normal University, Changsha, Hunan 410081 (China); Pan, Qiyuan, E-mail: panqiyuan@126.com [Institute of Physics and Department of Physics, Hunan Normal University, Changsha, Hunan 410081 (China); Key Laboratory of Low Dimensional Quantum Structures and Quantum Control of Ministry of Education, Hunan Normal University, Changsha, Hunan 410081 (China); Instituto de Física, Universidade de São Paulo, CP 66318, São Paulo 05315-970 (Brazil); Jing, Jiliang, E-mail: jljing@hunnu.edu.cn [Institute of Physics and Department of Physics, Hunan Normal University, Changsha, Hunan 410081 (China); Key Laboratory of Low Dimensional Quantum Structures and Quantum Control of Ministry of Education, Hunan Normal University, Changsha, Hunan 410081 (China)

    2015-04-09

    We study the effect of the Weyl corrections on the holographic p-wave dual models in the backgrounds of AdS soliton and AdS black hole via a Maxwell complex vector field model by using the numerical and analytical methods. We find that, in the soliton background, the Weyl corrections do not influence the properties of the holographic p-wave insulator/superconductor phase transition, which is different from that of the Yang–Mills theory. However, in the black hole background, we observe that similarly to the Weyl correction effects in the Yang–Mills theory, the higher Weyl corrections make it easier for the p-wave metal/superconductor phase transition to be triggered, which shows that these two p-wave models with Weyl corrections share some similar features for the condensation of the vector operator.

  9. Holographic p-wave superconductor models with Weyl corrections

    Directory of Open Access Journals (Sweden)

    Lu Zhang

    2015-04-01

    We study the effect of the Weyl corrections on the holographic p-wave dual models in the backgrounds of AdS soliton and AdS black hole via a Maxwell complex vector field model by using the numerical and analytical methods. We find that, in the soliton background, the Weyl corrections do not influence the properties of the holographic p-wave insulator/superconductor phase transition, which is different from that of the Yang–Mills theory. However, in the black hole background, we observe that similarly to the Weyl correction effects in the Yang–Mills theory, the higher Weyl corrections make it easier for the p-wave metal/superconductor phase transition to be triggered, which shows that these two p-wave models with Weyl corrections share some similar features for the condensation of the vector operator.

  10. Holographic p-wave superconductor models with Weyl corrections

    CERN Document Server

    Zhang, Lu; Jing, Jiliang

    2015-01-01

    We study the effect of the Weyl corrections on the holographic p-wave dual models in the backgrounds of AdS soliton and AdS black hole via a Maxwell complex vector field model by using the numerical and analytical methods. We find that, in the soliton background, the Weyl corrections do not influence the properties of the holographic p-wave insulator/superconductor phase transition, which is different from that of the Yang-Mills theory. However, in the black hole background, we observe that similar to the Weyl correction effects in the Yang-Mills theory, the higher Weyl corrections make it easier for the p-wave metal/superconductor phase transition to be triggered, which shows that these two p-wave models with Weyl corrections share some similar features for the condensation of the vector operator.

  11. Model correction factor method for system analysis

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Johannesen, Johannes M.

    2000-01-01

    several locally most central points exist without there being a simple geometric definition of the corresponding failure modes, such as is the case for collapse mechanisms in rigid plastic hinge models for frame structures. Taking as simplified idealized model a model of similarity with the elaborate model but with clearly defined failure modes, the MCFM can be started from each idealized single-mode limit state in turn to identify a locally most central point on the elaborate limit state surface. Typically this procedure leads to fewer locally most central failure points on the elaborate limit state surface than exist in the idealized model.

  12. Holographic superconductor models with the Maxwell field strength corrections

    CERN Document Server

    Pan, Qiyuan; Wang, Bin

    2011-01-01

    We study the effect of the quadratic field strength correction to the usual Maxwell field on the holographic dual models in the backgrounds of AdS black hole and AdS soliton. We find that in the black hole background, the higher correction to the Maxwell field makes the condensation harder to form and changes the expected relation in the gap frequency. This effect is similar to that caused by the curvature correction. However, in the soliton background we find that different from the curvature effect, the correction to the Maxwell field does not influence the holographic superconductor and insulator phase transition.

  13. Age-correction of test scores reduces the validity of mild cognitive impairment in predicting progression to dementia.

    Directory of Open Access Journals (Sweden)

    Johannes Hessler

    A phase of mild cognitive impairment (MCI) precedes most forms of neurodegenerative dementia. Many definitions of MCI recommend the use of test norms to diagnose cognitive impairment. It is, however, unclear whether the use of norms actually improves the detection of individuals at risk of dementia. Therefore, the effects of age- and education-norms on the validity of test scores in predicting progression to dementia were investigated. Baseline cognitive test scores (Syndrome Short Test) of dementia-free participants aged ≥65 were used to predict progression to dementia within three years. Participants were comprehensively examined one, two, and three years after baseline. Test scores were calculated (1) with correction for age and education, (2) with correction for education only, (3) with correction for age only, and (4) without correction. Predictive validity was estimated with Cox proportional hazard regressions. Areas under the curve (AUCs) were calculated for the one-, two-, and three-year intervals. 82 (15.3%) of the initially 537 participants developed dementia. Model coefficients, hazard ratios, and AUCs of all scores were significant (p < 0.001). Predictive validity was lowest with age-corrected scores (-2 log likelihood = 840.90, model fit χ²(1) = 144.27, HR = 1.33, AUCs between 0.73 and 0.87) and highest with education-corrected scores (-2 log likelihood = 815.80, model fit χ²(1) = 171.16, HR = 1.34, AUCs between 0.85 and 0.88). The predictive validity of test scores is markedly reduced by age-correction. Therefore, definitions of MCI should not recommend the use of age-norms, in order to improve the detection of individuals at risk of dementia.

  14. Global Solar Dynamo Models: Simulations and Predictions

    Indian Academy of Sciences (India)

    Mausumi Dikpati; Peter A. Gilman

    2008-03-01

    Flux-transport type solar dynamos have achieved considerable success in correctly simulating many solar cycle features, and are now being used for prediction of solar cycle timing and amplitude. We first define flux-transport dynamos and demonstrate how they work. The essential added ingredient in this class of models is meridional circulation, which governs the dynamo period and also plays a crucial role in determining the Sun's memory about its past magnetic fields. We show that flux-transport dynamo models can explain many key features of solar cycles. Then we show that a predictive tool can be built from this class of dynamo that can be used to predict mean solar cycle features by assimilating magnetic field data from previous cycles.

  15. A First-order Prediction-Correction Algorithm for Time-varying (Constrained) Optimization: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-07-25

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
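
    A sketch of one unconstrained prediction-correction iteration in this first-order spirit (the constrained case adds projections; all names and step sizes are illustrative):

      import numpy as np

      def prediction_correction_step(x, t, grad, hess, dgrad_dt,
                                     h=0.1, alpha=0.05, n_pred=3, n_corr=3):
          # Prediction: descend a first-order Taylor model of the gradient at
          # time t + h; the Hessian enters only through matrix-vector products
          # and is never inverted.
          x_pred = np.array(x, dtype=float)
          for _ in range(n_pred):
              g_model = grad(x, t) + hess(x, t) @ (x_pred - x) + h * dgrad_dt(x, t)
              x_pred = x_pred - alpha * g_model
          # Correction: plain gradient steps on the newly revealed cost at t + h.
          for _ in range(n_corr):
              x_pred = x_pred - alpha * grad(x_pred, t + h)
          return x_pred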

  16. Climate model bias correction and the role of timescales

    Directory of Open Access Journals (Sweden)

    J. O. Haerter

    2010-10-01

    It is well known that output from climate models cannot be used to force hydrological simulations without some form of preprocessing to remove the existing biases. In principle, statistical bias correction methodologies act on model output so that the statistical properties of the corrected data match those of the observations. However, the improvements to the statistical properties of the data are limited to the specific time scale of the fluctuations that are considered. For example, a statistical bias correction methodology for mean daily values might be detrimental to monthly statistics. Also, in applying bias corrections derived from present-day conditions to scenario simulations, an assumption is made that the bias persists over the largest timescales. We examine the effects of mixing fluctuations on different time scales and suggest an improved statistical methodology, referred to here as a cascade bias correction method, that eliminates, or greatly reduces, the negative effects.
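
    The single-timescale building block that such methods start from is quantile mapping. A minimal sketch with synthetic data follows; it performs no timescale separation, so it illustrates the baseline being improved upon, not the cascade method itself.

```python
# Sketch of simple quantile-mapping bias correction of daily model output
# against observations (synthetic data; no separation of timescales).
import numpy as np

def quantile_map(model_hist, obs_hist, model_new):
    """Map each new model value through the historical model CDF
    onto the observed quantiles."""
    qs = np.linspace(0, 1, 101)
    m_q = np.quantile(model_hist, qs)
    o_q = np.quantile(obs_hist, qs)
    # position of each new value within the model's historical distribution
    p = np.interp(model_new, m_q, qs)
    return np.interp(p, qs, o_q)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, 3650)   # synthetic daily observations
mod = rng.gamma(2.5, 2.4, 3650)   # synthetic biased model climate
print(quantile_map(mod, obs, mod[:5]))  # corrected first five model days
```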

  17. Relativistic Corrections to the Bohr Model of the Atom

    Science.gov (United States)

    Kraft, David W.

    1974-01-01

    Presents a simple means for extending the Bohr model to include relativistic corrections using a derivation similar to that for the non-relativistic case, except that the relativistic expressions for mass and kinetic energy are employed. (Author/GS)
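
    For orientation, this is the kind of result such a derivation yields for circular orbits (standard textbook material, stated here as a sketch rather than quoted from the article): keeping the relativistic momentum and energy while retaining Bohr's angular-momentum quantization gives v/c = Zα/n and a binding energy

```latex
E_n \;=\; m c^2\left(\sqrt{1-\frac{Z^2\alpha^2}{n^2}}\;-\;1\right)
\;\approx\; -\,\frac{Z^2\alpha^2\, m c^2}{2 n^2}
\left(1 + \frac{Z^2\alpha^2}{4 n^2} + \cdots\right),
```

    so the leading relativistic effect deepens each level by a relative correction of order Z²α²/n².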

  18. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as the estimates of population average confidence scores. The latter can be used...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  19. Correction

    CERN Document Server

    2002-01-01

    Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.) The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.

  20. Perturbative corrections for approximate inference in gaussian latent variable models

    DEFF Research Database (Denmark)

    Opper, Manfred; Paquet, Ulrich; Winther, Ole

    2013-01-01

    but intractable correction, and can be applied to the model's partition function and other moments of interest. The correction is expressed over the higher-order cumulants which are neglected by EP's local matching of moments. Through the expansion, we see that EP is correct to first order. By considering higher...... illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther....

  1. Modelling, controlling, predicting blackouts

    CERN Document Server

    Wang, Chengwei; Baptista, Murilo S

    2016-01-01

    The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, playing havoc with human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another one for smart grids. The control strategie...
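
    A model of the kind the abstract describes can be sketched as a second-order ("swing equation") phase-oscillator network. The topology, parameters and synchronization test below are illustrative assumptions, not the paper's case study.

```python
# Sketch of a swing-equation phase-oscillator power-grid model:
# half the nodes inject power (generators), half consume it (loads).
import numpy as np

N = 10
rng = np.random.default_rng(1)
A = (rng.random((N, N)) < 0.3).astype(float)   # random transmission lines
A = np.triu(A, 1)
A = A + A.T                                    # symmetric adjacency matrix
P = np.where(np.arange(N) < N // 2, 1.0, -1.0) # generation (+) and load (-)
P -= P.mean()                                  # balance generation and load
K, D, dt = 8.0, 0.5, 0.01                      # coupling, damping, time step
theta, omega = np.zeros(N), np.zeros(N)
for _ in range(20000):
    # swing equation: d(omega_i)/dt = P_i - D*omega_i + K*sum_j A_ij sin(theta_j - theta_i)
    coupling = K * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    domega = P - D * omega + coupling
    theta, omega = theta + dt * omega, omega + dt * domega
print("synchronized" if np.abs(omega).max() < 1e-3 else "desynchronized")
```

    In such models a blackout corresponds to loss of synchrony; control strategies act on the injections P or the coupling to keep the frequencies omega locked.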

  2. DYNAMIC CORRECTION OF ROUGHNESS IN THE HYDRODYNAMIC MODEL

    Institute of Scientific and Technical Information of China (English)

    BAO Wei-min; ZHANG Xiao-qin; QU Si-min

    2009-01-01

    Based on the hydrodynamic model and the Xinanjiang model, a river stage forecasting model has been proposed, but its performance is unsatisfactory when applied to estuary areas. River roughness is a sensitive parameter in the hydrodynamic model, and its value is subject to substantial uncertainty in tidal rivers. According to roughness tests, a new method of dynamic roughness correction was developed to improve the performance of the stage model. The method focuses on the use of observed data for the studied section, and its parameters were analyzed. Nested with the dynamic correction of roughness, the stage model was applied to the tidal reach of the Caoe River. The results demonstrate that the dynamic roughness correction can improve the simulation accuracy of the stage model, and in particular can reduce the errors at peak stages.

  3. Predictive modeling and reducing cyclic variability in autoignition engines

    Energy Technology Data Exchange (ETDEWEB)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  4. Inflation via logarithmic entropy-corrected holographic dark energy model

    Energy Technology Data Exchange (ETDEWEB)

    Darabi, F.; Felegary, F. [Azarbaijan Shahid Madani University, Department of Physics, Tabriz (Iran, Islamic Republic of); Setare, M.R. [University of Kurdistan, Department of Science, Bijar (Iran, Islamic Republic of)

    2016-12-15

    We study inflation in terms of the logarithmic entropy-corrected holographic dark energy (LECHDE) model with future event horizon, particle horizon, and Hubble horizon cut-offs, and we compare the results with those obtained in the study of inflation with the holographic dark energy (HDE) model. In comparison, the primordial scalar power spectrum in the LECHDE model becomes redder than in the HDE model. Moreover, consistency with the observational data in the LECHDE model of inflation constrains the reheating temperature and Hubble parameter through one parameter of holographic dark energy and two new parameters of logarithmic corrections. (orig.)

  5. Inflation via logarithmic entropy-corrected holographic dark energy model

    CERN Document Server

    Darabi, F; Setare, M R

    2016-01-01

    We study inflation via the logarithmic entropy-corrected holographic dark energy (LECHDE) model with future event horizon, particle horizon and Hubble horizon cut-offs, and compare the results with those obtained in the study of inflation with the holographic dark energy (HDE) model. In comparison, the primordial scalar power spectrum in the LECHDE model becomes redder than in the HDE model. Moreover, consistency with the observational data in the LECHDE model of inflation constrains the reheating temperature and Hubble parameter through one parameter of holographic dark energy and two new parameters of logarithmic corrections.

  6. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Predicting Correct Body Posture based on Theory of Planned Behavior in Iranian Operating Room Nurses

    OpenAIRE

    BAHAREH ABEDI; RABIOLLAH FARMANBAR; SAEED OMIDI; MAHDI JAHANGIR BLOURCHIAN

    2015-01-01

    Due to the importance of correct posture for preventing musculoskeletal disorders, the purpose of this study was to evaluate the Theory of Planned Behavior in predicting correct body posture in operating room nurses. In this cross-sectional study, participants (n=100) were nurses from five hospitals located in northern Iran. Participants completed demographic data and Theory of Planned Behavior construct questionnaires. In addition, the researcher checked the body posture of nurses by Rapid Entire...

  8. Correction, improvement and model verification of CARE 3, version 3

    Science.gov (United States)

    Rose, D. M.; Manke, J. W.; Altschul, R. E.; Nelson, D. L.

    1987-01-01

    An independent verification of the CARE 3 mathematical model and computer code was conducted and reported in NASA Contractor Report 166096, Review and Verification of CARE 3 Mathematical Model and Code: Interim Report. The study uncovered some implementation errors that were corrected and are reported in this document. The corrected CARE 3 program is called version 4. The document 'Correction, Improvement, and Model Verification of CARE 3, Version 3' was written in April 1984. It is being published now as it has been determined to contain a more accurate representation of CARE 3 than the preceding document of April 1983. This edition supersedes NASA-CR-166122, entitled 'Correction and Improvement of CARE 3, Version 3,' April 1983.

  9. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production...... The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...... and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system, e.g. caused by the annual variations...

  10. Correction

    CERN Multimedia

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  11. Correction

    Directory of Open Access Journals (Sweden)

    2012-01-01

    Regarding Gorelik, G., & Shackelford, T. K. (2011). Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006) [Burch, R. L., & Gallup, G. G., Jr. (2006). The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds.), Female infidelity and paternal uncertainty (pp. 141–172). New York: Cambridge University Press.]. Burch and Gallup (2006) reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006) should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.

  12. Forecasting the Euro exchange rate using vector error correction models

    NARCIS (Netherlands)

    Aarle, B. van; Bos, M.; Hlouskova, J.

    2000-01-01

    Forecasting the Euro Exchange Rate Using Vector Error Correction Models. — This paper presents an exchange rate model for the Euro exchange rates of four major currencies, namely the US dollar, the British pound, the Japanese yen and the Swiss franc. The model is based on the monetary approach of ex

  13. Correction

    Directory of Open Access Journals (Sweden)

    2014-01-01

    Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript, where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph) and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large, and thus in the direction supportive of evolutionary theory, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].

  14. Performance model to predict overall defect density

    Directory of Open Access Journals (Sweden)

    J Venkatesh

    2012-08-01

    Management by metrics is the expectation from IT service providers to stay differentiated. Given a project, its associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases, the actions taken are reactive, too late in the life cycle. Root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, toward the execution phase, rather than waiting for lessons to be learnt after implementation. How do we proactively predict defect metrics and have a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.

  15. Validity of Viscous Core Correction Models for Self-Induced Velocity Calculations

    CERN Document Server

    Van Hoydonck, Wim

    2012-01-01

    Viscous core correction models are used in free wake simulations to remove the infinite velocities at the vortex centreline. It will be shown that the assumption that these corrections converge to the Biot-Savart law in the far field is not correct for points near the tangent line of a vortex segment. Furthermore, the self-induced velocity of a vortex ring with a viscous core is shown to converge to the wrong value. The source of these errors in the model is identified and an improved model is presented that rectifies the errors. It results in correct values for the self-induced velocity of a viscous vortex ring and induced velocities that converge to the values predicted by the Biot-Savart law for all points in the far field.
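
    For context, a common member of the class of core models under discussion is the Lamb-Oseen regularization, in which the potential-vortex swirl is multiplied by a factor that vanishes at the centreline and tends to one far from the core, recovering the Biot-Savart result there. A minimal sketch follows (illustrative of the class, not the paper's corrected model; circulation and core radius are arbitrary):

```python
# Lamb-Oseen viscous-core regularization of the 2-D point-vortex swirl.
import numpy as np

def swirl_velocity(r, gamma=1.0, r_c=0.1):
    """Tangential velocity at radius r from a vortex of circulation gamma."""
    r = np.asarray(r, dtype=float)
    v_potential = gamma / (2 * np.pi * np.where(r == 0, np.inf, r))
    core_factor = 1.0 - np.exp(-(r / r_c) ** 2)  # removes centreline singularity
    return v_potential * core_factor

print(swirl_velocity([0.0, 0.05, 0.1, 1.0]))
# far from the core the factor -> 1, recovering the Biot-Savart velocity
```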

  16. Prediction of Deformity Correction by Pedicle Screw Instrumentation in Thoracolumbar Scoliosis Surgery

    Science.gov (United States)

    Kiriyama, Yoshimori; Yamazaki, Nobutoshi; Nagura, Takeo; Matsumoto, Morio; Chiba, Kazuhiro; Toyama, Yoshiaki

    In segmental pedicle screw instrumentation, the relationship between the combinations of pedicle screw placements and the degree of deformity correction was investigated with a three-dimensional rigid body and spring model. The virtual thoracolumbar scoliosis (Cobb’s angle of 47 deg.) was corrected using six different combinations of pedicle-screw placements. As a result, better correction in the axial rotation was obtained with the pedicle screws placed at or close to the apical vertebra than with the screws placed close to the end vertebrae, while the correction in the frontal plane was better with the screws close to the end vertebrae than with those close to the apical vertebra. Additionally, two screws placed in the convex side above and below the apical vertebra provided better correction than two screws placed in the concave side. Effective deformity corrections of scoliosis were obtained with the proper combinations of pedicle screw placements.

  17. Calibrated predictions for multivariate competing risks models.

    Science.gov (United States)

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being the allowance for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right-censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and are easy to implement.

  18. Bias-Correction in Vector Autoregressive Models: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tom Engsted

    2014-03-01

    We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find that it compares very favorably in non-stationary models.
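
    In the simplest scalar AR(1) special case, the two approaches being compared can be sketched as follows. The first-order analytical bias approximation and the parameter values are illustrative assumptions (the paper works with the general VAR analogue); the risk of correcting into the non-stationary region is handled here by a crude cap.

```python
# Scalar AR(1) sketch of analytical vs. bootstrap bias correction.
import numpy as np

rng = np.random.default_rng(0)
phi_true, T = 0.9, 50

def simulate(phi, T):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + rng.standard_normal()
    return y

def ols_phi(y):
    y = y - y.mean()                       # mean-adjusted OLS estimator
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

y = simulate(phi_true, T)
phi_hat = ols_phi(y)

# Analytical first-order bias (Kendall): E[phi_hat] - phi ~ -(1 + 3*phi)/T;
# cap the corrected value to stay inside the stationary region.
phi_an = min(phi_hat + (1 + 3 * phi_hat) / T, 0.999)

# Bootstrap: bias estimated by re-estimating on series generated at phi_hat.
boot = np.mean([ols_phi(simulate(phi_hat, T)) for _ in range(500)])
phi_bs = min(phi_hat + (phi_hat - boot), 0.999)

print(f"OLS {phi_hat:.3f}  analytical {phi_an:.3f}  bootstrap {phi_bs:.3f}")
```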

  19. A Simple and Effective Closed-Form GN-Model Correction Formula Accounting for Signal Non-Gaussian Distribution

    CERN Document Server

    Carena, A; Curri, V; Poggiolini, P; Jiang, Y; Forghieri, F

    2014-01-01

    The GN-model has been shown to overestimate the variance of non-linearity due to the signal Gaussianity approximation, leading to realistic system maximum reach predictions which may be pessimistic by about 5% to 15%, depending on fiber type and system set-up. Analytical corrections have been proposed, which however substantially increase the model complexity. In this paper we provide a closed-form simple GN-model correction which we show to be very effective in correcting for the GN-model tendency to overestimate non-linearity. Our formula also makes it possible to clearly identify the dependence of the correction on key system parameters, such as the span length and loss.

  20. Correction.

    Science.gov (United States)

    2015-10-01

    In the article by Quintavalle et al (Quintavalle C, Anselmi CV, De Micco F, Roscigno G, Visconti G, Golia B, Focaccio A, Ricciardelli B, Perna E, Papa L, Donnarumma E, Condorelli G, Briguori C. Neutrophil gelatinase–associated lipocalin and contrast-induced acute kidney injury. Circ Cardiovasc Interv. 2015;8:e002673. DOI: 10.1161/CIRCINTERVENTIONS.115.002673.), which published online September 2, 2015, and appears in the September 2015 issue of the journal, a correction was needed. On page 1, the institutional affiliation for Elvira Donnarumma, PhD, “SDN Foundation,” has been changed to read, “IRCCS SDN, Naples, Italy.” The institutional affiliation for Laura Papa, PhD, “Institute for Endocrinology and Experimental Oncology, National Research Council, Naples, Italy,” has been changed to read, “Institute of Genetics and Biomedical Research, Milan Unit, Milan, Italy” and “Humanitas Research Hospital, Rozzano, Italy.” The authors regret this error.

  1. A Morphographemic Model for Error Correction in Nonconcatenative Strings

    CERN Document Server

    Bowden, T; Bowden, Tanya; Kiraz, George Anton

    1995-01-01

    This paper introduces a spelling correction system which integrates seamlessly with morphological analysis using a multi-tape formalism. Handling of various Semitic error problems is illustrated, with reference to Arabic and Syriac examples. The model handles errors in vocalisation, diacritics, phonetic syncopation and morphographemic idiosyncrasies, in addition to Damerau errors. A complementary correction strategy for morphologically sound but morphosyntactically ill-formed words is outlined.
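
    The "Damerau errors" referred to are single-character insertions, deletions, substitutions and adjacent transpositions. A minimal sketch of the restricted Damerau-Levenshtein (optimal string alignment) distance that counts them is given below; the paper's actual system is finite-state and multi-tape, which this plain dynamic-programming version does not attempt to reproduce.

```python
# Restricted Damerau-Levenshtein (optimal string alignment) distance.
def osa_distance(a: str, b: str) -> int:
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

print(osa_distance("kitab", "ktiab"))  # 1: one adjacent transposition
```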

  2. Applications of a curvature correction turbulent model for computations of unsteady cavitating flows

    Science.gov (United States)

    Zhao, Y.; Wang, G. Y.; Huang, B.; Hu, C. L.

    2015-01-01

    A Curvature Correction Model (CCM) based on the original k-epsilon model is proposed to simulate unsteady cavitating flows. The objective of this study is to validate the CCM and further investigate the unsteady vortex behaviors of cavitating flows around a Clark-Y hydrofoil. Compared with the original k-epsilon model, the predicted results are improved in terms of cavity detachment and hydrofoil fluctuations. Results show that the streamline curvature correction of the CCM overcomes the over-prediction of turbulence kinetic energy and eddy viscosity in the cavitating vortical region by the original k-epsilon model, which leads to better capability for unsteady cavitating flow computations. Based on the computations, it is shown that the vortex structure is significantly modified by the transient cavitation, especially with respect to the cavity shedding behaviors. Complex vortex interactions and the corresponding cavity shedding process near the hydrofoil trailing edge lead to various load frequencies.

  3. A Combination of Terrain Prediction and Correction for Search and Rescue Robot Autonomous Navigation

    Directory of Open Access Journals (Sweden)

    Yan Guo

    2009-09-01

    This paper presents a novel two-step autonomous navigation method for a search and rescue robot. An algorithm based on vision is proposed for terrain identification, giving a prediction of the safest path with a support vector regression machine (SVRM) trained off-line on texture and color features. A correction algorithm for the prediction, based on vibration information, is applied while the robot travels, using the judgment function given in the paper. A region with a faulty prediction is corrected with the real traversability value and used to update the SVRM. The experiment demonstrates that this method can help the robot find the optimal path and protect it from traps arising from the discrepancy between the prediction and the real environment.
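
    The two-step idea can be sketched with off-the-shelf tooling. The features, the judgment function and the retraining policy below are simplified assumptions, not the paper's implementation; scikit-learn's SVR stands in for the SVRM.

```python
# Sketch: off-line SVR maps appearance features to a traversability score;
# on-line vibration feedback corrects the prediction and retrains the model.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((200, 5))             # texture + color features per terrain patch
y = X @ np.array([0.5, -0.3, 0.2, 0.4, -0.1]) + 0.05 * rng.standard_normal(200)

svr = SVR(kernel="rbf", C=10.0).fit(X, y)   # off-line training

def correct_online(svr, x_patch, vibration_score, X, y, tol=0.2):
    pred = svr.predict(x_patch[None, :])[0]
    if abs(pred - vibration_score) > tol:   # judgment function (assumed form)
        X = np.vstack([X, x_patch])         # replace the faulty prediction
        y = np.append(y, vibration_score)   # with the measured traversability
        svr = SVR(kernel="rbf", C=10.0).fit(X, y)  # update the model
    return svr, X, y

svr, X, y = correct_online(svr, rng.random(5), 0.9, X, y)
```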

  4. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigatethe performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM prediction to the wind farm. The features of the terrain, specially the topography, influence...

  5. Direct cointegration testing in error-correction models

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); H.K. van Dijk (Herman)

    1994-01-01

    An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The con

  6. Using individual differences to predict job performance: correcting for direct and indirect restriction of range.

    Science.gov (United States)

    Sjöberg, Sofia; Sjöberg, Anders; Näswall, Katharina; Sverke, Magnus

    2012-08-01

    The present study investigates the relationship between individual differences, indicated by personality (FFM) and general mental ability (GMA), and job performance, applying two different methods of correction for range restriction. The results, derived by analyzing meta-analytic correlations, show that the more accurate method of correcting for indirect range restriction increased the operational validity of individual differences in predicting job performance, and that this increase was primarily due to general mental ability being a stronger predictor than any of the personality traits. The estimates for single traits can be applied in practice to maximize the prediction of job performance. Further, the difference in the relative importance of general mental ability in relation to overall personality assessment methods was substantive, and the estimates provided enable practitioners to perform a correct utility analysis of their overall selection procedure.
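
    For reference, the classical correction for direct range restriction (Thorndike Case II) can be written in a few lines; the indirect-restriction correction the study advocates requires additional information about the selection variable and is not reproduced here.

```python
# Thorndike Case II: correct an observed validity for direct range
# restriction on the predictor, given restricted and unrestricted SDs.
import math

def correct_direct_rr(r: float, sd_unrestricted: float, sd_restricted: float) -> float:
    u = sd_unrestricted / sd_restricted
    return r * u / math.sqrt(1 + r * r * (u * u - 1))

# e.g., observed validity .25 in a selected sample whose predictor SD
# is 60% of the applicant-pool SD:
print(round(correct_direct_rr(0.25, 1.0, 0.6), 3))  # -> about 0.40
```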

  7. A golden A{sub 5} model of leptons with a minimal NLO correction

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, Iain K., E-mail: ikc1g08@soton.ac.uk; King, Stephen F., E-mail: king@soton.ac.uk; Stuart, Alexander J., E-mail: a.stuart@soton.ac.uk

    2013-10-21

    We propose a new A{sub 5} model of leptons which corrects the LO predictions of Golden Ratio mixing via a minimal NLO Majorana mass correction which completely breaks the original Klein symmetry of the neutrino mass matrix. The minimal nature of the NLO correction leads to a restricted and correlated range of the mixing angles allowing agreement within the one sigma range of recent global fits following the reactor angle measurement by Daya Bay and RENO. The minimal NLO correction also preserves the LO inverse neutrino mass sum rule leading to a neutrino mass spectrum that extends into the quasi-degenerate region allowing the model to be accessible to the current and future neutrinoless double beta decay experiments.

  8. Heisenberg coupling constant predicted for molecular magnets with pairwise spin-contamination correction

    Energy Technology Data Exchange (ETDEWEB)

    Masunov, Artëm E., E-mail: amasunov@ucf.edu [NanoScience Technology Center, Department of Chemistry, and Department of Physics, University of Central Florida, Orlando, FL 32826 (United States); Photochemistry Center RAS, ul. Novatorov 7a, Moscow 119421 (Russian Federation); Gangopadhyay, Shruba [Department of Physics, University of California, Davis, CA 95616 (United States); IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120 (United States)

    2015-12-15

    A new method to eliminate spin contamination in broken symmetry density functional theory (BS DFT) calculations is introduced. Unlike conventional spin-purification correction, this method is based on canonical Natural Orbitals (NO) for each high/low spin coupled electron pair. We derive an expression to extract the energy of the pure singlet state given in terms of the energy of the BS DFT solution, the occupation number of the bonding NO, and the energy of the higher spin state built on these bonding and antibonding NOs (not self-consistent Kohn–Sham orbitals of the high spin state). Compared to the other spin-contamination correction schemes, the spin-correction is applied to each correlated electron pair individually. We investigate two binuclear Mn(IV) molecular magnets using this pairwise correction. While one of the molecules is described by magnetic orbitals strongly localized on the metal centers, and the spin gap is accurately predicted by the Noodleman and Yamaguchi schemes, for the other one the gap is predicted poorly by these schemes due to strong delocalization of the magnetic orbitals onto the ligands. We show our new correction to yield more accurate results in both cases. - Highlights: • Magnetic orbitals obtained for high and low spin states are not related. • Spin-purification correction becomes inaccurate for delocalized magnetic orbitals. • We use the natural orbitals of the broken symmetry state to build high spin state. • This new correction is made separately for each electron pair. • Our spin-purification correction is more accurate for delocalised magnetic orbitals.
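
    For orientation, the conventional scheme the abstract benchmarks against can be summarized by the standard Yamaguchi spin-projection formula (textbook form, not the paper's new pairwise correction). For the Heisenberg Hamiltonian H = −2J S₁·S₂, the exchange coupling is estimated from the broken-symmetry (BS) and high-spin (HS) solutions as

```latex
J \;=\; \frac{E_{\mathrm{BS}} - E_{\mathrm{HS}}}
             {\langle \hat{S}^2 \rangle_{\mathrm{HS}} - \langle \hat{S}^2 \rangle_{\mathrm{BS}}},
```

    which interpolates between the weak-overlap (Noodleman) and strong-delocalization limits as the computed ⟨S²⟩ values vary. The pairwise NO-based correction described above replaces this single global ratio with one correction per coupled electron pair.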

  9. Comparison and Analysis of Geometric Correction Models of Spaceborne SAR.

    Science.gov (United States)

    Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong

    2016-06-25

    Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted large studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-Doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model for users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, whose precision is below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy under one pixel, whereas the RPC model consumes one third of the time of the EDM model.

  10. Comparison and Analysis of Geometric Correction Models of Spaceborne SAR

    Directory of Open Access Journals (Sweden)

    Weihao Jiang

    2016-06-01

    Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted large studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-Doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model for users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, whose precision is below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy under one pixel, whereas the RPC model consumes one third of the time of the EDM model.

  11. Model-Corrected Microwave Imaging through Periodic Wall Structures

    Directory of Open Access Journals (Sweden)

    Paul C. Chang

    2012-01-01

    A model-based imaging framework is applied to correct the target distortion seen in microwave imaging through a periodic wall structure. In addition to propagation delays caused by the wall, it is shown that the structural periodicity induces high-order space harmonics leading to other ghost artifacts in the through-wall image. To overcome these distortions, the periodic layer Green’s function is incorporated into the forward model. A linear back-projection solution and a nonlinear minimization solution are applied to solve the inverse problem. The model-based back-projection image corrects the distortion and has higher resolution compared with free space due to the inclusion of multipath propagation through the periodic wall, but considerable sidelobe clutter is present. The nonlinear solution not only corrects target distortion without clutter but also reduces the solution to a sparse form.

  12. Gas explosion prediction using CFD models

    Energy Technology Data Exchange (ETDEWEB)

    Niemann-Delius, C.; Okafor, E. [RWTH Aachen Univ. (Germany); Buhrow, C. [TU Bergakademie Freiberg Univ. (Germany)

    2006-07-15

    A number of CFD models are currently available to model gaseous explosions in complex geometries. Some of these tools allow the representation of complex environments within hydrocarbon production plants. In certain explosion scenarios, a correction is usually made for the presence of buildings and other complexities by using crude approximations to obtain realistic estimates of explosion behaviour, as can be found when predicting the strength of blast waves resulting from initial explosions. With the advance of computational technology and the greater availability of computing power, computational fluid dynamics (CFD) tools are becoming increasingly available for solving a wide range of explosion problems. A CFD-based explosion code such as FLACS can, for instance, be used with confidence to understand the impact of blast overpressures in a plant environment consisting of obstacles such as buildings, structures, and pipes. With its porosity concept representing geometry details smaller than the grid, FLACS can represent geometry well, even when using coarse grid resolutions. The performance of FLACS has been evaluated using a wide range of field data. In the present paper, the concept of computational fluid dynamics and its application to gas explosion prediction is presented. Furthermore, the predictive capabilities of CFD-based gaseous explosion simulators are demonstrated using FLACS. Details about the FLACS code, some extensions made to FLACS, model validation exercises, application, and some results from blast load prediction within an industrial facility are presented. (orig.)

  13. Predictive models of forest dynamics.

    Science.gov (United States)

    Purves, Drew; Pacala, Stephen

    2008-06-13

    Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.

  14. Loop Corrections to Standard Model fields in inflation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xingang [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics,60 Garden Street, Cambridge, MA 02138 (United States); Department of Physics, The University of Texas at Dallas,800 W Campbell Rd, Richardson, TX 75080 (United States); Wang, Yi [Department of Physics, The Hong Kong University of Science and Technology,Clear Water Bay, Kowloon, Hong Kong (China); Xianyu, Zhong-Zhi [Center of Mathematical Sciences and Applications, Harvard University,20 Garden Street, Cambridge, MA 02138 (United States)

    2016-08-08

    We calculate 1-loop corrections to the Schwinger-Keldysh propagators of Standard-Model-like fields of spin-0, 1/2, and 1, with all renormalizable interactions during inflation. We pay special attention to the late-time divergences of loop corrections, and show that the divergences can be resummed into finite results in the late-time limit using dynamical renormalization group method. This is our first step toward studying both the Standard Model and new physics in the primordial universe.

  15. Prediction-correction alternating direction method for a class of constrained min-max problems

    Institute of Scientific and Technical Information of China (English)

    LI Min; HE Bingsheng

    2007-01-01

    The problems concerned in this paper are a class of constrained min-max problems. By introducing Lagrange multipliers for the linear constraints, such problems can be solved by some projection-type prediction-correction methods. However, to obtain the components of the predictor one by one, we use an alternating direction method. The new iterate is then generated by a minor correction. Global convergence of the proposed method is proved. Finally, numerical results for a constrained single-facility location problem are provided to verify that the new method is effective for some practical problems.

  16. HESS Opinions "Should we apply bias correction to global and regional climate model data?"

    Directory of Open Access Journals (Sweden)

    J. Liebert

    2012-04-01

    Despite considerable progress in recent years, output of both Global and Regional Circulation Models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem bias correction (BC), i.e. the correction of model output towards observations in a post-processing step for its subsequent application in climate change impact studies, has now become a standard procedure. In this paper we argue that bias correction, which has a considerable influence on the results of impact studies, is not a valid procedure in the way it is currently used: it impairs the advantages of Circulation Models, which are based on established physical laws, by altering spatiotemporal field consistency and relations among variables, and by violating conservation principles. Bias correction largely neglects feedback mechanisms, and it is unclear whether bias correction methods are time-invariant under climate change conditions. Applying bias correction increases the agreement of Climate Model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this masks rather than reduces uncertainty, which may lead to avoidable forejudging by end users and decision makers. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction, and propose ways to cope with biased output of Circulation Models in the short term and to reduce the bias in the long term. The most promising strategy for improved future Global and Regional Circulation Model simulations is the increase in model resolution to the convection-permitting scale in combination with ensemble predictions based on sophisticated

  17. Reducing overlay sampling for APC-based correction per exposure by replacing measured data with computational prediction

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Oh, Jong Hun; Kim, Hyun Sik; Sung, Jun Ha; Kea, Marc

    2016-03-01

    One of the keys to successful mass production of sub-20nm nodes in the semiconductor industry is the development of an overlay correction strategy that can meet specifications, reduce the number of layers that require dedicated chuck overlay, and minimize measurement time. Three important aspects of this strategy are: correction per exposure (CPE), integrated metrology (IM), and the prioritization of automated correction over manual subrecipes. The first and third aspects are accomplished through an APC system that uses measurements from production lots to generate CPE corrections that are dynamically applied to future lots. The drawback of this method is that production overlay sampling must be extremely high in order to provide the system with enough data to generate CPE. That drawback makes IM particularly difficult because of the throughput impact that can be created on expensive bottleneck photolithography process tools. The goal is to realize the cycle time and feedback benefits of IM coupled with the enhanced overlay correction capability of automated CPE without impacting process tool throughput. This paper will discuss the development of a system that sends measured data with reduced sampling via an optimized layout to the exposure tool's computational modelling platform to predict and create "upsampled" overlay data in a customizable output layout that is compatible with the fab user CPE APC system. The result is dynamic CPE without the burden of extensive measurement time, which leads to increased utilization of IM.

  18. An Equilibrium-Correction Model for Dynamic Network Data

    NARCIS (Netherlands)

    D.J. Dekker (David); Ph.H.B.F. Franses (Philip Hans); D. Krackhardt (David)

    2001-01-01

    We propose a two-stage MRQAP to analyze dynamic network data, within the framework of an equilibrium-correction (EC) model. Extensive simulation results indicate practical relevance of our method and its improvement over standard OLS. An empirical illustration additionally shows that the

  19. An equilibrium-correction model for dynamic network data

    NARCIS (Netherlands)

    R. Dekker (Rommert); Ph.H.B.F. Franses (Philip Hans); D. Krackhardt (David)

    2003-01-01

    We propose a two-stage MRQAP to analyze dynamic network data, within the framework of an equilibrium-correction (EC) model. Extensive simulation results indicate practical relevance of our method and its improvement over standard OLS. An empirical illustration additionally shows that the

  20. Cloudy bag model IV. Pionic corrections to the nucleon properties

    Energy Technology Data Exchange (ETDEWEB)

    Theberge, S. (British Columbia Univ., Vancouver (Canada). Dept. of Physics); Miller, G.A. (Washington Univ., Seattle (USA). Dept. of Physics); Thomas, A.W. (British Columbia Univ., Vancouver (Canada). TRIUMF Facility)

    1982-01-01

    A detailed formulation of the Hamiltonian formalism, together with a consistent renormalization procedure, is described for the cloudy bag model. The electromagnetic properties of the nucleon are calculated with center-of-mass corrections included. Good agreement with the experimental results is obtained for bag radii ranging from 0.8 to 1.0 fm.

  1. Performance Analysis of OFDM with Frequency Offset and Correction Model

    Institute of Scientific and Technical Information of China (English)

    QIN Sheng-ping; YIN Chang-chuan; LUO Tao; YUE Guang-xin

    2003-01-01

    The performance of OFDM with frequency offset is analyzed and simulated in this paper. It is concluded that the impact on the SIR is significant and that the BER of an OFDM system with frequency offset is strongly affected. A BER calculation method is introduced and simulated. Assuming that the frequency offset is known, a frequency offset correction model is discussed.

  2. EMPIRICAL MODEL FOR HYDROCYCLONES CORRECTED CUT SIZE CALCULATION

    Directory of Open Access Journals (Sweden)

    André Carlos Silva

    2012-12-01

    Hydrocyclones are devices used worldwide in mineral processing for desliming, classification, selective classification, thickening and pre-concentration. A hydrocyclone is composed of one cylindrical and one conical section joined together, without any moving parts, and it is capable of performing granular material separation in pulp. The mineral particle separation mechanism acting in a hydrocyclone is complex and its mathematical modelling is usually empirical. The most widely used model for the hydrocyclone corrected cut size is the one proposed by Plitt. Over the years many revisions and corrections to Plitt's model were proposed. The present paper presents a modification of the Plitt model constant, obtained by exponential regression of simulated data for three different hydrocyclone geometries: Rietema, Bradley and Krebs. To validate the proposed model, literature data obtained from phosphate ore using fifteen different hydrocyclone geometries are used. The proposed model shows a correlation of 88.2% between experimental and calculated corrected cut sizes, while the correlation obtained using Plitt's model is 11.5%.

  3. Cathode design investigation based on iterative correction of predicted profile errors in electrochemical machining of compressor blades

    Institute of Scientific and Technical Information of China (English)

    Zhu Dong; Liu Cheng; Xu Zhengyang; Liu Jia

    2016-01-01

    Electrochemical machining (ECM) is an effective and economical manufacturing method for machining hard-to-cut metal materials that are often used in the aerospace field. Cathode design is very complicated in ECM and is a core problem influencing machining accuracy, especially for complex profiles such as compressor blades in aero engines. A new cathode design method based on iterative correction of predicted profile errors in blade ECM is proposed in this paper. A mathematical model is first built according to the ECM shaping law, and a simulation is then carried out using ANSYS software. A dynamic forming process is obtained and machining gap distributions at different stages are analyzed. Additionally, the simulation deviation between the predicted profile and the model is improved by the new method through correcting the initial cathode profile. Furthermore, validation experiments are conducted using cathodes designed before and after the simulation correction. Machining accuracy for the optimal cathode is improved markedly compared with that for the initial cathode. The experimental results illustrate the suitability of the new method and that it can also be applied to other complex engine components such as diffusers.

  4. Cathode design investigation based on iterative correction of predicted profile errors in electrochemical machining of compressor blades

    Directory of Open Access Journals (Sweden)

    Zhu Dong

    2016-08-01

    Electrochemical machining (ECM) is an effective and economical manufacturing method for machining hard-to-cut metal materials that are often used in the aerospace field. Cathode design is very complicated in ECM and is a core problem influencing machining accuracy, especially for complex profiles such as compressor blades in aero engines. A new cathode design method based on iterative correction of predicted profile errors in blade ECM is proposed in this paper. A mathematical model is first built according to the ECM shaping law, and a simulation is then carried out using ANSYS software. A dynamic forming process is obtained and machining gap distributions at different stages are analyzed. Additionally, the simulation deviation between the predicted profile and the model is improved by the new method through correcting the initial cathode profile. Furthermore, validation experiments are conducted using cathodes designed before and after the simulation correction. Machining accuracy for the optimal cathode is improved markedly compared with that for the initial cathode. The experimental results illustrate the suitability of the new method and that it can also be applied to other complex engine components such as diffusers.

  5. Topological quantum error correction in the Kitaev honeycomb model

    Science.gov (United States)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  6. Oblique corrections in the Dine-Fischler-Srednicki axion model

    Directory of Open Access Journals (Sweden)

    Katanaeva Alisa

    2016-01-01

    We discuss the Dine-Fischler-Srednicki (DFS) model, which extends the two-Higgs doublet model with an additional Peccei-Quinn symmetry and leads to a physically acceptable axion. The non-linear parametrization of the DFS model is exploited in the generic case where all scalars except the lightest Higgs and the axion have masses at or beyond the TeV scale. We compute the oblique corrections and use their values from the electroweak experimental fits to put constraints on the mass spectrum of the DFS model.

  7. Evaluation and Correction of Quantitative Precipitation Forecast by Storm-Scale NWP Model in Jiangsu, China

    Directory of Open Access Journals (Sweden)

    Gaili Wang

    2016-01-01

    With the development of high-performance computer systems and data assimilation techniques, storm-scale numerical weather prediction (NWP) models are gradually being used for short-term deterministic forecasts. The primary objective of this study is to evaluate and correct precipitation forecasts of a storm-scale NWP model called the Advanced Regional Prediction System (ARPS). The evaluation and correction consider five heavy precipitation events that occurred in the summer of 2015 in Jiangsu, China. The performances of the original and corrected ARPS precipitation forecasts are evaluated as a function of lead time using standard measures and a spatial verification method called Structure-Amplitude-Location (SAL). In general, the ARPS could not produce optimal forecasts for very short lead times, and the forecast accuracy improves with increasing lead time. The ARPS overestimates precipitation at all lead times, which is confirmed by large bias in many forecasts in the first and second quadrants of the SAL diagram, especially at the 1 h lead time. The amplitude correction is performed by matching percentile values of the ARPS precipitation forecasts and observations for each lead time. The amplitude correction significantly improved the ARPS precipitation forecasts in terms of the considered standard performance indices and the A- and S-components of SAL.

  8. Electroweak Corrections and Unitarity in Linear Moose Models

    CERN Document Server

    Chivukula, R S; Kurachi, M; Simmons, E H; Tanabashi, M; He, Hong-Jian; Kurachi, Masafumi; Simmons, Elizabeth H.; Tanabashi, Masaharu

    2004-01-01

    We calculate the form of the corrections to the electroweak interactions in the class of Higgsless models which can be "deconstructed" to a chain of SU(2) gauge groups adjacent to a chain of U(1) gauge groups, and with the fermions coupled to any single SU(2) group and to any single U(1) group along the chain. The primary advantage of our technique is that the size of corrections to electroweak processes can be directly related to the spectrum of vector bosons ("KK modes"). In Higgsless models, this spectrum is constrained by unitarity. Our methods also allow for arbitrary background 5-D geometry, spatially dependent gauge-couplings, and brane kinetic energy terms. We find that, due to the size of corrections to electroweak processes in any unitary theory, Higgsless models with localized fermions are disfavored by precision electroweak data. Although we stress our results as they apply to continuum Higgsless 5-D models, they apply to any linear moose model including those with only a few extra vector bosons....

  9. Analysis and Correction of Systematic Height Model Errors

    Science.gov (United States)

    Jacobsen, K.

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. As is standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and in some cases, such as those caused by a small base length, such an image orientation does not achieve the accuracy possible for height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, the Chinese stereo satellite ZiYuan-3 (ZY-3) in particular has a limited calibration accuracy and only a 4 Hz attitude recording, which may not be satisfactory. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency towards systematic deformation in a Pléiades tri-stereo combination with a small base length; the small base length magnifies small systematic errors in object space. Systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of the height models, but low-frequency height deformations can also be seen. In theory, a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Better accuracy has been reached with the support of reference height models. As reference height models, the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are used.

  10. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. As is standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and in some cases, such as those caused by a small base length, such an image orientation does not achieve the accuracy possible for height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, the Chinese stereo satellite ZiYuan-3 (ZY-3) in particular has a limited calibration accuracy and only a 4 Hz attitude recording, which may not be satisfactory. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency towards systematic deformation in a Pléiades tri-stereo combination with a small base length; the small base length magnifies small systematic errors in object space. Systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of the height models, but low-frequency height deformations can also be seen. In theory, a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Better accuracy has been reached with the support of reference height models. As reference height models, the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are used.

  11. Damping Functions correct over-dissipation of the Smagorinsky Model

    CERN Document Server

    Pakzad, Ali

    2016-01-01

    This paper studies the time-averaged energy dissipation rate $\langle \varepsilon_{SMD}(u)\rangle$ for the combination of the Smagorinsky model and a damping function. The Smagorinsky model is well known to over-damp. One common correction is to include damping functions that reduce the effects of model viscosity near walls. Mathematical analysis is given here that allows evaluation of $\langle \varepsilon_{SMD}(u)\rangle$ for any damping function. Moreover, the analysis motivates a modified van Driest damping. It is proven that the combination of the Smagorinsky model with this modified damping function does not over-dissipate and is also consistent with Kolmogorov phenomenology.
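
    For reference, the classical van Driest damping applied to the Smagorinsky eddy viscosity looks like the sketch below; the paper's modified damping differs in detail, and the constants A+ = 26 and Cs = 0.17 are conventional textbook values, not the paper's:

    ```python
    import numpy as np

    def van_driest(y_plus, a_plus=26.0):
        """Classical van Driest wall-damping function, 1 - exp(-y+/A+)."""
        return 1.0 - np.exp(-y_plus / a_plus)

    def smagorinsky_viscosity(strain_rate_mag, delta, y_plus, c_s=0.17):
        """Smagorinsky eddy viscosity with van Driest damping:
        nu_t = (C_s * f(y+) * Delta)^2 * |S|."""
        f = van_driest(y_plus)
        return (c_s * f * delta) ** 2 * strain_rate_mag

    # Near the wall (small y+) the damping suppresses the model viscosity,
    # counteracting the Smagorinsky model's tendency to over-dissipate there.
    print(smagorinsky_viscosity(strain_rate_mag=50.0, delta=0.01, y_plus=5.0))
    print(smagorinsky_viscosity(strain_rate_mag=50.0, delta=0.01, y_plus=300.0))
    ```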

  12. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... for linearity is of particular interest as parameters of non-linear components vanish under the null. To solve the latter type of testing, we use the so-called sup tests, which here requires development of new (uniform) weak convergence results. These results are potentially useful in general for analysis...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...

  13. Bayesian Degree-Corrected Stochastic Block Models for Community Detection

    CERN Document Server

    Peng, Lijun

    2013-01-01

    Community detection in networks has drawn much attention in diverse fields, especially the social sciences. Given its significance, a large body of literature has developed, much of it not statistically based. In this paper, we propose a novel stochastic blockmodel based on a logistic regression setup with node correction terms to better address this problem. We follow a Bayesian approach that explicitly captures the community behavior via prior specification. We then adopt a data augmentation strategy with latent Pólya-Gamma variables to obtain posterior samples. We conduct inference based on a canonically mapped centroid estimator that formally addresses label non-identifiability. We demonstrate the proposed model and estimation on real-world as well as simulated benchmark networks and show that the proposed model and estimator are more flexible, representative, and yield smaller error rates when compared to the MAP estimator from classical degree-corrected stochastic blockmodels.
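
    A sketch of the kind of edge model underlying a logistic blockmodel with node correction terms; the Pólya-Gamma sampler and the centroid estimator are omitted, and all parameter values below are toy:

    ```python
    import numpy as np

    def edge_logit(theta, B, z, i, j):
        """Log-odds of an edge between nodes i and j: node correction
        terms plus a block affinity term on the logit scale."""
        return theta[i] + theta[j] + B[z[i], z[j]]

    def log_likelihood(A, theta, B, z):
        """Bernoulli log-likelihood of an undirected adjacency matrix A."""
        n = A.shape[0]
        ll = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                eta = edge_logit(theta, B, z, i, j)
                p = 1.0 / (1.0 + np.exp(-eta))
                ll += A[i, j] * np.log(p) + (1 - A[i, j]) * np.log(1 - p)
        return ll

    # Toy network: 2 communities, degree-heterogeneous nodes
    z = np.array([0, 0, 1, 1])                   # community labels
    theta = np.array([0.5, -1.0, 0.2, -0.3])     # node "degree" corrections
    B = np.array([[1.0, -2.0], [-2.0, 1.0]])     # block affinities (logits)
    A = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
    print(log_likelihood(A, theta, B, z))
    ```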

  14. Generalized Density-Corrected Model for Gas Diffusivity in Variably Saturated Soils

    DEFF Research Database (Denmark)

    Chamindu, Deepagoda; Møldrup, Per; Schjønning, Per

    2011-01-01

    Accurate predictions of the soil-gas diffusivity (Dp/Do, where Dp is the soil-gas diffusion coefficient and Do is the diffusion coefficient in free air) from easily measurable parameters like air-filled porosity (ε) and soil total porosity (φ) are valuable when predicting soil aeration...... and the emission of greenhouse gases and gaseous-phase contaminants from soils. Soil type (texture) and soil density (compaction) are two key factors controlling gas diffusivity in soils. We extended a recently presented density-corrected Dp(ε)/Do model by letting both model parameters (α and β) be interdependent...... and also functions of φ. The extension was based on literature measurements on Dutch and Danish soils ranging from sand to peat. The parameter α showed a promising linear relation to total porosity, while β also varied with α following a weak linear relation. The thus generalized density-corrected (GDC)...

  15. Adaptive cyclic physiologic noise modeling and correction in functional MRI.

    Science.gov (United States)

    Beall, Erik B

    2010-03-30

    Physiologic noise in BOLD-weighted MRI data is known to be a significant source of variance, reducing the statistical power and specificity in fMRI and functional connectivity analyses. We show a dramatic improvement on current noise correction methods in both fMRI and fcMRI data that avoids overfitting. The traditional noise model is a Fourier series expansion superimposed on the periodicity of breathing and cardiac cycles measured in parallel. Correction using this model removes variance matching the periodicity of the physiologic cycles. This framework allows easy modeling of noise; however, using a large number of regressors comes at the cost of removing variance unrelated to physiologic noise, such as variance due to the signal of functional interest (overfitting the data). It is our hypothesis that a small variety of fits describes all of the significantly coupled physiologic noise. If this is true, we can replace the large number of regressors used in the model with a smaller number of fitted regressors and thereby account for the noise sources with a smaller reduction in the variance of interest. We describe these extensions and demonstrate that we can preserve variance in the data unrelated to physiologic noise while removing physiologic noise equivalently, resulting in data with a higher effective SNR than with current correction techniques. Our results demonstrate a significant improvement in the sensitivity of fMRI (up to a 17% increase in activation volume compared with higher-order traditional noise correction) and functional connectivity analyses.
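
    A minimal sketch of the traditional Fourier-series (RETROICOR-style) correction that the paper starts from; the adaptive reduced-regressor selection itself is not reproduced, and all names and signal parameters are ours:

    ```python
    import numpy as np

    def physio_regressors(phase, n_harmonics=2):
        """Fourier-series regressors on a physiologic phase (radians),
        one sine/cosine pair per harmonic."""
        cols = []
        for k in range(1, n_harmonics + 1):
            cols.append(np.cos(k * phase))
            cols.append(np.sin(k * phase))
        return np.column_stack(cols)

    def remove_noise(bold, cardiac_phase, resp_phase):
        """Project physiologic regressors out of a voxel time series (OLS)."""
        X = np.column_stack([np.ones_like(bold),
                             physio_regressors(cardiac_phase),
                             physio_regressors(resp_phase)])
        beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
        return bold - X @ beta + bold.mean()   # keep the mean signal level

    t = np.arange(200) * 2.0                       # TR = 2 s
    cardiac = 2 * np.pi * 1.1 * t % (2 * np.pi)    # ~66 bpm cardiac phase
    resp = 2 * np.pi * 0.25 * t % (2 * np.pi)      # ~15 breaths/min phase
    bold = 100 + np.sin(cardiac) + 0.5 * np.cos(resp) + np.random.randn(200)
    cleaned = remove_noise(bold, cardiac, resp)
    print(cleaned[:4])
    ```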

  16. A hidden Markov model for predicting transmembrane helices in protein sequences

    DEFF Research Database (Denmark)

    Sonnhammer, Erik L.L.; von Heijne, Gunnar; Krogh, Anders Stærmose

    1998-01-01

    and constraints involved. Models were estimated both by maximum likelihood and a discriminative method, and a method for reassignment of the membrane helix boundaries was developed. In a cross-validated test on single sequences, our transmembrane HMM, TMHMM, correctly predicts the entire topology for 77...
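
    To illustrate the kind of decoding such an HMM performs, here is a toy two-state Viterbi sketch over a hydrophobicity signal; the real TMHMM architecture (helix core and cap states, inside/outside loops) is far richer, and all probabilities below are invented:

    ```python
    import numpy as np

    states = ["M", "L"]                            # membrane helix / loop
    log_trans = np.log(np.array([[0.9, 0.1],       # M -> M, M -> L
                                 [0.05, 0.95]]))   # L -> M, L -> L
    log_start = np.log(np.array([0.1, 0.9]))

    def log_emit(state, hydrophobic):
        """Bernoulli emission: helices favor hydrophobic residues."""
        p = 0.85 if state == 0 else 0.3
        return np.log(p if hydrophobic else 1.0 - p)

    def viterbi(observed):
        """Most likely state path for a binary hydrophobicity sequence."""
        n, k = len(observed), len(states)
        score = np.full((n, k), -np.inf)
        back = np.zeros((n, k), dtype=int)
        for s in range(k):
            score[0, s] = log_start[s] + log_emit(s, observed[0])
        for t in range(1, n):
            for s in range(k):
                cand = score[t - 1] + log_trans[:, s]
                back[t, s] = int(np.argmax(cand))
                score[t, s] = cand[back[t, s]] + log_emit(s, observed[t])
        path = [int(np.argmax(score[-1]))]
        for t in range(n - 1, 0, -1):              # trace back
            path.append(back[t, path[-1]])
        return "".join(states[s] for s in reversed(path))

    print(viterbi([0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0]))  # 1 = hydrophobic
    ```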

  17. Surface-effect corrections for the solar model

    CERN Document Server

    Magic, Zazralt

    2016-01-01

    Solar p-mode oscillations exhibit a systematic offset towards higher frequencies due to shortcomings in the 1D stellar structure models, especially, the lack of turbulent pressure in the superadiabatic layers just below the optical surface, arising from the convective velocity field. We study the influence of the turbulent expansion, chemical composition, and magnetic fields on the stratification in the upper layers of the solar models in comparison with solar observations. Furthermore, we test alternative averages for improved results on the oscillation frequencies. We appended temporally and spatially averaged stratifications to 1D models to compute adiabatic oscillation frequencies that we then tested against observations. We also developed depth-dependent corrections for the solar 1D model, for which we expanded the geometrical depth to match the pressure stratification of the solar model, and we reduced the density that is caused by the turbulent pressure. We obtain the same results with our models a...

  18. Accurate mask model implementation in optical proximity correction model for 14-nm nodes and beyond

    Science.gov (United States)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle

    2016-04-01

    In a previous work, we demonstrated that the current optical proximity correction model assuming the mask pattern to be analogous to the designed data is no longer valid. An extreme case of line-end shortening shows a gap of up to 10 nm (at mask level). For that reason, an accurate mask model has been calibrated for a 14-nm logic gate level. A model with a total RMS of 1.38 nm at mask level was obtained. Two-dimensional structures, such as line-end shortening and corner rounding, were well predicted using scanning electron microscopy pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in the current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part focuses on results from the comparison of the two models, the new and the regular one.

  19. Assessment of ten DFT methods in predicting structures of sheet silicates: importance of dispersion corrections.

    Science.gov (United States)

    Tunega, Daniel; Bučko, Tomáš; Zaoui, Ali

    2012-09-21

    The performance of ten density functional theory (DFT) methods in the prediction of the structure of four clay minerals, in which non-bonding interactions dominate the layer stacking (dispersive forces in talc and pyrophyllite, and hydrogen bonds in lizardite and kaolinite), is reported. The set of DFT methods included the following functionals: standard local and semi-local (LDA, PW91, PBE, and RPBE), dispersion-corrected (PW91-D2, PBE-D2, RPBE-D2, and vdW-TS), and functionals developed specifically for solids and solid surfaces (PBEsol and AM05). We have shown that the standard DFT functionals fail to correctly predict the structural parameters for which non-bonding interactions are important. A remarkable improvement, leading to very good agreement with experimental structures, is achieved if dispersion corrections are included in the DFT calculations. In such cases the relative error for the most sensitive lattice vector c dropped below 1%. Very good performance was also observed for both DFT functionals developed for solids; in particular, the results achieved with PBEsol are qualitatively similar to those with DFT-D2.
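
    The D2-type corrections named above add a pairwise dispersion term of the Grimme form; a sketch with placeholder parameters (not tabulated C6 coefficients or van der Waals radii):

    ```python
    import numpy as np

    def d2_dispersion(positions, c6, r0, s6=0.75, d=20.0):
        """Grimme D2-style pairwise dispersion energy:
        E = -s6 * sum_{i<j} C6_ij / R_ij^6 * f_damp(R_ij),
        f_damp(R) = 1 / (1 + exp(-d * (R / R0_ij - 1)))."""
        e = 0.0
        n = len(positions)
        for i in range(n):
            for j in range(i + 1, n):
                r = np.linalg.norm(positions[i] - positions[j])
                c6_ij = np.sqrt(c6[i] * c6[j])   # geometric-mean combination
                r0_ij = r0[i] + r0[j]            # sum of vdW radii
                f = 1.0 / (1.0 + np.exp(-d * (r / r0_ij - 1.0)))
                e -= s6 * c6_ij / r**6 * f
        return e

    # Two-"atom" toy system; parameters are placeholders, not real values
    pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.5]])
    print(d2_dispersion(pos, c6=[10.0, 10.0], r0=[1.5, 1.5]))
    ```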

  20. A PREDICT-CORRECT NUMERICAL INTEGRATION SCHEME FOR SOLVING NONLINEAR DYNAMIC EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    Fan Jianping; Huang Tao; Tang Chak-yin; Wang Cheng

    2006-01-01

    A new numerical integration scheme incorporating a predict-correct algorithm for solving nonlinear dynamic systems was proposed in this paper. A nonlinear dynamic system governed by the equation $\dot{v} = F(v, t)$ was transformed into the form $\dot{v} = Hv + f(v, t)$. The nonlinear part $f(v, t)$ was then expanded in a Taylor series and only the first-order term retained in the polynomial. Utilizing the theory of linear differential equations and the precise time-integration method, an exact solution of the linearized equation was obtained. In order to find the solution of the original system, a third-order interpolation polynomial of $v$ was used and an equivalent nonlinear ordinary differential equation was regenerated. With a predicted solution as an initial value and an iteration scheme, a corrected result was achieved. Since the error caused by linearization is eliminated in the correction process, the accuracy of the calculation was improved greatly. Three engineering scenarios were used to assess the accuracy and reliability of the proposed method and the results were satisfactory.
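
    The sketch below illustrates the predict-correct idea on $\dot{v} = Hv + f(v, t)$ in a simplified form (exact propagation of the linear part with $f$ frozen, then iterative correction with an averaged $f$); it is a stand-in for, not a reproduction of, the paper's precise time-integration with third-order interpolation:

    ```python
    import numpy as np
    from scipy.linalg import expm

    def step(v, t, dt, H, f, n_corr=3):
        """One predict-correct step for v' = H v + f(v, t).

        Predictor: exact linear flow exp(H dt) with f held constant.
        Corrector: re-evaluate f at the predicted state and iterate."""
        E = expm(H * dt)
        Hinv = np.linalg.inv(H)          # assumes H is nonsingular

        def advance(f_bar):
            # exact solution of v' = H v + f_bar for constant f_bar
            return E @ v + (E - np.eye(len(v))) @ (Hinv @ f_bar)

        v_new = advance(f(v, t))                         # predict
        for _ in range(n_corr):                          # correct
            f_avg = 0.5 * (f(v, t) + f(v_new, t + dt))
            v_new = advance(f_avg)
        return v_new

    # Duffing-type test: v = [x, x'], so x'' + 0.1 x' + x + x^3 = 0
    H = np.array([[0.0, 1.0], [-1.0, -0.1]])
    f = lambda v, t: np.array([0.0, -v[0] ** 3])
    v = np.array([1.0, 0.0])
    for k in range(1000):
        v = step(v, k * 0.01, 0.01, H, f)
    print(v)
    ```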

  1. Aero-acoustic noise of wind turbines. Noise prediction models

    Energy Technology Data Exchange (ETDEWEB)

    Maribo Pedersen, B. [ed.]

    1997-12-31

    Semi-empirical and CAA (Computational AeroAcoustics) noise prediction techniques are the subject of this expert meeting. The meeting presents and discusses models and methods. The meeting may provide answers to the following questions: Which noise sources are the most important? How are the sources best modeled? What needs to be done to make better predictions? Does it boil down to correct prediction of the unsteady aerodynamics around the rotor? Or is the difficult part converting the aerodynamics into acoustics? (LN)

  2. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  3. Off-Angle Iris Correction using a Biological Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Karakaya, Mahmut [ORNL; Barstow, Del R [ORNL; Boehnen, Chris Bensing [ORNL

    2013-01-01

    This work implements an eye model to simulate corneal refraction effects. Using this model, ray tracing is performed to calculate transforms to remove refractive effects in off-angle iris images when reprojected to a frontal view. The correction process is used as a preprocessing step for off-angle iris images for input to a commercial matcher. With this method, a match score distribution mean improvement of 11.65% for 30 degree images, 44.94% for 40 degree images, and 146.1% improvement for 50 degree images is observed versus match score distributions with unmodified images.

  4. Off-Angle Iris Correction using a Biological Model

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Joseph T [ORNL; Santos-Villalobos, Hector J [ORNL; Karakaya, Mahmut [ORNL; Barstow, Del R [ORNL; Bolme, David S [ORNL; Boehnen, Chris Bensing [ORNL

    2013-01-01

    This work implements an eye model to simulate corneal refraction effects. Using this model, ray tracing is performed to calculate transforms to remove refractive effects in off-angle iris images when reprojected to a frontal view. The correction process is used as a preprocessing step for off-angle iris images for input to a commercial matcher. With this method, a match score distribution mean improvement of 11.65% for 30 degree images, 44.94% for 40 degree images, and 146.1% improvement for 50 degree images is observed versus match score distributions with unmodified images.

  5. Small populations corrections for selection-mutation models

    CERN Document Server

    Jabin, Pierre-Emmanuel

    2012-01-01

    We consider integro-differential models describing the evolution of a population structured by a quantitative trait. Individuals interact competitively, creating a strong selection pressure on the population. On the other hand, mutations are assumed to be small. Following the formalism of Diekmann, Jabin, Mischler, and Perthame, this creates concentration phenomena, typically consisting of a sum of Dirac masses slowly evolving in time. We propose a modification to those classical models that takes the effect of small populations into account and corrects some abnormal behaviours.

  6. A Novel Dynamic Bandwidth Allocation Algorithm with Correction-Based Multiple-Traffic Prediction in EPON

    Directory of Open Access Journals (Sweden)

    Ziyi Fu

    2012-10-01

    For upstream time-division multiplexing (TDM) in Ethernet passive optical network (EPON) systems, this paper proposes a novel dynamic bandwidth allocation (DBA) algorithm that supports correction-based estimation of multiple services. To improve the real-time performance of the bandwidth allocation, the algorithm forecasts the traffic of high-priority services, and the bandwidth pre-allocated to the various priority services is then corrected according to Gaussian distribution characteristics, which makes the traffic prediction closer to the real traffic. The simulation results show that the proposed algorithm is better than the existing DBA algorithm: not only can it meet the delay requirement of high-priority services, but it also controls delay anomalies of low-priority services. In addition, with the correction scheme, it clearly improves the bandwidth utilization.
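
    A minimal sketch of one plausible reading of the Gaussian correction, namely granting the predicted mean plus a Gaussian-quantile safety margin; the abstract does not spell out the exact formula, so this is an assumption, and all names are ours:

    ```python
    import numpy as np

    def predict_with_correction(history, z=1.28):
        """Predict next-cycle traffic for a high-priority queue and correct
        the pre-allocated bandwidth upward by a Gaussian quantile margin
        (z = 1.28 ~ 90th percentile), so under-allocation is unlikely."""
        history = np.asarray(history, dtype=float)
        return history.mean() + z * history.std(ddof=1)

    # Bytes arriving per polling cycle for a high-priority (EF) queue
    ef_history = [1200, 1350, 1100, 1500, 1280, 1420]
    print(predict_with_correction(ef_history))   # pre-allocation grant
    ```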

  7. ITG: A New Global GNSS Tropospheric Correction Model.

    Science.gov (United States)

    Yao, Yibin; Xu, Chaoqian; Shi, Junbo; Cao, Na; Zhang, Bao; Yang, Junjian

    2015-07-21

    Tropospheric correction models are receiving increasing attention, as they play a crucial role in Global Navigation Satellite Systems (GNSS). The most commonly used models to date include the GPT2 series and TropGrid2. In this study, we analyzed the advantages and disadvantages of existing models and developed a new model called the Improved Tropospheric Grid (ITG). ITG considers annual, semi-annual and diurnal variations, and includes multiple tropospheric parameters. The amplitude and initial phase of the diurnal variation are estimated as a periodic function. ITG provides temperature, pressure, the weighted mean temperature (Tm) and the Zenith Wet Delay (ZWD). We conducted a performance comparison between the proposed ITG model and previous ones, in terms of meteorological measurements from 698 observation stations, Zenith Total Delay (ZTD) products from 280 International GNSS Service (IGS) stations and Tm from Global Geodetic Observing System (GGOS) products. The results indicate that ITG offers the best performance on the whole.
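
    A sketch of the harmonic functional form such grid models typically use (annual, semi-annual and diurnal terms, each with amplitude and initial phase); the coefficient values below are hypothetical, not ITG's:

    ```python
    import numpy as np

    def trop_param(doy, hour, mean, annual, semi_annual, diurnal):
        """Evaluate a gridded tropospheric parameter (e.g., ZWD in mm) as a
        mean value plus annual, semi-annual and diurnal harmonics.
        Each harmonic is a tuple (amplitude, initial phase in radians)."""
        value = mean
        for (amp, phase), period in [(annual, 365.25), (semi_annual, 182.625)]:
            value += amp * np.cos(2 * np.pi * doy / period - phase)
        amp_d, phase_d = diurnal
        value += amp_d * np.cos(2 * np.pi * hour / 24.0 - phase_d)
        return value

    # Hypothetical coefficients for one grid point
    print(trop_param(doy=180, hour=14, mean=150.0,
                     annual=(40.0, 0.5), semi_annual=(10.0, 1.0),
                     diurnal=(5.0, 2.0)))
    ```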

  8. Correcting biased observation model error in data assimilation

    CERN Document Server

    Harlim, John

    2016-01-01

    While the formulation of most data assimilation schemes assumes an unbiased observation model error, in real applications, model error with nontrivial biases is unavoidable. A practical example is the error in the radiative transfer model (which is used to assimilate satellite measurements) in the presence of clouds. As a consequence, many (in fact 99%) of the cloudy observed measurements are not being used although they may contain useful information. This paper presents a novel nonparametric Bayesian scheme which is able to learn the observation model error distribution and correct the bias in incoming observations. This scheme can be used in tandem with any data assimilation forecasting system. The proposed model error estimator uses nonparametric likelihood functions constructed with data-driven basis functions based on the theory of kernel embeddings of conditional distributions developed in the machine learning community. Numerically, we show positive results with two examples. The first example is des...

  9. arXiv Modeling NNLO jet corrections with neural networks

    CERN Document Server

    Carrazza, Stefano

    2017-01-01

    We present a preliminary strategy for modeling multidimensional distributions through neural networks. We study the efficiency of the proposed strategy by considering as input data the two-dimensional next-to-next-to-leading order (NNLO) jet k-factor distribution for the ATLAS 7 TeV 2011 data. We then validate the neural network model in terms of interpolation and prediction quality by comparing its results to alternative models.

  10. Inflationary scenarios in Starobinsky model with higher order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Artymowski, Michał [Institute of Physics, Jagiellonian University,Łojasiewicza 11, 30-348 Kraków (Poland); Lalak, Zygmunt [Institute of Theoretical Physics, Faculty of Physics, University of Warsaw,ul. Pasteura 5, 02-093 Warsaw (Poland); Lewicki, Marek [Institute of Theoretical Physics, Faculty of Physics, University of Warsaw,ul. Pasteura 5, 02-093 Warsaw (Poland); Michigan Center for Theoretical Physics, University of Michigan,450 Church Street, Ann Arbor MI 48109 (United States)

    2015-06-17

    We consider Starobinsky inflation with a set of higher order corrections parametrised by two real coefficients λ₁, λ₂. In the Einstein frame we have found a potential with the Starobinsky plateau, a steep slope and possibly an additional minimum, local maximum or saddle point. We have identified three types of inflationary behaviour that may be generated in this model: i) inflation on the plateau, ii) at the local maximum (topological inflation), iii) at the saddle point. We have found limits on the parameters λᵢ and on initial conditions at the Planck scale which enable successful inflation and disable eternal inflation at the plateau. We have checked that the local minimum away from the GR vacuum is stable and that the field can leave it neither via quantum tunnelling nor via thermal corrections.

  11. Modeling Polarized Solar Radiation for Correction of Satellite Data

    Science.gov (United States)

    Sun, W.

    2014-12-01

    Reflected solar radiation from the Earth-atmosphere system is polarized. If a non-polarimetric sensor has some polarization dependence, it can result in errors in the measured radiance. To correct the polarization-caused errors in satellite data, the polarization state of the reflected solar light must be known. In this presentation, recent studies of the polarized solar radiation from the ocean-atmosphere system with the adding-doubling radiative-transfer model (ADRTM) are reported. The modeled polarized solar radiation quantities are compared with PARASOL satellite measurements and DISORT model results. Sensitivities of reflected solar radiation's polarization to various ocean-surface and atmospheric conditions are addressed. A novel super-thin cloud detection method based on polarization measurements is also discussed. This study demonstrates that the modeling can provide a reliable approach for making the spectral Polarization Distribution Models (PDMs) for satellite inter-calibration applications of NASA's future Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission. Key words: Reflected solar radiation, polarization, correction of satellite data.

  12. Accurate Modeling of Organic Molecular Crystals by Dispersion-Corrected Density Functional Tight Binding (DFTB).

    Science.gov (United States)

    Brandenburg, Jan Gerit; Grimme, Stefan

    2014-06-05

    The ambitious goal of organic crystal structure prediction challenges theoretical methods regarding their accuracy and efficiency. Dispersion-corrected density functional theory (DFT-D) is in principle applicable, but the computational demands, for example, to compute a huge number of polymorphs, are too high. Here, we demonstrate that this task can be carried out by a dispersion-corrected density functional tight binding (DFTB) method. The semiempirical Hamiltonian with the D3 correction can accurately and efficiently model both solid- and gas-phase inter- and intramolecular interactions at a speedup of two orders of magnitude compared to DFT-D. The mean absolute deviations for interaction (lattice) energies for various databases are typically 2-3 kcal/mol (10-20%), that is, only about two times larger than those for DFT-D. For zero-point phonon energies, small deviations of <0.5 kcal/mol compared to DFT-D are obtained.

  13. On the importance of appropriate rain-gauge catch correction for hydrological modelling at mid to high latitudes

    Directory of Open Access Journals (Sweden)

    S. Stisen

    2012-03-01

    An existing rain-gauge catch correction method addressing solid and liquid precipitation was applied both as monthly mean correction factors based on a 30 yr climatology (standard correction) and as daily correction factors based on daily observations of wind speed and temperature (dynamic correction). The two methods resulted in different winter precipitation rates for the period 1990–2010. The resulting precipitation data sets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimized model parameter sets. Simulated stream discharge is improved significantly when introducing the dynamic precipitation correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly, due to recalibration adjusting model parameters to compensate for input biases. The resulting optimized model parameters are much more physically plausible for the model based on the dynamic correction. A proxy-basin test, where calibrated DK-Model parameters were transferred to another region without site-specific calibration, showed better performance for parameter values based on the dynamic correction. Similarly, the performance of the dynamic correction method was superior when considering two single years with a much drier and a much wetter winter, respectively, compared to the winters in the calibration period (differential split-sample tests). We conclude that dynamic precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates, when the snow/rain composition will differ from that of the past climate. This conclusion is expected to be applicable to mid to high latitudes, especially in coastal climates where the winter precipitation type (solid/liquid) fluctuates significantly.
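
    A generic sketch of what a daily dynamic catch-correction factor driven by wind speed and temperature might look like; the coefficients and temperature thresholds are illustrative placeholders, not the operational Danish values:

    ```python
    def catch_correction_factor(wind_speed, temperature):
        """Daily rain-gauge catch correction factor (illustrative).
        Undercatch grows with wind speed and is much larger for solid
        precipitation, so the factor depends on the precipitation phase."""
        rain = 1.0 + 0.02 * wind_speed   # liquid: modest wind losses
        snow = 1.0 + 0.15 * wind_speed   # solid: strong wind losses
        if temperature > 2.0:            # rain
            return rain
        elif temperature < -2.0:         # snow
            return snow
        else:                            # mixed phase: interpolate linearly
            w = (temperature + 2.0) / 4.0
            return w * rain + (1.0 - w) * snow

    # Correct a gauge reading of 5.2 mm on a windy, cold day
    print(5.2 * catch_correction_factor(wind_speed=6.0, temperature=-4.0))
    ```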

  14. Discharge simulations performed with a hydrological model using bias corrected regional climate model input

    Directory of Open Access Journals (Sweden)

    S. C. van Pelt

    2009-12-01

    Studies have demonstrated that precipitation at Northern Hemisphere mid-latitudes has increased in the last decades and that it is likely that this trend will continue. This will have an influence on the discharge of the river Meuse. The use of bias correction methods is important when the effect of precipitation change on river discharge is studied. The objective of this paper is to investigate the effect of using two different bias correction methods on output from a Regional Climate Model (RCM) simulation. In this study a Regional Atmospheric Climate Model (RACMO2) run is used, forced by ECHAM5/MPIOM under the SRES-A1B emission scenario, with a 25 km horizontal resolution. The RACMO2 runs contain a systematic precipitation bias to which two bias correction methods are applied. The first method corrects for the wet-day fraction and wet-day average (WD bias correction) and the second method corrects for the mean and coefficient of variation (MV bias correction). The WD bias correction initially corrects well for the average, but it appears that too many successive precipitation days were removed with this correction. The second method performed less well on average bias correction, but the temporal precipitation pattern was better. Subsequently, the discharge was calculated by using RACMO2 output as forcing to the HBV-96 hydrological model. A large difference was found between the simulated discharge of the uncorrected RACMO2 run, the WD bias-corrected run and the MV bias-corrected run. These results show the importance of an appropriate bias correction.
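
    A minimal sketch of the wet-day (WD) correction as described: first match the wet-day fraction by thresholding away surplus drizzle days, then scale the remaining wet-day amounts to match the observed wet-day mean (function and variable names are ours):

    ```python
    import numpy as np

    def wd_bias_correction(model_p, obs_p, wet_threshold=0.1):
        """Wet-day (WD) bias correction of daily precipitation (mm/day)."""
        model_p = np.asarray(model_p, dtype=float)
        obs_p = np.asarray(obs_p, dtype=float)
        # Step 1: cutoff so the model wet-day fraction matches observations
        obs_wet_frac = np.mean(obs_p >= wet_threshold)
        cutoff = np.quantile(model_p, 1.0 - obs_wet_frac)
        corrected = np.where(model_p >= cutoff, model_p, 0.0)
        # Step 2: scale wet-day amounts to match the observed wet-day mean
        wet = corrected > 0.0
        if wet.any():
            scale = obs_p[obs_p >= wet_threshold].mean() / corrected[wet].mean()
            corrected[wet] *= scale
        return corrected

    rng = np.random.default_rng(1)
    model = rng.gamma(0.6, 4.0, 3650) * (rng.random(3650) < 0.65)  # drizzles too often
    obs = rng.gamma(0.8, 5.0, 3650) * (rng.random(3650) < 0.45)
    corr = wd_bias_correction(model, obs)
    print(np.mean(corr > 0), np.mean(obs >= 0.1))  # wet-day fractions now match
    ```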

  15. Bias correction of temperature and precipitation data for regional climate model application to the Rhine basin

    Science.gov (United States)

    Terink, W.; Hurkmans, R. T. W. L.; Uijlenhoet, R.; Torfs, P. J. J. F.; Warmerdam, P. M. M.

    2009-04-01

    The Hydrology and Quantitative Water Management group of Wageningen University is involved in the EU research project NeWater. The objective of this project is to develop tools which provide medium range hydrological predictions by coupling catchment-scale water balance models and ensembles from mesoscale climate models. The catchment-scale distributed hydrological model used in this study is the Variable Infiltration Capacity (VIC) model. This hydrological model in combination with an ensemble from the climate model ECHAM5 (developed by the Max Planck Institute für Meteorologie (MPI-M), Hamburg) is being used to evaluate the effects of climate change on the hydrological regime of the Rhine basin and to assess the uncertainties involved in the ensembles from the climate model used in this study. Three future scenarios (2001-2100) are used in this study, which are downscaled ECHAM5 runs which were forced by the IPCC carbon emission scenarios B1, A1B and A2. A downscaled ECHAM5 "Climate of the 20th Century" run (1951-2000) is used as the reference climate. Downscaled ERA15 data is used to calibrate the VIC model. Downscaling of both the ECHAM5 and ERA15 model was carried out with the regional climate model REMO at MPI-M to a resolution of 0.088 degrees. The assessment of uncertainties involved in the climate model ensembles is performed by comparing the model (ECHAM5-REMO and ERA15-REMO) ensemble precipitation and temperature data with observations. This resulted in the detection of a bias in both the downscaled reference climate data and downscaled ERA15 data. A bias-correction has been applied to both the downscaled ERA15 data and the reference climate data. This bias-correction corrects for the mean and coefficient of variation for precipitation and the mean and standard deviation for temperature. The results of the applied bias-correction are analyzed spatially and temporally. Despite the fact that the bias-correction only uses two parameters, the coefficient of

  16. PREDICT : model for prediction of survival in localized prostate cancer

    NARCIS (Netherlands)

    Kerkmeijer, Linda G W; Monninkhof, Evelyn M.; van Oort, Inge M.; van der Poel, Henk G.; de Meerleer, Gert; van Vulpen, Marco

    2016-01-01

    Purpose: Current models for prediction of prostate cancer-specific survival do not incorporate all present-day interventions. In the present study, a pre-treatment prediction model for patients with localized prostate cancer was developed. Methods: From 1989 to 2008, 3383 patients were treated with I

  17. HESS Opinions "Should we apply bias correction to global and regional climate model data?"

    Directory of Open Access Journals (Sweden)

    J. Liebert

    2012-09-01

    Despite considerable progress in recent years, the output of both global and regional circulation models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem, bias correction (BC; i.e. the correction of model output towards observations in a post-processing step) has now become a standard procedure in climate change impact studies. In this paper we argue that BC is currently often used in an invalid way: it is added to the GCM/RCM model chain without sufficient proof that the consistency of the latter (i.e. the agreement between model dynamics/model output and our judgement) as well as the generality of its applicability increases. BC methods often impair the advantages of circulation models by altering spatiotemporal field consistency and relations among variables, and by violating conservation principles. Currently used BC methods largely neglect feedback mechanisms, and it is unclear whether they are time-invariant under climate change conditions. Applying BC increases the agreement of climate model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this hides rather than reduces uncertainty, which may lead to avoidable misjudgements by end users and decision makers. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction and propose ways to cope with biased output of circulation models in the short term and to reduce the bias in the long term. The most promising strategy for improved future global and regional circulation model simulations is the increase in model resolution to the convection-permitting scale in combination with

  18. Predictive models reduce talent development costs in female gymnastics.

    Science.gov (United States)

    Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle

    2017-04-01

    This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and at the same time reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated 5 years after talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and a multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron even classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. A selection procedure based upon the different statistical analyses results in a 33.3% decrease in costs, because the pool of selected athletes can be reduced to 92 instead of the 138 gymnasts selected by the coaches. The reduction in costs allows the limited resources to be fully invested in the high-potential athletes.
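
    A small scikit-learn sketch of the linear-versus-non-linear comparison on synthetic stand-in data; the study's actual test battery and the Kohonen feature maps are not reproduced (scikit-learn has no Kohonen map), so only LDA and an MLP are shown:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for the talent test battery: rows = gymnasts,
    # columns = test scores; y = still active (1) vs dropout (0) 5 yr later.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(243, 10))
    y = (X[:, :3].sum(axis=1) + rng.normal(scale=1.5, size=243) > 0).astype(int)

    lda = LinearDiscriminantAnalysis()
    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    print("LDA accuracy:", cross_val_score(lda, X, y, cv=5).mean())
    print("MLP accuracy:", cross_val_score(mlp, X, y, cv=5).mean())
    ```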

  19. ITER Side Correction Coil Quench model and analysis

    Science.gov (United States)

    Nicollet, S.; Bessette, D.; Ciazynski, D.; Duchateau, J. L.; Gauthier, F.; Lacroix, B.

    2016-12-01

    Previous thermohydraulic studies performed for the ITER TF, CS and PF magnet systems have brought some important information on the detection and consequences of a quench as a function of the initial conditions (deposited energy, heated length). Even if the temperature margin of the Correction Coils is high, their behavior during a quench should also be studied, since a quench is likely to be triggered by potential anomalies in joints, a ground fault on the instrumentation wires, etc. A model has been developed with the SuperMagnet Code (Bagnasco et al., 2010) for a Side Correction Coil (SCC2) with four pancakes cooled in parallel, each of them represented by a Thea module (with the proper Cable-in-Conduit Conductor characteristics). All the other coils of the PF cooling loop, hydraulically connected in parallel (top/bottom correction coils and six Poloidal Field Coils), are modeled by Flower modules with equivalent hydraulic properties. The model and the analysis results are presented for five quench initiation cases with/without fast discharge: two quenches initiated by a heat input to the innermost turn of one pancake (case 1 and case 2) and two other quenches initiated at the innermost turns of four pancakes (case 3 and case 4). In the fifth case, the quench is initiated at the middle turn of one pancake. The impact on the cooling circuit, e.g. the exceedance of the opening pressure of the quench relief valves, is detailed in the case of an undetected quench (i.e. no discharge of the magnet). Particular attention is also paid to a possible secondary quench detection system based on measured thermohydraulic signals (pressure, temperature and/or helium mass flow rate). The maximum cable temperature achieved in the case of a fast current discharge (primary detection by voltage) is compared to the design hot-spot criterion of 150 K, which includes the contribution of helium and jacket.

  20. Immediate postoperative outcome of orthognathic surgical planning, and prediction of positional changes in hard and soft tissue, independently of the extent and direction of the surgical corrections required

    DEFF Research Database (Denmark)

    Donatsky, Ole; Bjørn-Jørgensen, Jens; Hermund, Niels Ulrich;

    2011-01-01

    Our purpose was to evaluate the immediate postoperative outcome of preoperatively planned and predicted positional changes in hard and soft tissue in 100 prospectively and consecutively planned and treated patients; all had various dentofacial deformities that required single or double jaw...... orthognathic correction using the computerised, cephalometric, orthognathic, surgical planning system (TIOPS). Preoperative cephalograms were analysed and treatment plans and prediction tracings produced by computerised interactive simulation. The planned changes were transferred to models and finally...

  1. Two-loop electroweak threshold corrections in the Standard Model

    Directory of Open Access Journals (Sweden)

    Bernd A. Kniehl

    2015-07-01

    We study the relationships between the basic parameters of the on-shell renormalization scheme and their counterparts in the $\overline{\mathrm{MS}}$ scheme at full order $\mathcal{O}(\alpha^2)$ in the Standard Model. These enter as threshold corrections in the renormalization group analyses underlying, e.g., the investigation of vacuum stability. To ensure the gauge invariance of the parameters, in particular of the $\overline{\mathrm{MS}}$ masses, we work in $R_\xi$ gauge and systematically include tadpole contributions. We also consider the gaugeless-limit approximation and compare it with the full two-loop electroweak calculation.

  2. Quantum corrections in Higgs inflation: the Standard Model case

    CERN Document Server

    George, Damien P; Postma, Marieke

    2015-01-01

    We compute the one-loop renormalization group equations for Standard Model Higgs inflation. The calculation is done in the Einstein frame, using a covariant formalism for the multi-field system. All counterterms, and thus the beta functions, can be extracted from the radiative corrections to the two-point functions; the calculation of higher n-point functions then serves as a consistency check of the approach. We find that the theory is renormalizable in the effective field theory sense in the small-, mid- and large-field regimes. In the large-field regime our results differ slightly from those found in the literature, due to a different treatment of the Goldstone bosons.

  3. Predictive Modeling of Cardiac Ischemia

    Science.gov (United States)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  4. Correction of gait after derotation osteotomies in cerebral palsy: Are the effects predictable?

    Science.gov (United States)

    Böhm, Harald; Hösl, Matthias; Dussa, Chacravarthy U; Döderlein, Leonhard

    2015-10-01

    Derotation osteotomies of the femur and tibia are established procedures to improve transverse-plane deformities during walking with inwardly pointing knees and in- and out-toeing gait. However, the effects of femoral derotation osteotomies on gait were reported to be small, and those for the tibia are not known. Therefore, the aim of the study was to show the relation between the amount of intraoperative rotation and the changes during gait for osteotomies at femur and tibia levels, and to predict those for the femur from preoperative clinical and gait data. Forty-four patients with spastic cerebral palsy aged between 6 and 19 years were included; 33 limbs received rotation only at the femur, 8 only at the tibia and 12 limbs at both levels. Gait analysis and clinical testing were performed preoperatively and 21.4 (SD=1.8) months postoperatively. The amount of intraoperative derotation of the femur showed no significant correlation with the change in hip rotation during walking (R=-0.17, p=0.25), whereas the rotation of the tibia showed an excellent relationship (R=0.84) with the change in knee rotation during gait. Strength and passive range of motion in hip extension and abduction, as well as hip extension, abduction or foot progression during walking, did not show any predictive significance. In conclusion, the change in knee rotation during gait is directly predictable from the amount of tibial correction; in contrast, the change in hip rotation was not related to the amount of femoral derotation, and its prediction was only fair.

  5. Precise correction to parameter ρ in the littlest Higgs model

    Institute of Scientific and Technical Information of China (English)

    Farshid Tabbak; F.Farnoudi

    2008-01-01

    In this paper, the tree-level violation of the weak isospin parameter ρ in the framework of the littlest Higgs model is studied. The potentially large deviation from the standard model prediction for ρ in terms of the littlest Higgs model parameters is calculated. The maximum value of ρ for f = 1 TeV, c = 0.05, c' = 0.05 and v' = 1.5 GeV is ρ = 1.2973, which represents a large enhancement over the SM.

  6. Predictive 1-D thermal-hydraulic analysis of the prototype HTS current leads for the ITER correction coils

    Science.gov (United States)

    Heller, R.; Bauer, P.; Savoldi, L.; Zanino, R.; Zappatore, A.

    2016-12-01

    We present an analysis of the prototype high-temperature superconducting (HTS) current leads (CLs) for the ITER correction coils, which will operate at 10 kA. A copper heat exchanger (HX) of the meander-flow type is included in the CL design and covers the temperature range between room temperature and 65 K, whereas the HTS module, where Bi-2223 stacked tapes are positioned on the outer surface of a stainless steel hollow cylindrical support, covers the temperature range between 65 K and 4.5 K. The HX is cooled by gaseous helium entering at 50 K, whereas the HTS module is cooled by conduction from the cold end of the CL. We use the CURLEAD code, developed some years ago and now supplemented by a new set of correlations for the helium friction factor and heat transfer coefficient in the HX, recently derived using Computational Fluid Dynamics. Our analysis is aimed first of all at a "blind" design-like prediction of the CL performance, for both steady state and pulsed operation. In particular, the helium mass flow rate needed to guarantee the target temperature at the HX-HTS interface, the temperature profile, and the pressure drop across the HX will be computed. The predictive capabilities of the CURLEAD model are then assessed by comparison of the simulation results with experimental data obtained in the test of the prototype correction coil CLs at ASIPP, whose results were considered only after the simulations were performed.

  7. Traffic Prediction Scheme based on Chaotic Models in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Xiangrong Feng

    2013-09-01

    Based on the local support vector algorithm for chaotic time series analysis, the Hannan-Quinn information criterion and SAX symbolization are introduced, and a novel prediction algorithm is proposed which is successfully applied to the prediction of wireless network traffic. For the problem of correctly predicting short-term flow with a smaller data set size, the weaknesses of such algorithms during model construction are analyzed through study and comparison with the LDK prediction algorithm. It is verified that the Hannan-Quinn information criterion can be used to calculate the number of neighbor points, replacing the previous empirical method and using the number of neighbor points to acquire a more accurate prediction model. Finally, actual flow data are applied to confirm the accuracy rate of the proposed algorithm, LSDHQ. Our experiments also show that it has higher adaptability than the LDK algorithm.

  8. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... of the process in terms of stochastic and deterministic trends as well as stationary components. In particular, the behaviour of the cointegrating relations is described in terms of geometric ergodicity. Despite the fact that no deterministic terms are included, the process will have both stochastic trends...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  9. Oblique corrections in the Dine-Fischler-Srednicki axion model

    Science.gov (United States)

    Katanaeva, Alisa; Espriu, Domènec

    2016-11-01

    In the Minimal Standard Model (MSM) there is no degree of freedom for dark matter. There are several extensions of the MSM introducing a new particle, an invisible axion, which can be regarded as a trustworthy candidate for at least a part of the dark matter component. However, as it is extremely weakly coupled, it cannot be directly measured at the LHC. We propose to explore the electroweak sector indirectly by considering a particular model that includes the axion and deriving consequences that could be experimentally tested. We discuss the Dine-Fischler-Srednicki (DFS) model, which extends the two-Higgs doublet model with an additional Peccei-Quinn symmetry and leads to a physically acceptable axion. The non-linear parametrization of the DFS model is exploited in the generic case where all scalars except the lightest Higgs and the axion have masses at or beyond the TeV scale. We compute the oblique corrections and use their values from the electroweak experimental fits to put constraints on the mass spectrum of the DFS model.

  10. Real time prediction and correction of ADCS problems in LEO satellites using fuzzy logic

    Directory of Open Access Journals (Sweden)

    Yassin Mounir Yassin

    2017-06-01

    This approach is concerned with adapting the operations of the attitude determination and control subsystem (ADCS) of low Earth orbit (LEO) satellites by analyzing the telemetry readings received by the mission control center and then responding to ADCS off-nominal situations. This can be achieved by sending corrective operational telecommands in real time. Our approach assigns fuzzy memberships to off-nominal telemetry readings and maps them to corrective actions through a set of fuzzy rules based on an understanding of the ADCS modes resulting from the satellite telemetry readings. Responding in real time gives us a chance to avoid risky situations. The approach is tested on the EgyptSat-1 engineering model, which is used to simulate the results.
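
    A hedged illustration of what such a fuzzy rule base might look like; the telemetry variable, membership ranges and telecommand names below are invented for the sketch and are not EgyptSat-1 values:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def classify_spin_rate(omega):
        """Fuzzy memberships for an angular-rate telemetry reading (deg/s)."""
        return {
            "nominal": tri(omega, -0.5, 0.0, 0.5),
            "elevated": tri(omega, 0.3, 1.0, 2.0),
            "critical": tri(omega, 1.5, 3.0, 6.0),
        }

    RULES = {  # hypothetical rule base: fuzzy label -> telecommand
        "nominal": "NO_ACTION",
        "elevated": "ENABLE_MAGNETORQUER_DAMPING",
        "critical": "SWITCH_TO_SAFE_MODE",
    }

    reading = 1.7  # deg/s from the latest telemetry frame
    memberships = classify_spin_rate(reading)
    best = max(memberships, key=memberships.get)
    print(memberships, "->", RULES[best])
    ```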

  11. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to a lack of differentiation between the goals of predictive modeling and causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.

  12. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...

  13. One-loop radiative correction to the triple Higgs coupling in the Higgs singlet model

    Science.gov (United States)

    He, Shi-Ping; Zhu, Shou-hua

    2017-01-01

    Though the 125 GeV Higgs boson is consistent with the standard model (SM) prediction so far, the triple Higgs coupling can deviate from its SM value in physics beyond the SM (BSM). In this paper, the radiative correction to the triple Higgs coupling is calculated in the minimal extension of the SM by adding a real gauge singlet scalar. In this model there are two scalars, h and H, and both of them are mixing states of the doublet and the singlet. Provided that the mixing angle is set to zero, namely the SM limit, h is the pure left-over of the doublet and its behavior is the same as in the SM at tree level. However, loop corrections can alter the h-related couplings. In this SM-limit case, the effect of the singlet H may show up in the h-related couplings, especially the triple h coupling. Our numerical results show that the deviation is sizable. For $\lambda_{\Phi S} = 1$ (see text for the parameter definition), the deviation $\delta_{hhh}^{(1)}$ can be 40%. For $\lambda_{\Phi S} = 1.5$, $\delta_{hhh}^{(1)}$ can reach 140%. The sizable radiative correction is mainly caused by three factors: the magnitude of the coupling $\lambda_{\Phi S}$, the light mass of the additional scalar, and the threshold enhancement. The radiative corrections to the hVV and hff couplings come from the counterterms, which are the universal correction in this model and always at O(1%). The hZZ coupling, which can be precisely measured, may be complementary to the triple h coupling in the search for BSM physics. In the optimal case, the triple h coupling is very sensitive to BSM physics, and this model can be tested at future high-luminosity hadron colliders and electron-positron colliders.

  14. Transport Corrections in Nodal Diffusion Codes for HTR Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Abderrafi M. Ougouag; Frederick N. Gleicher

    2010-08-01

    The cores and reflectors of High Temperature Reactors (HTRs) of the Next Generation Nuclear Plant (NGNP) type are predominantly diffusive media from the point of view of neutron behavior and migration between the various structures of the reactor. This means that neutron diffusion theory is sufficient for modeling most features of such reactors, and transport theory may not be needed for most applications. Of course, the above statement assumes the availability of homogenized diffusion theory data. The statement is true for most situations but not all. Two features of NGNP-type HTRs require that the diffusion theory-based solution be corrected for local transport effects. These two cases are the treatment of burnable poisons (BP) in prismatic block reactors and, for both pebble bed reactor (PBR) and prismatic block reactor (PMR) designs, that of control rods (CR) embedded in non-multiplying regions near the interface between fueled zones and those non-multiplying zones. The need for a transport correction arises because diffusion theory-based solutions appear not to provide sufficient fidelity in these situations.

  15. Predictive Model Assessment for Count Data

    Science.gov (United States)

    2007-09-05

    critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. We consider a recent suggestion by Baker and... [Recoverable figure/table captions: Figure 5, boxplots of various scores for the patent-data count regressions; Table 1, four predictive models for larynx cancer counts in Germany, 1998–2002.]

  16. Use of Paired Simple and Complex Models to Reduce Predictive Bias and Quantify Uncertainty

    DEFF Research Database (Denmark)

    Doherty, John; Christensen, Steen

    2011-01-01

    into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology...

  17. A LQP BASED INTERIOR PREDICTION-CORRECTION METHOD FOR NONLINEAR COMPLEMENTARITY PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Bing-sheng He; Li-zhi Liao; Xiao-ming Yuan

    2006-01-01

    To solve nonlinear complementarity problems (NCP), at each iteration the classical proximal point algorithm solves a well-conditioned sub-NCP, while the Logarithmic-Quadratic Proximal (LQP) method solves a system of nonlinear equations (the LQP system). This paper presents a practical LQP-based prediction-correction method for NCP. The predictor is obtained by solving the LQP system approximately under a significantly relaxed restriction, and the new iterate (the corrector) is computed directly by an explicit formula derived from the original LQP method. The implementation is easy to carry out. Global convergence of the method is proved under the same mild assumptions as for the original LQP method. Finally, numerical results for traffic equilibrium problems are provided to verify that the method is effective for some practical problems.

  18. ALTERNATING PROJECTION BASED PREDICTION-CORRECTION METHODS FOR STRUCTURED VARIATIONAL INEQUALITIES

    Institute of Scientific and Technical Information of China (English)

    Bing-sheng He; Li-zhi Liao; Mai-jian Qian

    2006-01-01

    The monotone variational inequalities VI(Ω, F) have vast applications, including optimal control and convex programming. In this paper we focus on the VI problems that have a particular splitting structure and in which the mapping F does not have an explicit form, so that only its function values can be employed in the numerical methods for solving such problems. We study a set of numerical methods that are easily implementable. Each iteration of the proposed methods consists of two procedures. The first (prediction) procedure utilizes alternating projections to produce a predictor. The second (correction) procedure generates the new iterate via some minor computations. Convergence of the proposed methods is proved under mild conditions. Preliminary numerical experiments for some traffic equilibrium problems illustrate the effectiveness of the proposed methods.
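    To make the two-procedure structure concrete, here is a minimal sketch of a classical projection-type prediction-correction iteration, Korpelevich's extragradient method, which uses only function values of F. It illustrates the predictor/corrector pattern described above; it is not the authors' exact alternating-projection scheme, and the toy problem is an assumption:

```python
# Minimal sketch of a projection-type prediction-correction iteration for a
# monotone VI(Omega, F): predict with one projected step, correct by moving
# from the old point using F evaluated at the predictor.
import numpy as np

def extragradient(F, project, x0, step=0.1, tol=1e-10, max_iter=5000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_pred = project(x - step * F(x))      # prediction step
        x_new = project(x - step * F(x_pred))  # correction step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy usage: affine monotone F on the nonnegative orthant.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])  # positive-definite symmetric part
b = np.array([-1.0, -1.0])
F = lambda x: A @ x + b
project = lambda x: np.maximum(x, 0.0)   # projection onto Omega = R^2_+
print(extragradient(F, project, x0=np.zeros(2)))  # ~ [0.2, 0.6]
```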

  19. Consumer Choice Prediction: Artificial Neural Networks versus Logistic Models

    Directory of Open Access Journals (Sweden)

    Christopher Gan

    2005-01-01

    Conventional econometric models, such as discriminant analysis and logistic regression, have been used to predict consumer choice. However, in recent years there has been growing interest in applying artificial neural networks (ANN) to analyse consumer behaviour and to model the consumer decision-making process. The purpose of this paper is to empirically compare the predictive power of the probabilistic neural network (PNN), a special class of neural networks, and a multilayer feed-forward network (MLFN) with a logistic model on consumers' choices between electronic banking and non-electronic banking. Data for this analysis was obtained through a mail survey sent to 1,960 New Zealand households. The questionnaire gathered information on the factors consumers use to decide between electronic banking and non-electronic banking. The factors include service quality dimensions, perceived risk factors, user input factors, price factors, service product characteristics and individual factors. In addition, demographic variables including age, gender, marital status, ethnic background, educational qualification, employment, income and area of residence are considered in the analysis. Empirical results showed that both ANN models (MLFN and PNN) exhibit a higher overall percentage correct on consumer choice predictions than the logistic model. Furthermore, the PNN proves to be the best predictive model, since it has the highest overall percentage correct and very low Type I and Type II error rates.
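    A hedged sketch of this kind of head-to-head comparison, using synthetic data and scikit-learn stand-ins rather than the survey data or the exact PNN architecture:

```python
# Hedged sketch: compare the overall percentage correct of a logistic model
# and a small multilayer feed-forward network on a synthetic binary-choice
# problem standing in for "electronic vs non-electronic banking".
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1500, n_features=12, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
mlfn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)

print("logistic % correct:", logit.score(X_te, y_te))
print("MLFN % correct:   ", mlfn.score(X_te, y_te))
```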

  20. Finite Population Correction for Two-Level Hierarchical Linear Models.

    Science.gov (United States)

    Lai, Mark H C; Kwok, Oi-Man; Hsiao, Yu-Yu; Cao, Qian

    2017-03-16

    The research literature has paid little attention to the issue of finite populations at the higher level in hierarchical linear modeling. In this article, we propose a method to obtain finite-population-adjusted standard errors of Level-1 and Level-2 fixed effects in two-level hierarchical linear models. When the finite population at Level 2 is incorrectly assumed to be infinite, the standard errors of the fixed effects are overestimated, resulting in lower statistical power and wider confidence intervals. The impact of ignoring the finite population correction is illustrated using both a real data example and a simulation study with a random intercept model and a random slope model. Simulation results indicated that the bias in the unadjusted fixed-effect standard errors was substantial when the Level-2 sample size exceeded 10% of the Level-2 population size; the bias increased with a larger intraclass correlation, a larger number of clusters, and a larger average cluster size. We also found that the proposed adjustment produced unbiased standard errors, particularly when the number of clusters was at least 30 and the average cluster size was at least 10. We encourage researchers to consider the characteristics of the target population for their studies and to adjust for finite population when appropriate.
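    For intuition, the classic textbook finite population correction factor is sketched below; it illustrates the direction of the adjustment but is not the level-specific HLM correction the article derives:

```python
# Hedged sketch: the textbook finite population correction (FPC) factor.
# It shows why infinite-population standard errors are too large when a
# sizable fraction n/N of a finite population is sampled; the article's
# Level-1/Level-2 adjustments are more involved than this single factor.
import math

def fpc_adjusted_se(se_infinite, n, N):
    """Shrink a standard error computed under an infinite-population
    assumption, given n sampled units out of a population of N."""
    return se_infinite * math.sqrt((N - n) / (N - 1))

# e.g., 40 clusters sampled from a finite population of 100 clusters
print(fpc_adjusted_se(0.25, n=40, N=100))  # noticeably smaller than 0.25
```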

  1. HEXAHEDRAL ELEMENT REFINEMENT FOR THE PREDICTION-CORRECTION ALE FEM SIMULATION OF 3D BULK FORMING PROCESS

    Institute of Scientific and Technical Information of China (English)

    J. Chen; Y.X. Wang; W.P. Dong; X.Y. Ruan

    2004-01-01

    Based on the characteristics of the 3D bulk forming process, the arbitrary Lagrangian-Eulerian (ALE) formulation-based FEM is studied, and a prediction-correction ALE-based FEM is proposed which combines the advantages of precisely predicting the boundary configuration of the deformed material and of efficiently avoiding hexahedron remeshing. The key idea of the prediction-correction ALE FEM is elaborated in detail. Accordingly, the strategy of mesh quality control, one of the key enabling techniques for numerical simulation of the 3D bulk forming process with the prediction-correction ALE FEM, is carefully investigated, and the algorithm for hexahedral element refinement is formulated based on the mesh distortion energy.

  2. On the importance of appropriate precipitation gauge catch correction for hydrological modelling at mid to high latitudes

    Directory of Open Access Journals (Sweden)

    S. Stisen

    2012-11-01

    We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and that it is of particular importance when using hydrological models to make predictions for future climates in which the snow/rain composition will differ from that of the past climate. This conclusion is expected to be applicable at mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.

  3. Radiative corrections to the Higgs boson couplings in the triplet model

    CERN Document Server

    Aoki, Mayumi; Kikuchi, Mariko; Yagyu, Kei

    2012-01-01

    We calculate a full set of one-loop corrections to the Higgs boson coupling constants as well as the electroweak parameters. We compute the decay rate of the standard model (SM)-like Higgs boson ($h$) into diphoton. Renormalized Higgs couplings with the weak gauge bosons $hVV$ ($V=W$ and $Z$) and the trilinear coupling $hhh$ are also calculated at the one-loop level in the on-shell scheme. Magnitudes of the deviations in these quantities are evaluated in the parameter regions where the unitarity and vacuum stability bounds are satisfied and the predicted W boson mass at the one-loop level is consistent with the data. We find that there are strong correlations among deviations in the Higgs boson couplings $h\gamma\gamma$, $hVV$ and $hhh$. For example, if the event number of the $pp\to h\to\gamma\gamma$ channel deviates by +30% (-40%) from the SM prediction, deviations in the one-loop corrected $hVV$ and $hhh$ vertices are predicted to be about -0.1% (-2%) and -10% (+150%), respectively. The model can be discrimina...

  4. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
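    As a concrete illustration of the approach, the following is a hedged sketch of time-delay embedding plus a local nearest-neighbour predictor in the reconstructed phase space; the embedding parameters and the toy series are assumptions, not the paper's configuration:

```python
# Hedged sketch: delay-embed an observed series and predict one step ahead
# by averaging the successors of the k nearest dynamical neighbours of the
# current state in the reconstructed phase space.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def delay_embed(x, dim=3, tau=2):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def local_predict(x, dim=3, tau=2, k=5):
    states = delay_embed(x, dim, tau)
    targets = x[(dim - 1) * tau + 1:]       # value following each state
    nn = NearestNeighbors(n_neighbors=k).fit(states[:-1])
    _, idx = nn.kneighbors(states[[-1]])    # neighbours of the current state
    return targets[idx[0]].mean()

t = np.linspace(0, 60, 1200)
series = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
print(local_predict(series))
```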

  5. Finite-temperature corrections in the dilated chiral quark model

    CERN Document Server

    Kim, Y; Rho, M; Kim, Youngman; Lee, Hyun Kyu; Rho, Mannque

    1995-01-01

    We calculate the finite-temperature corrections in the dilated chiral quark model using the effective potential formalism. Assuming that the dilaton limit is applicable at some short length scale, we interpret the results to represent the behavior of hadrons in dense and hot matter. We obtain the scaling law $\frac{f_{\pi}(T)}{f_{\pi}} = \frac{m_Q(T)}{m_Q} \simeq \frac{m_{\sigma}(T)}{m_{\sigma}}$, while we argue, using PCAC, that the pion mass does not scale within the temperature range involved in our Lagrangian. It is found that the hadron masses and the pion decay constant drop faster with temperature in the dilated chiral quark model than in the conventional linear sigma model that does not take into account the QCD scale anomaly. We attribute the difference in scaling in a heat bath to the effect of baryonic medium on the thermal properties of the hadrons. Our finding would imply that the AGS experiments (dense and hot matter) and the RHIC experiments (hot and dilute matter) will "see" different hadron...

  6. Structure Corrections in Modeling VLBI Delays for RDV Data

    Science.gov (United States)

    Sovers, Ojars J.; Charlot, Patrick; Fey, Alan L.; Gordon, David

    2002-01-01

    Since 1997, bimonthly S- and X-band observing sessions have been carried out employing the VLBA (Very Long Baseline Array) and as many as ten additional antennas. Maps of the extended structures have been generated for the 160 sources observed in ten of these experiments (approximately 200,000 observations) taking place during 1997 and 1998. This paper reports the results of the first massive application of such structure maps to correct the modeled VLBI (Very Long Baseline Interferometry) delay in astrometric data analysis. For high-accuracy celestial reference frame work, proper choice of a reference point within each extended source is crucial. Here the reference point is taken at the point of maximum emitted flux. Overall, the weighted delay residuals (approximately equal to 30 ps) are reduced by 8 ps in quadrature upon introducing source maps to model the structure delays of the sources. Residuals of some sources with extended or fast-varying structures improve by as much as 40 ps. Scatter of 'arc positions' about a time-linear model decreases substantially for most sources. Based on our results, it is also concluded that source structure is presently not the dominant error source in astrometric/geodetic VLBI.

  7. Nonlinear chaotic model for predicting storm surges

    NARCIS (Netherlands)

    Siek, M.; Solomatine, D.P.

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables.

  8. Universal Finite Size Corrections and the Central Charge in Non-solvable Ising Models

    Science.gov (United States)

    Giuliani, Alessandro; Mastropietro, Vieri

    2013-11-01

    We investigate a non-solvable two-dimensional ferromagnetic Ising model with nearest-neighbor plus weak finite-range interactions of strength λ. We rigorously establish one of the predictions of Conformal Field Theory (CFT), namely that at the critical temperature the finite-size corrections to the free energy are universal, in the sense that they are exactly independent of the interaction. The corresponding central charge, defined in terms of the coefficient of the first subleading term of the free energy, as proposed by Affleck and Blote-Cardy-Nightingale, is constant and equal to 1/2 for all λ within a small but finite convergence radius. This is one of the very few cases where the predictions of CFT can be rigorously verified starting from a microscopic non-solvable statistical model. The proof uses a combination of rigorous renormalization group methods with a novel partition function inequality, valid for ferromagnetic interactions.

  9. Joint PET-MR respiratory motion models for clinical PET motion correction

    Science.gov (United States)

    Manber, Richard; Thielemans, Kris; Hutton, Brian F.; Wan, Simon; McClelland, Jamie; Barnes, Anna; Arridge, Simon; Ourselin, Sébastien; Atkinson, David

    2016-09-01

    Patient motion due to respiration can lead to artefacts and blurring in positron emission tomography (PET) images, in addition to quantification errors. The integration of PET with magnetic resonance (MR) imaging in PET-MR scanners provides complementary clinical information, and allows the use of high spatial resolution and high contrast MR images to monitor and correct motion-corrupted PET data. In this paper we build on previous work to form a methodology for respiratory motion correction of PET data, and show it can improve PET image quality whilst having minimal impact on clinical PET-MR protocols. We introduce a joint PET-MR motion model, using only 1 min per PET bed position of simultaneously acquired PET and MR data to provide a respiratory motion correspondence model that captures inter-cycle and intra-cycle breathing variations. In the model setup, 2D multi-slice MR provides the dynamic imaging component, and PET data, via low spatial resolution framing and principal component analysis, provides the model surrogate. We evaluate different motion models (1D and 2D linear, and 1D and 2D polynomial) by computing model-fit and model-prediction errors on dynamic MR images on a data set of 45 patients. Finally we apply the motion model methodology to 5 clinical PET-MR oncology patient datasets. Qualitative PET reconstruction improvements and artefact reduction are assessed with visual analysis, and quantitative improvements are calculated using standardised uptake value (SUVpeak and SUVmax) changes in avid lesions. We demonstrate the capability of a joint PET-MR motion model to predict respiratory motion by showing significantly improved image quality of PET data acquired before the motion model data. The method can be used to incorporate motion into the reconstruction of any length of PET acquisition, with only 1 min of extra scan time, and with no external hardware required.
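    A hedged sketch of one ingredient of this pipeline, deriving a surrogate signal from low-spatial-resolution PET frames via principal component analysis; the frame matrix is random stand-in data, and the frame/voxel counts are assumptions:

```python
# Hedged sketch: extract a respiratory surrogate from short-framed PET data
# by taking the first principal component of the frame-by-voxel matrix.
# The surrogate is then paired with dynamic MR in the correspondence model.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_frames, n_voxels = 120, 500                   # e.g., 0.5 s frames over 1 min
frames = rng.normal(size=(n_frames, n_voxels))  # stand-in low-res PET frames

frames -= frames.mean(axis=0)                   # centre each voxel time course
surrogate = PCA(n_components=1).fit_transform(frames).ravel()
print(surrogate.shape)                          # one surrogate value per frame
```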

  10. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tomb locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain,...

  11. A variational method for correcting non-systematic errors in numerical weather prediction

    Institute of Scientific and Technical Information of China (English)

    SHAO AiMei; XI Shuang; QIU ChongJian

    2009-01-01

    A variational method based on previous numerical forecasts is developed to estimate and correct the non-systematic component of numerical weather forecast error. In the method, it is assumed that the error is linearly dependent on some combination of the forecast fields, and three types of forecast combination are applied to identifying the forecast error: 1) the forecasts at the ending time, 2) the combination of initial fields and the forecasts at the ending time, and 3) the combination of the forecasts at the ending time and the tendency of the forecast. The Singular Value Decomposition (SVD) of the covariance matrix between the forecast and the forecast error is used to obtain the inverse mapping from flow space to error space during the training period. The background covariance matrix is hereby reduced to a simple diagonal matrix. The method is tested with a shallow-water equation model by introducing two different model errors. The results of error correction for 6, 24 and 48 h forecasts show that the method is effective for improving the quality of the forecast when the forecast error clearly exceeds the analysis error, and that it is optimal when the third type of forecast combination is applied.
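    A hedged sketch of the flavor of this correction, learning a linear forecast-to-error map on a training period with a rank-truncated SVD; this is a simplified stand-in, not the paper's exact SVD-of-cross-covariance construction:

```python
# Hedged sketch: fit a linear map M from forecast-combination space to error
# space on training data, regularised by truncating the SVD; at forecast time
# the estimated error is subtracted from the new forecast.
import numpy as np

def fit_error_map(F, E, rank):
    """F: (m, k) samples of forecast combinations; E: (m, d) forecast errors.
    Returns M with E ~= F @ M via a rank-truncated pseudo-inverse."""
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    return Vt[:rank].T @ np.diag(1.0 / s[:rank]) @ U[:, :rank].T @ E

rng = np.random.default_rng(0)
F_train = rng.normal(size=(200, 30))    # training-period forecast samples
E_train = F_train @ rng.normal(size=(30, 10)) + 0.1 * rng.normal(size=(200, 10))

M = fit_error_map(F_train, E_train, rank=20)
# At forecast time: estimated_error = f_new @ M; corrected = forecast - estimated_error
print(np.linalg.norm(E_train - F_train @ M) / np.linalg.norm(E_train))
```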

  13. Modeling and Correcting the Time-Dependent ACS PSF

    Science.gov (United States)

    Rhodes, Jason; Massey, Richard; Albert, Justin; Taylor, James E.; Koekemoer, Anton M.; Leauthaud, Alexie

    2006-01-01

    The ability to accurately measure the shapes of faint objects in images taken with the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope (HST) depends upon detailed knowledge of the Point Spread Function (PSF). We show that thermal fluctuations cause the PSF of the ACS Wide Field Camera (WFC) to vary over time. We describe a modified version of the TinyTim PSF modeling software to create artificial grids of stars across the ACS field of view at a range of telescope focus values. These models closely resemble the stars in real ACS images. Using 10 bright stars in a real image, we have been able to measure HST's apparent focus at the time of the exposure. TinyTim can then be used to model the PSF at any position on the ACS field of view. This obviates the need for images of dense stellar fields at different focus values, or interpolation between the few observed stars. We show that residual differences between our TinyTim models and real data are likely due to the effects of Charge Transfer Efficiency (CTE) degradation. Furthermore, we discuss stochastic noise that is added to the shape of point sources when distortion is removed, and we present MultiDrizzle parameters that are optimal for weak lensing science. Specifically, we find that reducing the MultiDrizzle output pixel scale and choosing a Gaussian kernel significantly stabilizes the resulting PSF after image combination, while still eliminating cosmic rays/bad pixels and correcting the large geometric distortion in the ACS. We discuss future plans, which include a more detailed study of the effects of CTE degradation on object shapes and releasing our TinyTim models to the astronomical community.

  14. How to Establish Clinical Prediction Models

    Directory of Open Access Journals (Sweden)

    Yong-ho Lee

    2016-03-01

    A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models, with comparable examples from real practice. After model development and rigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive and prognostic research in endocrinology, leading to active applications in real clinical practice.

  15. Terahertz vibrations of crystalline acyclic and cyclic diglycine: benchmarks for London force correction models.

    Science.gov (United States)

    Juliano, Thomas R; Korter, Timothy M

    2013-10-10

    Terahertz spectroscopy provides direct information concerning weak intermolecular forces in crystalline molecular solids and therefore acts as an excellent method for calibrating and evaluating computational models for noncovalent interactions. In this study, the low-frequency vibrations of two dipeptides were compared, acyclic diglycine and cyclic diglycine, as benchmark systems for gauging the performance of semiempirical London force correction approaches. The diglycine samples were investigated using pulsed terahertz spectroscopy from 10 to 100 cm⁻¹ and then analyzed using solid-state density functional theory (DFT) augmented with existing London force corrections, as well as a new parametrization (DFT-DX) based on known experimental values. The two diglycine molecules provide a useful test for the applied models given their similarities, but more importantly the differences in the intermolecular forces displayed by each. It was found that all of the considered London force correction models were able to generate diglycine crystal structures of similar accuracy, but considerable variation occurred in their abilities to predict terahertz-frequency vibrations. The DFT-DX parametrization was particularly successful in this investigation and shows promise for the improved analysis of low-frequency spectra.

  16. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Giebel, G.; Landberg, L. [Risoe National Lab., Roskilde (Denmark); Madsen, H.; Nielsen, H.A. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online a day or two in advance has significant advantages for utilities, such as a better ability to schedule fossil-fuelled power plants and a better position on electricity spot markets. In this paper, prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data are available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: Extended Kalman Filtering, recursive least squares, and a new modified recursive least squares algorithm.
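    A hedged sketch of one of the adaptive estimators mentioned, recursive least squares with a forgetting factor; the regressor layout and forgetting value are illustrative assumptions, not the paper's configuration:

```python
# Hedged sketch: recursive least squares (RLS) with exponential forgetting,
# the kind of adaptive estimator used to track time-varying MOS regressions.
import numpy as np

class RLS:
    def __init__(self, n_params, forgetting=0.99, delta=1000.0):
        self.theta = np.zeros(n_params)    # MOS regression coefficients
        self.P = delta * np.eye(n_params)  # (scaled) inverse information matrix
        self.lam = forgetting

    def update(self, phi, y):
        """phi: regressors (e.g., NWP wind speed and direction terms);
        y: observed power. Returns the one-step prediction error."""
        P_phi = self.P @ phi
        gain = P_phi / (self.lam + phi @ P_phi)
        err = y - phi @ self.theta
        self.theta = self.theta + gain * err
        self.P = (self.P - np.outer(gain, P_phi)) / self.lam
        return err

rls = RLS(n_params=3)
phi = np.array([1.0, 8.5, 0.2])  # bias, predicted speed, a direction term
print(rls.update(phi, y=450.0))  # observed production (kW), toy value
```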

  17. Density-Corrected Models for Gas Diffusivity and Air Permeability in Unsaturated Soil

    DEFF Research Database (Denmark)

    Chamindu, Deepagoda; Møldrup, Per; Schjønning, Per

    2011-01-01

    Accurate prediction of gas diffusivity (Dp/Do) and air permeability (ka) and their variations with air-filled porosity (e) in soil is critical for simulating subsurface migration and emission of climate gases and organic vapors. Gas diffusivity and air permeability measurements from Danish soil... in subsurface soil. The data were regrouped into four categories based on compaction (total porosity above or below 0.4 m3 m-3) and soil texture (volume-based content of clay, silt, and organic matter above or below 15%). The results suggested that soil compaction, more than soil type, was the major control on gas... diffusivity and to some extent also on air permeability. We developed a density-corrected (D-C) Dp(e)/Do model as a generalized form of a previous model for Dp/Do at -100 cm H2O of matric potential (Dp,100/Do). The D-C model performed well across soil types and density levels compared with existing models...

  18. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest... computational resources. The identification method is suitable for predictive control.

  19. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    Science.gov (United States)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques have recently become very popular in the oil industry. A number of computational-intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were performed. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root-mean-square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
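    A hedged sketch of the described workflow, log-transforming the permeability target, holding out 20% of the data, and fitting a feed-forward network; the synthetic features merely stand in for porosity, grain density and the Thomeer parameters:

```python
# Hedged sketch: log-transform the permeability target, split 80/20, and fit
# a feed-forward network; the features and data are synthetic stand-ins.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))   # porosity, grain density, Thomeer params...
log_k = X @ rng.normal(size=5) + 0.1 * rng.normal(size=400)  # log10(k)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_k, test_size=0.2,
                                          random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                     random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print("RMSE in log10(k) units:", rmse)
```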

  20. Coaching, Not Correcting: An Alternative Model for Minority Students

    Science.gov (United States)

    Dresser, Rocío; Asato, Jolynn

    2014-01-01

    The debate on the role of oral corrective feedback or "repair" in English instruction settings has been going on for over 30 years. Some educators believe that oral grammar correction is effective because they have noticed that students who learned a set of grammar rules were more likely to use them in real life communication (Krashen,…

  1. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing p

  2. Childhood asthma prediction models: a systematic review.

    Science.gov (United States)

    Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup

    2015-12-01

    Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.

  3. Spherical systems in models of nonlocally corrected gravity

    CERN Document Server

    Bronnikov, K A

    2009-01-01

    The properties of static, spherically symmetric configurations are considered in the framework of two models of nonlocally corrected gravity, suggested in S. Deser and R. Woodard, Phys. Rev. Lett. 99, 111301 (2007), and S. Capozziello et al., Phys. Lett. B 671, 193 (2009). For the first case, where the Lagrangian of nonlocal origin represents a scalar-tensor theory with two massless scalars, an explicit condition is found under which both scalars are canonical (non-phantom). If this condition does not hold, one of the fields exhibits phantom behavior. Scalar-vacuum configurations then behave in a manner known for scalar-tensor theories. In the second case, the Lagrangian of nonlocal origin exhibits a scalar field interacting with the Gauss-Bonnet (GB) invariant and contains an arbitrary scalar field potential. It is found that the GB term, in general, leads to violation of the well-known no-go theorems valid for minimally coupled scalar fields in general relativity. It is shown, however, that some configu...

  4. A new simpler rotation/curvature correction method for Spalart-Allmaras turbulence model

    Institute of Scientific and Technical Information of China (English)

    Zhang Qiang; Yang Yong

    2013-01-01

    A new and much simpler rotation and curvature effects factor, which takes the form of the Richardson number originally suggested by Hellsten for the SST k-ω model, is presented for Spalart and Shur's rotation and curvature correction in the context of the Spalart-Allmaras (SA) turbulence model. The new factor excludes the Lagrangian derivative of the strain-rate tensor that appears in the SARC model, resulting in a simple, efficient and easy-to-implement approach for the SA turbulence model (denoted SARCM) to account for the effects of system rotation and curvature. The SARCM is then tested on turbulent curved-wall flows: one is the flow over a zero-pressure-gradient curved wall and the other is the channel flow in a duct with a U-turn. Predictions of the SARCM model are compared with experimental data and with the results obtained using the original SA and SARC models. The numerical results show that SARCM can predict the rotation-curvature effects as accurately as SARC, but considerably more efficiently. Additionally, the accuracy of SARCM may depend strongly on the rotation-curvature model constants. Suggested values for those constants are given after some trial and error.
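    For orientation, the Richardson-number-type rotation/curvature factor that Hellsten proposed for the SST k-ω model is commonly written as below; this is quoted as an illustration of the functional form only, and the exact constant and the SARCM variant's definition should be checked against the paper:

```latex
F_4 = \frac{1}{1 + C_{RC}\, Ri}, \qquad
Ri = \frac{W}{S}\left(\frac{W}{S} - 1\right)
```

    where S and W are the magnitudes of the strain-rate and vorticity tensors, and C_RC is an empirical constant (around 1.4 in Hellsten's formulation).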

  5. Exact finite-size corrections for the spanning-tree model under different boundary conditions

    Science.gov (United States)

    Izmailian, N. Sh.; Kenna, R.

    2015-02-01

    We express the partition functions of the spanning-tree model on finite square lattices under five different sets of boundary conditions in terms of a principal partition function with twisted boundary conditions. Based on these expressions, we derive the exact asymptotic expansions of the logarithm of the partition function for each case. We have also established several groups of identities relating spanning-tree partition functions for the different boundary conditions. We also explain an apparent discrepancy between the logarithmic correction terms in the free energy for a two-dimensional spanning-tree model with periodic and free boundary conditions and conformal field theory predictions. We have also obtained the corner free energy for the spanning tree under free boundary conditions, in full agreement with conformal field theory predictions.

  6. Radiative corrections to the Yukawa couplings in two Higgs doublet models

    CERN Document Server

    Kikuchi, Mariko

    2014-01-01

    A pattern of deviations of the coupling constants of the Standard Model (SM)-like Higgs boson from their SM predictions indicates the characteristics of an extended Higgs sector. In particular, Yukawa coupling constants can deviate in different patterns in the four types of Two Higgs Doublet Models (THDMs) with a softly-broken Z_2 symmetry. We can discriminate the types of THDMs by measuring the pattern of these deviations. We calculate the Yukawa coupling constants of the SM-like Higgs boson with radiative corrections in all types of Yukawa interactions, in order to compare with future precision data from the International Linear Collider (ILC). We perform numerical computations of scale factors and evaluate differences between the Yukawa couplings in THDMs and those of the SM at the one-loop level. We find that the scale factors in different types of THDMs do not overlap each other, even in the case with maximum radiative corrections, if the gauge couplings differ from the SM predictions by an amount large enough to be measured at the ILC. Therefore,...

  7. Unascertained measurement classifying model of goaf collapse prediction

    Institute of Scientific and Technical Information of China (English)

    DONG Long-jun; PENG Gang-jian; FU Yu-hua; BAI Yun-fei; LIU You-fang

    2008-01-01

    Based on an optimized forecast method of unascertained classification, an unascertained measurement classifying (UMC) model to predict mining-induced goaf collapse was established. The discriminating factors of the model are influential factors including overburden layer type, overburden layer thickness, the complexity of the geologic structure, the inclination angle of the coal bed, the volume rate of the cavity region, the vertical goaf depth from the surface, and the spatial superposition of layers in the goaf region. The unascertained measurement (UM) function of each factor was calculated. The classification grade of each sample awaiting forecast was determined from the UM distance between the sample's synthesis index and the index of every classification center. The training samples were tested by the established model, and the correct rate is 100%. Furthermore, seven samples awaiting forecast were predicted by the UMC model. The results show that the forecast results are fully consistent with the actual situation.

  8. Prediction of d^0 magnetism in self-interaction corrected density functional theory

    Science.gov (United States)

    Das Pemmaraju, Chaitanya

    2010-03-01

    Over the past couple of years, the phenomenon of "d^0 magnetism" has greatly intrigued the magnetism community [1]. Unlike conventional magnetic materials, "d^0 magnets" lack any magnetic ions with open d or f shells but, surprisingly, exhibit signatures of ferromagnetism, often with a Curie temperature exceeding 300 K. Current research in the field is geared towards trying to understand the mechanism underlying this observed ferromagnetism, which is difficult to explain within the conventional m-J paradigm [1]. The most widely studied class of d^0 materials are un-doped and light-element-doped wide-gap oxides such as HfO2, MgO, ZnO and TiO2, all of which have been put forward as possible d^0 ferromagnets. General experimental trends suggest that the magnetism is a feature of highly defective samples, leading to the expectation that the phenomenon must be defect related. In particular, based on density functional theory (DFT) calculations, acceptor defects formed from the O-2p states in these oxides have been proposed as being responsible for the ferromagnetism [2,3]. However, predicting magnetism originating from 2p orbitals is a delicate problem, which depends on the subtle interplay between covalency and Hund's coupling. DFT calculations based on semi-local functionals such as the local spin-density approximation (LSDA) can lead to qualitative failures on several fronts. On one hand, the excessive delocalization of spin-polarized holes leads to half-metallic ground states and the expectation of room-temperature ferromagnetism. On the other hand, in some cases a magnetic ground state may not be predicted at all, as the Hund's coupling might be underestimated. Furthermore, polaronic distortions, which are often a feature of acceptor defects in oxides, are not predicted [4,5]. In this presentation, we argue that the self-interaction error (SIE) inherent to semi-local functionals is responsible for the failures of LSDA and demonstrate through various examples that beyond

  9. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  10. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

    As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain competitive. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models built by combining four different data mining techniques for churn prediction: backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of each model aims to cluster data into churner and non-churner groups and also to filter out unrepresentative data or outliers. The clustered data are then used by the second technique to assign customers to churner and non-churner groups. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Type I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.
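    A hedged sketch of the hierarchical idea, with KMeans standing in for the SOM/α-FCM first stage and the Cox survival stage noted but omitted; none of this is the authors' implementation:

```python
# Hedged sketch: first-stage clustering groups the data and filters outliers,
# a second-stage network classifies churners. KMeans is a stand-in for
# SOM/alpha-FCM; the final Cox stage (e.g., via lifelines) is not shown.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Stage 1: cluster into two groups; drop points far from their centroid
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
keep = dist < np.quantile(dist, 0.95)   # filter unrepresentative points

# Stage 2: ANN assigns customers to churner / non-churner groups
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X[keep], y[keep])
print("accuracy on filtered data:", clf.score(X[keep], y[keep]))
# Stage 3 (not shown): fit a Cox proportional hazards model on the
# correctly classified customers.
```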

  11. Meteorological Drought Prediction Using a Multi-Model Ensemble Approach

    Science.gov (United States)

    Chen, L.; Mo, K. C.; Zhang, Q.; Huang, J.

    2013-12-01

    In the United States, drought is among the costliest natural hazards, with an annual average of 6 billion dollars in damage. Drought prediction on monthly to seasonal time scales is of critical importance to disaster mitigation, agricultural planning, and multi-purpose reservoir management. Starting in December 2012, the NOAA Climate Prediction Center (CPC) has been providing operational Standardized Precipitation Index (SPI) Outlooks using the National Multi-Model Ensemble (NMME) forecasts, to support CPC's monthly drought outlooks and briefing activities. The current NMME system consists of six model forecasts from U.S. and Canadian modeling centers: the CFSv2, CM2.1, GEOS-5, CCSM3.0, CanCM3, and CanCM4 models. In this study, we conduct an assessment of meteorological drought predictability using the retrospective NMME forecasts for the period from 1982 to 2010. Before predicting SPI, monthly-mean precipitation (P) forecasts from each model were bias corrected and spatially downscaled (BCSD) to regional grids of 0.5-degree resolution over the contiguous United States, based on the probability distribution functions derived from the hindcasts. The corrected P forecasts were then appended to the CPC Unified Precipitation Analysis to form a P time series for computing 3-month and 6-month SPIs. The ensemble SPI forecasts are the equally weighted mean of the six model forecasts. Two performance measures, the anomaly correlation and root-mean-square errors against the observations, are used to evaluate forecast skill. For P forecasts, errors vary among models and skill is generally low after the second month. All model P forecasts have higher skill in winter and lower skill in summer. In wintertime, BCSD improves both P and SPI forecast skill. Most improvements are over the western mountainous regions and along the Great Lakes. Overall, SPI predictive skill is regionally and seasonally dependent. The six-month SPI forecasts are skillful out to four months. For
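    A hedged sketch of the bias-correction step, empirical quantile mapping of forecast precipitation onto the observed climatology; this is a simplified stand-in for the BCSD procedure, with toy gamma-distributed data:

```python
# Hedged sketch: map forecast values through the hindcast CDF onto observed
# quantiles, a simplified version of distribution-based bias correction.
import numpy as np

def quantile_map(fcst, fcst_clim, obs_clim):
    """Empirical quantile mapping of forecasts onto the observed climatology."""
    ranks = np.searchsorted(np.sort(fcst_clim), fcst) / float(len(fcst_clim))
    ranks = np.clip(ranks, 0.001, 0.999)   # avoid the extreme tails
    return np.quantile(obs_clim, ranks)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 40.0, size=900)   # "observed" monthly P (mm), toy data
hind = rng.gamma(2.0, 55.0, size=900)  # model hindcasts with a wet bias
print(quantile_map(np.array([60.0, 120.0]), hind, obs))
```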

  12. On the Accuracy of the GN-Model and on Analytical Correction Terms to Improve It

    CERN Document Server

    Carena, A; Curri, V; Jiang, Y; Poggiolini, P; Forghieri, F

    2014-01-01

    The GN-model has been proposed to provide an approximate but sufficiently accurate tool for predicting uncompensated optical system performance, in realistic scenarios. For this specific use, the GN-model has enjoyed substantial validation, both simulative and experimental. Recently, however, it has been pointed out that its predictions, when used to obtain a detailed physical picture of non-linear noise accumulation along a link, may be affected by substantial error. In addition, it has been pointed out that part of the non-linear interference (NLI) noise variance that it predicts may be ascribed to long-correlation phase noise rather than quasi-additive Gaussian noise. We analyze in detail the GN-model errors and the analytical correction terms that have been proposed to remove them. We extend such analytical results to single-channel non-linearity, and provide integral formulas for both single and cross-channel effects. We carry out a simulative in-depth characterization of phase noise in realistic links. ...

  13. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed...
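    To make the energy-flow idea concrete, here is a hedged sketch of the simplest SEA building block, a steady two-subsystem power balance; the loss and coupling loss factors are illustrative values, not EN 12354 data:

```python
# Hedged sketch: steady-state SEA power balance for two coupled subsystems,
# P_in = omega * (eta_i + eta_ij) * E_i - omega * eta_ji * E_j, solved for E.
import numpy as np

omega = 2 * np.pi * 1000.0    # analysis band centre frequency (rad/s)
eta1, eta2 = 0.01, 0.02       # internal loss factors (illustrative)
eta12, eta21 = 0.003, 0.001   # coupling loss factors (illustrative)
P_in = np.array([1.0, 0.0])   # external power injected into subsystem 1 (W)

A = omega * np.array([[eta1 + eta12, -eta21],
                      [-eta12, eta2 + eta21]])
E = np.linalg.solve(A, P_in)  # subsystem energies (J)
print("subsystem energies:", E)
```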

  14. Massive Predictive Modeling using Oracle R Enterprise

    CERN Document Server

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  15. Correction of approximation errors with Random Forests applied to modelling of aerosol first indirect effect

    Directory of Open Access Journals (Sweden)

    A. Lipponen

    2013-04-01

    In atmospheric models, due to computational time and resource limitations, physical processes have to be simulated using reduced models. The use of a reduced model, however, induces errors in the simulation results. These errors are referred to as approximation errors. In this paper, we propose a novel approach to correct these approximation errors. We model the approximation error as an additive noise process in the simulation model and employ the Random Forest (RF) regression algorithm to construct a computationally low-cost predictor for the approximation error. In this way, the overall simulation problem is decomposed into two separate and computationally efficient problems: solution of the reduced model and prediction of the approximation error realization. The approach is tested for handling approximation errors due to a reduced coarse sectional representation of the aerosol size distribution in a cloud droplet activation calculation. The results show a significant improvement in the accuracy of the simulation compared to the conventional simulation with a reduced model. The proposed approach is rather general, and extending it to different parameterizations or reduced process models coupled to geoscientific models is a straightforward task. Another major benefit of this method is that it can be applied to physical processes that depend on a large number of variables, making them difficult to parameterize by traditional methods.
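    A hedged sketch of the proposed decomposition, training a Random Forest on the approximation error (full minus reduced model output) and adding its prediction back to the reduced model; the toy functions stand in for the aerosol calculation:

```python
# Hedged sketch: treat the approximation error (full - reduced) as the
# regression target for a Random Forest, then correct the reduced model
# by adding the predicted error. Toy functions replace the aerosol code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(2000, 4))
full = lambda X: np.sin(X).sum(axis=1) + 0.2 * (X ** 2).sum(axis=1)
reduced = lambda X: X.sum(axis=1)   # crude reduced model

err = full(X) - reduced(X)          # approximation error samples
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, err)

X_new = rng.uniform(-2, 2, size=(200, 4))
corrected = reduced(X_new) + rf.predict(X_new)
print("RMSE before:", np.sqrt(np.mean((full(X_new) - reduced(X_new)) ** 2)))
print("RMSE after: ", np.sqrt(np.mean((full(X_new) - corrected) ** 2)))
```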

  16. Predictive factors for perioperative blood transfusion in surgeries for correction of idiopathic, neuromuscular or congenital scoliosis

    Directory of Open Access Journals (Sweden)

    Alexandre Fogaça Cristante

    2014-12-01

    OBJECTIVE: To evaluate the association of clinical and demographic variables in patients requiring blood transfusion during elective surgery to treat scoliosis, with the aim of identifying markers predictive of the need for blood transfusion. METHODS: Based on a review of medical charts at a public university hospital, this retrospective study evaluated whether the following variables were associated with the need for red blood cell transfusion (measured by the number of packs used) during scoliosis surgery: scoliotic angle, extent of arthrodesis (number of fused levels), sex of the patient, surgery duration, and type of scoliosis (neuromuscular, congenital or idiopathic). RESULTS: Of the 94 patients evaluated over a 55-month period, none required a massive blood transfusion (most patients needed fewer than two red blood cell packs). The number of packs was not significantly associated with sex or type of scoliosis. The extent of arthrodesis (r = 0.103), surgery duration (r = 0.144) and scoliotic angle (r = 0.004) were weakly correlated with the need for blood transfusion. Linear regression analysis showed an association between the number of spine levels submitted to arthrodesis and the volume of blood used in transfusions (p = 0.001). CONCLUSION: This study did not reveal any evidence of a significant association between the need for red blood cell transfusion and scoliotic angle, sex or surgery duration in scoliosis correction surgery. Submission of more spinal levels to arthrodesis was associated with the use of a greater number of blood packs.

  17. Online Prediction Under Model Uncertainty via Dynamic Model Averaging: Application to a Cold Rolling Mill.

    Science.gov (United States)

    Raftery, Adrian E; Kárný, Miroslav; Ettler, Pavel

    2010-02-01

    We consider the problem of online prediction when it is uncertain what the best prediction model to use is. We develop a method called Dynamic Model Averaging (DMA) in which a state space model for the parameters of each model is combined with a Markov chain model for the correct model. This allows the "correct" model to vary over time. The state space and Markov chain models are both specified in terms of forgetting, leading to a highly parsimonious representation. As a special case, when the model and parameters do not change, DMA is a recursive implementation of standard Bayesian model averaging, which we call recursive model averaging. The method is applied to the problem of predicting the output strip thickness for a cold rolling mill, where the output is measured with a time delay. We found that when only a small number of physically motivated models were considered and one was clearly best, the method quickly converged to the best model, and the cost of model uncertainty was small; indeed DMA performed slightly better than the best physical model. When model uncertainty and the number of models considered were large, our method ensured that the penalty for model uncertainty was small. At the beginning of the process, when control is most difficult, we found that DMA over a large model space led to better predictions than the single best performing physically motivated model. We also applied the method to several simulated examples, and found that it recovered both constant and time-varying regression parameters and model specifications quite well.
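    A hedged sketch of the core DMA bookkeeping, the forgetting-based model-probability update; the per-model state-space (parameter) updates are omitted:

```python
# Hedged sketch: DMA model probabilities are flattened by a forgetting
# exponent, multiplied by each model's one-step predictive likelihood, and
# renormalised. With alpha = 1 this reduces to recursive Bayesian model
# averaging; the per-model state-space parameter updates are not shown.
import numpy as np

def dma_update(weights, pred_likelihoods, alpha=0.99):
    w = (weights ** alpha) * pred_likelihoods
    return w / w.sum()

w = np.ones(3) / 3.0   # three candidate models, equal prior probability
for lik in ([0.8, 0.1, 0.1], [0.7, 0.2, 0.1], [0.1, 0.8, 0.1]):
    w = dma_update(w, np.array(lik))
    print(np.round(w, 3))   # weights track whichever model predicts well
```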

  18. The Asian Correction Can Be Quantitatively Forecasted Using a Statistical Model of Fusion-Fission Processes

    Science.gov (United States)

    Teh, Boon Kin; Cheong, Siew Ann

    2016-01-01

    The Global Financial Crisis of 2007-2008 wiped out US$37 trillion across global financial markets, a value equivalent to the combined GDPs of the United States and the European Union in 2014. The defining moment of this crisis was the failure of Lehman Brothers, which precipitated the October 2008 crash and the Asian Correction (March 2009). Had the Federal Reserve seen these crashes coming, they might have bailed out Lehman Brothers, and prevented the crashes altogether. In this paper, we show that some of these market crashes (like the Asian Correction) can be predicted, if we assume that a large number of adaptive traders employ competing trading strategies. As the number of adherents of some strategies grows, others decline in the constantly changing strategy space. When a strategy group grows into a giant component, trader actions become increasingly correlated, and this is reflected in the stock price. The fragmentation of this giant component leads to a market crash. In this paper, we also derive the mean-field market crash forecast equation based on a model of fusions and fissions in the trading strategy space. By fitting the continuous returns of 20 stocks traded in Singapore Exchange to the market crash forecast equation, we obtain crash predictions ranging from end October 2008 to mid-February 2009, with early warning four to six months prior to the crashes. PMID:27706198

  19. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  8. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  9. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  10. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  11. Dispersion corrections in graphenic systems: a simple and effective model of binding.

    Science.gov (United States)

    Gould, Tim; Lebègue, S; Dobson, John F

    2013-11-06

    We combine high-level theoretical and ab initio understanding of graphite to develop a simple, parametrized force-field model of interlayer binding in graphite, including the difficult non-pairwise-additive coupled-fluctuation dispersion interactions. The model is given as a simple additive correction to standard density functional theory (DFT) calculations, of the form ΔU(D) = f(D)[U_vdW(D) - U_DFT(D)], where D is the interlayer distance. The functions are parametrized by matching contact properties and long-range dispersion to known values, and the model is found to accurately match high-level ab initio results for graphite across a wide range of D values. We employ the correction on the bigraphene binding and graphite exfoliation problems, as well as lithium intercalated graphite LiC6. We predict the binding energy of bigraphene to be 0.27 J m^-2, and the exfoliation energy of graphite to be 0.31 J m^-2, respectively slightly less and slightly more than the bulk layer binding energy of 0.295 J m^-2/layer. Material properties of LiC6 are found to be essentially unchanged compared to the local density approximation. This is appropriate in view of the relative unimportance of dispersion interactions for LiC6 layer binding.
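
    Applying the correction is straightforward once the parametrized functions are in hand. A sketch in which the fitted forms of f, U_vdW and U_DFT are left as user-supplied callables, since the abstract does not give them; the switching function shown is a purely hypothetical illustration.

    ```python
    import numpy as np

    def corrected_energy(D, U_dft, U_vdw, f):
        """Additive dispersion correction of the quoted form:
        U(D) = U_DFT(D) + f(D) * (U_vdW(D) - U_DFT(D)).
        The three callables stand in for the parametrized functions fitted to
        contact properties and the known long-range dispersion limit."""
        D = np.asarray(D, dtype=float)
        return U_dft(D) + f(D) * (U_vdw(D) - U_dft(D))

    # Hypothetical switching function: turns the correction on smoothly at large
    # interlayer distance D, where DFT misses coupled-fluctuation dispersion.
    f_switch = lambda D, D0=4.0, w=0.5: 0.5 * (1.0 + np.tanh((D - D0) / w))
    ```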

  12. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  13. Improving active space telescope wavefront control using predictive thermal modeling

    Science.gov (United States)

    Gersh-Range, Jessica; Perrin, Marshall D.

    2015-01-01

    Active control algorithms for space telescopes are less mature than those for large ground telescopes due to differences in the wavefront control problems. Active wavefront control for space telescopes at L2, such as the James Webb Space Telescope (JWST), requires weighing control costs against the benefits of correcting wavefront perturbations that are a predictable byproduct of the observing schedule, which is known and determined in advance. To improve the control algorithms for these telescopes, we have developed a model that calculates the temperature and wavefront evolution during a hypothetical mission, assuming the dominant wavefront perturbations are due to changes in the spacecraft attitude with respect to the sun. Using this model, we show that the wavefront can be controlled passively by introducing scheduling constraints that limit the allowable attitudes for an observation based on the observation duration and the mean telescope temperature. We also describe the implementation of a predictive controller designed to prevent the wavefront error (WFE) from exceeding a desired threshold. This controller outperforms simpler algorithms even with substantial model error, achieving a lower WFE without requiring significantly more corrections. Consequently, predictive wavefront control based on known spacecraft attitude plans is a promising approach for JWST and other future active space observatories.

  14. A Course in... Model Predictive Control.

    Science.gov (United States)

    Arkun, Yaman; And Others

    1988-01-01

    Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)

  15. Equivalency and unbiasedness of grey prediction models

    Institute of Scientific and Technical Information of China (English)

    Bo Zeng; Chuan Li; Guo Chen; Xianjun Long

    2015-01-01

    In order to deeply research the structure discrepancy and modeling mechanism among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x(0)(k) + az(1)(k) = b have the identical model structure and simulation precision. Moreover, the unbiased simulation for the homogeneous exponential sequence can be accomplished. However, the models derived from dx(1)/dt + ax(1) = b are only close to those derived from x(0)(k) + az(1)(k) = b provided that |a| satisfies |a| < 0.1; neither can the unbiased simulation for the homogeneous exponential sequence be achieved. The above conclusions are proved and verified through some theorems and examples.
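
    A compact sketch of the standard GM(1,1) construction that these theorems concern: estimate (a, b) from the grey difference equation x(0)(k) + az(1)(k) = b by least squares, then simulate through the whitening-equation solution, which, consistent with the abstract, is only approximately unbiased for a homogeneous exponential sequence when |a| is small.

    ```python
    import numpy as np

    def gm11(x0, horizon=0):
        """GM(1,1) grey prediction sketch built from x0(k) + a*z1(k) = b."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                        # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(len(x0) + horizon)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a        # whitening solution
        x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # inverse AGO
        return x0_hat, a, b

    # A homogeneous exponential sequence is recovered almost exactly here because
    # the estimated |a| is small, matching the |a| < 0.1 condition above.
    seq = 2.0 * 1.05 ** np.arange(8)
    print(gm11(seq, horizon=2)[0])
    ```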

  16. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are more or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.

  17. Boolean network model predicts knockout mutant phenotypes of fission yeast.

    Directory of Open Access Journals (Sweden)

    Maria I Davidich

    Full Text Available Boolean networks (or: networks of switches) are extremely simple mathematical models of biochemical signaling networks. Under certain circumstances, Boolean networks, despite their simplicity, are capable of predicting dynamical activation patterns of gene regulatory networks in living cells. For example, the temporal sequence of cell cycle activation patterns in the yeasts S. pombe and S. cerevisiae is faithfully reproduced by Boolean network models. An interesting question is whether this simple model class could also predict a more complex cellular phenomenology, for example, the cell cycle dynamics under various knockout mutants instead of the wild type dynamics only. Here we show that a Boolean network model for the cell cycle control network of the yeast S. pombe correctly predicts viability of a large number of known mutants. So far this had been left to the more detailed differential equation models of the biochemical kinetics of the yeast cell cycle network and was commonly thought to be out of reach for models as simplistic as Boolean networks. The new results support our vision that Boolean networks may complement other mathematical models in systems biology to a larger extent than expected so far, and may fill a gap where simplicity of the model and a preference for an overall dynamical blueprint of cellular regulation, instead of biochemical details, are in the focus.
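
    A minimal sketch of threshold-type synchronous Boolean dynamics of the kind used in such cell-cycle models; the weight matrix W, thresholds, and knockout procedure are schematic placeholders, not the paper's exact fission-yeast network.

    ```python
    import numpy as np

    def boolean_step(state, W, theta):
        """One synchronous update of a threshold Boolean network: a node turns
        on when its weighted input exceeds its threshold, turns off when the
        input is below it, and keeps its value at equality."""
        h = W @ state
        nxt = state.copy()
        nxt[h > theta] = 1
        nxt[h < theta] = 0
        return nxt

    def run_to_fixed_point(state, W, theta, max_steps=50):
        """Iterate to a fixed point. A knockout mutant would be simulated by
        zeroing a node's row and column in W and clamping its state to 0, then
        checking whether the dynamics still reach the viable wild-type state."""
        for _ in range(max_steps):
            nxt = boolean_step(state, W, theta)
            if np.array_equal(nxt, state):
                break
            state = nxt
        return state
    ```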

  18. Boolean Network Model Predicts Knockout Mutant Phenotypes of Fission Yeast

    Science.gov (United States)

    Davidich, Maria I.; Bornholdt, Stefan

    2013-01-01

    Boolean networks (or: networks of switches) are extremely simple mathematical models of biochemical signaling networks. Under certain circumstances, Boolean networks, despite their simplicity, are capable of predicting dynamical activation patterns of gene regulatory networks in living cells. For example, the temporal sequence of cell cycle activation patterns in the yeasts S. pombe and S. cerevisiae is faithfully reproduced by Boolean network models. An interesting question is whether this simple model class could also predict a more complex cellular phenomenology, for example, the cell cycle dynamics under various knockout mutants instead of the wild type dynamics only. Here we show that a Boolean network model for the cell cycle control network of the yeast S. pombe correctly predicts viability of a large number of known mutants. So far this had been left to the more detailed differential equation models of the biochemical kinetics of the yeast cell cycle network and was commonly thought to be out of reach for models as simplistic as Boolean networks. The new results support our vision that Boolean networks may complement other mathematical models in systems biology to a larger extent than expected so far, and may fill a gap where simplicity of the model and a preference for an overall dynamical blueprint of cellular regulation, instead of biochemical details, are in the focus. PMID:24069138

  19. The quark mean field model with pion and gluon corrections

    CERN Document Server

    Xing, Xueyong; Shen, Hong

    2016-01-01

    The properties of nuclear matter and finite nuclei are studied within the quark mean field (QMF) model by taking the effects of pions and gluons into account at the quark level. The nucleon is described as the combination of three constituent quarks confined by a harmonic oscillator potential. To satisfy the spirit of QCD theory, the contributions of pions and gluons to the nucleon structure are treated in second-order perturbation theory. For the nuclear many-body system, nucleons interact with each other by exchanging mesons between quarks. With different constituent quark masses, $m_q$, we determine three parameter sets for the coupling constants between mesons and quarks, named QMF-NK1, QMF-NK2, and QMF-NK3, by fitting the ground-state properties of several closed-shell nuclei. It is found that all of the three parameter sets can give a satisfactory description of the properties of nuclear matter and finite nuclei, while they can also predict a larger neutron star mass around $2.3M_\odot$ without hyperon degrees of freedom.

  20. Quark mean field model with pion and gluon corrections

    Science.gov (United States)

    Xing, Xueyong; Hu, Jinniu; Shen, Hong

    2016-10-01

    The properties of nuclear matter and finite nuclei are studied within the quark mean field (QMF) model by taking the effects of pions and gluons into account at the quark level. The nucleon is described as the combination of three constituent quarks confined by a harmonic oscillator potential. To satisfy the spirit of QCD theory, the contributions of pions and gluons to the nucleon structure are treated in second-order perturbation theory. In a nuclear many-body system, nucleons interact with each other by exchanging mesons between quarks. With different constituent quark masses, mq, we determine three parameter sets for the coupling constants between mesons and quarks, named QMF-NK1, QMF-NK2, and QMF-NK3, by fitting the ground-state properties of several closed-shell nuclei. It is found that all of the three parameter sets can give a satisfactory description of the properties of nuclear matter and finite nuclei; moreover, they also predict a larger neutron star mass around 2.3 M⊙ without hyperon degrees of freedom.

  1. Ionospheric Correction Based on Ingestion of Global Ionospheric Maps into the NeQuick 2 Model

    Directory of Open Access Journals (Sweden)

    Xiao Yu

    2015-01-01

    Full Text Available The global ionospheric maps (GIMs), generated by the Jet Propulsion Laboratory (JPL) and the Center for Orbit Determination in Europe (CODE) over a period of more than 13 years, have been adopted as the primary source of data to provide global ionospheric corrections for possible single-frequency positioning applications. The investigation aims to assess the performance of the new NeQuick model, NeQuick 2, in predicting global total electron content (TEC) by ingesting GIM data from the previous day(s). The results show good performance of the GIM-driven NeQuick model, with on average 86% of vertical TEC errors below 10 TECU when the global daily effective ionization indices (Az) versus modified dip latitude (MODIP) are constructed as a second-order polynomial. The performance of the GIM-driven NeQuick model varies with solar activity and is better during low-solar-activity years. The accuracy of TEC prediction can be improved further by using a four-coefficient function expression of Az versus MODIP. As more measurements from earlier days are involved in the Az optimization procedure, the accuracy may decrease. The results also reveal that more efforts are needed to improve the NeQuick 2 model's capability to represent the ionosphere in the equatorial and high-latitude regions.
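
    The ingestion step reduces to a small fitting problem. A sketch using placeholder (MODIP, Az) pairs that would in practice be derived from the previous day's GIM TEC; the fitted polynomial then drives NeQuick 2 for the current day.

    ```python
    import numpy as np

    # Placeholder data: effective ionization index Az at several MODIP values.
    modip = np.array([-60.0, -30.0, 0.0, 30.0, 60.0])
    az = np.array([95.0, 120.0, 140.0, 118.0, 92.0])

    # Second-order polynomial Az(MODIP) = a0 + a1*mu + a2*mu^2, as in the abstract.
    a2, a1, a0 = np.polyfit(modip, az, deg=2)
    az_model = lambda mu: a0 + a1 * mu + a2 * mu ** 2
    print(az_model(15.0))   # Az value that would be fed to NeQuick 2
    ```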

  2. Hybrid modeling and prediction of dynamical systems

    Science.gov (United States)

    Lloyd, Alun L.; Flores, Kevin B.

    2017-01-01

    Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642

  3. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.

  4. Bias-correction in vector autoregressive models: A simulation study

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    We analyze and compare the properties of various methods for bias-correcting parameter estimates in vector autoregressions. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that this simple and...
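
    To illustrate the kind of analytical correction being compared, here is the scalar AR(1) analogue of the VAR bias formulas (the first-order result E[rho_hat] ≈ rho - (1 + 3*rho)/T); the Monte Carlo setup is only a demonstration, not the paper's simulation design.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ar1_ols(y):
        """OLS estimate of the AR(1) coefficient on demeaned data."""
        y = y - y.mean()
        return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

    rho, T, reps = 0.9, 50, 5000
    est = np.empty(reps)
    for r in range(reps):
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = rho * y[t - 1] + rng.standard_normal()
        est[r] = ar1_ols(y)

    corrected = est + (1.0 + 3.0 * est) / T     # analytical bias correction
    print(f"mean OLS: {est.mean():.3f}, corrected: {corrected.mean():.3f}, true: {rho}")
    ```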

  5. Varactor Modelling for Power Factor Correction in a Varying Load

    Directory of Open Access Journals (Sweden)

    Agwu D. D.

    2016-06-01

    Full Text Available For efficient system operation, it is desirable to keep the power factor at, or very close to, unity. One of the most often used methods is the application of suitable power factor correction technology. Capacitors are good candidates for constant-load power factor correction due to their suitability and cost effectiveness. For varying loads, however, synchronous condensers are preferred despite their high initial cost, because they can supply varying leading or lagging reactive power according to their field excitation. Due to the high acquisition and operation cost of synchronous condensers, this paper presents varactors as a possible alternative for power factor correction. These are diodes that vary their capacitance, and hence their leading reactive power, according to the supply voltage. Applying this involves examining the variation of power factor with supply voltage, and the option of aggregating and harnessing the junction capacitance of varactors for power factor correction of varying loads at low-voltage AC levels. This innovation may lead to great improvement in distribution systems requiring quality power supply.
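
    For reference, the classical fixed-capacitor sizing calculation that a varactor bank would have to reproduce adaptively as the load varies; the numbers in the example are hypothetical.

    ```python
    import math

    def pfc_capacitance(P_watts, pf_old, pf_new, V_rms, freq_hz=50.0):
        """Size a correction capacitor: it must supply leading reactive power
        Qc = P * (tan(phi1) - tan(phi2)), and Qc = 2*pi*f*C*V^2 for a
        capacitor across a V_rms supply."""
        phi1, phi2 = math.acos(pf_old), math.acos(pf_new)
        qc = P_watts * (math.tan(phi1) - math.tan(phi2))
        c = qc / (2 * math.pi * freq_hz * V_rms ** 2)
        return qc, c

    qc, c = pfc_capacitance(10_000, 0.70, 0.95, 230)
    print(f"Qc = {qc:.0f} var, C = {c * 1e6:.0f} uF")
    ```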

  6. Determining Model Correctness for Situations of Belief Fusion

    Science.gov (United States)

    2013-07-01

    cinema together it would seem strange to say that one specific movie is more true than another. However, in this case the term truth can be interpreted in...which means that no state is correct. An example is when two persons try to agree on seeing a movie at the cinema. If their preferences include some

  7. Property predictions using microstructural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.G. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]. E-mail: wangk2@rpi.edu; Guo, Z. [Sente Software Ltd., Surrey Technology Centre, 40 Occam Road, Guildford GU2 7YG (United Kingdom); Sha, W. [Metals Research Group, School of Civil Engineering, Architecture and Planning, The Queen' s University of Belfast, Belfast BT7 1NN (United Kingdom); Glicksman, M.E. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States); Rajan, K. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)

    2005-07-15

    Precipitation hardening in an Fe-12Ni-6Mn maraging steel during overaging is quantified. First, applying our recent kinetic model of coarsening [Phys. Rev. E, 69 (2004) 061507], and incorporating the Ashby-Orowan relationship, we link quantifiable aspects of the microstructures of these steels to their mechanical properties, including especially the hardness. Specifically, hardness measurements allow calculation of the precipitate size as a function of time and temperature through the Ashby-Orowan relationship. Second, calculated precipitate sizes and thermodynamic data determined with Thermo-Calc are used with our recent kinetic coarsening model to extract diffusion coefficients during overaging from hardness measurements. Finally, employing more accurate diffusion parameters, we determined the hardness of these alloys independently from theory, and found agreement with experimental hardness data. Diffusion coefficients determined during overaging of these steels are notably higher than those found during aging - an observation suggesting that precipitate growth during aging and precipitate coarsening during overaging are not controlled by the same diffusion mechanism.

  8. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant, and traditional statistical prediction methods suffer from low precision and poor interpretability: they can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, in combination with theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that generate large cargo volumes and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles, expanding logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  9. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup... that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and hence assumed... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...
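
    A minimal Euler-Maruyama sketch of the SDE idea: the drift f plays the role of the ODE right-hand side while the diffusion g absorbs model misspecification and input errors; the one-compartment example is hypothetical, not taken from the paper.

    ```python
    import numpy as np

    def euler_maruyama(f, g, x0, t_grid, rng):
        """Simulate dX = f(X, t) dt + g(X, t) dW on a given time grid."""
        x = np.empty(len(t_grid))
        x[0] = x0
        for i in range(1, len(t_grid)):
            dt = t_grid[i] - t_grid[i - 1]
            dw = rng.standard_normal() * np.sqrt(dt)
            x[i] = x[i - 1] + f(x[i - 1], t_grid[i - 1]) * dt + g(x[i - 1], t_grid[i - 1]) * dw
        return x

    # Hypothetical one-compartment elimination with multiplicative system noise.
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 24.0, 241)
    path = euler_maruyama(lambda x, t: -0.2 * x, lambda x, t: 0.05 * x, 100.0, t, rng)
    ```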

  10. Precision Plate Plan View Pattern Predictive Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yang; YANG Quan; HE An-rui; WANG Xiao-chen; ZHANG Yun

    2011-01-01

    According to the rolling features of a plate mill, a 3D elastic-plastic FEM (finite element model) based on the full restart method of ANSYS/LS-DYNA was established to study the inhomogeneous plastic deformation of multipass plate rolling. By analyzing the simulation results, differences between the head and tail end predictive models were found and the models modified. According to the numerical simulation results for 120 different conditions, a precision plate plan view pattern predictive model was established. Based on these models, the sizing MAS (Mizushima automatic plan view pattern control system) method was designed and used on a 2 800 mm plate mill. Comparing the rolled plates with and without the PVPP (plan view pattern predictive) model, the reduced width deviation indicates that the plate plan view pattern predictive model is precise.

  11. Prediction of survival with alternative modeling techniques using pseudo values.

    Directory of Open Access Journals (Sweden)

    Tjeerd van der Ploeg

    Full Text Available BACKGROUND: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo values enable statistically appropriate analyses of survival outcomes when used in seven alternative modeling techniques. METHODS: In this case study, we analyzed survival of 1282 Dutch patients with newly diagnosed Head and Neck Squamous Cell Carcinoma (HNSCC) with conventional Kaplan-Meier and Cox regression analysis. We subsequently calculated pseudo values to reflect the individual survival patterns. We used these pseudo values to compare recursive partitioning (RPART), neural nets (NNET), logistic regression (LR), general linear models (GLM) and three variants of support vector machines (SVM) with respect to dichotomous 60-month survival, and continuous pseudo values at 60 months or estimated survival time. We used the area under the ROC curve (AUC) and the root of the mean squared error (RMSE) to compare the performance of these models using bootstrap validation. RESULTS: Of a total of 1282 patients, 986 patients died during a median follow-up of 66 months (60-month survival: 52% [95% CI: 50%-55%]). The LR model had the highest optimism-corrected AUC (0.791) to predict 60-month survival, followed by the SVM model with a linear kernel (AUC 0.787). The GLM model had the smallest optimism-corrected RMSE when continuous pseudo values were considered for 60-month survival or the estimated survival time, followed by SVM models with a linear kernel. The estimated importance of predictors varied substantially by the specific aspect of survival studied and modeling technique used. CONCLUSIONS: The use of pseudo values makes it readily possible to apply alternative modeling techniques to survival problems, to compare their performance and to search further for promising
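
    A self-contained sketch of the pseudo-value construction: jackknife pseudo-observations of the Kaplan-Meier survival probability at a landmark time tau, which can then be handed to any of the listed regression or machine-learning techniques as if they were uncensored outcomes.

    ```python
    import numpy as np

    def km_surv(time, event, tau):
        """Kaplan-Meier estimate of S(tau)."""
        s = 1.0
        for t in np.sort(np.unique(time[event == 1])):
            if t > tau:
                break
            s *= 1.0 - np.sum((time == t) & (event == 1)) / np.sum(time >= t)
        return s

    def pseudo_values(time, event, tau):
        """Jackknife pseudo-observations: theta_i = n*S(tau) - (n-1)*S_minus_i(tau)."""
        n = len(time)
        full = km_surv(time, event, tau)
        keep = np.ones(n, dtype=bool)
        out = np.empty(n)
        for i in range(n):
            keep[i] = False
            out[i] = n * full - (n - 1) * km_surv(time[keep], event[keep], tau)
            keep[i] = True
        return out

    time = np.array([12.0, 30.0, 45.0, 60.0, 72.0, 80.0])   # months (toy data)
    event = np.array([1, 0, 1, 1, 0, 1])                    # 1 = died, 0 = censored
    print(pseudo_values(time, event, tau=60.0))
    ```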

  12. Correcting circulation biases in a lower-resolution global general circulation model with data assimilation

    Science.gov (United States)

    Canter, Martin; Barth, Alexander; Beckers, Jean-Marie

    2016-12-01

    In this study, we aim at developing a new method of bias correction using data assimilation. This method is based on the stochastic forcing of a model to correct bias by directly adding an additional source term into the model equations. This method is presented and tested first with a twin experiment on a fully controlled Lorenz '96 model. It is then applied to the lower-resolution global circulation NEMO-LIM2 model, with both a twin experiment and a real case experiment. Sea surface height observations are used to create a forcing to correct the poorly located and estimated currents. Validation is then performed through the use of other variables such as sea surface temperature and salinity. Results show that the method is able to consistently correct part of the model bias. The bias correction term is presented and is consistent with the limitations of the global circulation model causing bias on the oceanic currents.
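
    A toy twin-experiment sketch of the core idea (our simplification, not the paper's NEMO-LIM2 scheme): run a biased Lorenz '96 model, assimilate observations from an unbiased "truth" run, and accumulate the innovations into an additive source term inserted directly into the model equations.

    ```python
    import numpy as np

    def l96_tendency(x, F):
        """Lorenz '96: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    def step(x, F, b, dt=0.01):
        """Euler step of the model plus an additive bias-correction term b."""
        return x + dt * (l96_tendency(x, F) + b)

    rng = np.random.default_rng(2)
    truth = rng.standard_normal(40)
    model = truth.copy()
    b, gain = np.zeros(40), 0.05
    for _ in range(5000):
        truth = step(truth, 8.0, 0.0)      # "truth" run, forcing F = 8
        model = step(model, 6.0, b)        # biased model, forcing F = 6
        innov = truth - model              # observations assumed perfect here
        b += gain * innov                  # update the correcting source term
        model += 0.5 * innov               # crude assimilation (relax to obs)
    print(b.mean())   # should sit near the missing forcing (about 2)
    ```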

  13. Correcting circulation biases in a lower-resolution global general circulation model with data assimilation

    Science.gov (United States)

    Canter, Martin; Barth, Alexander; Beckers, Jean-Marie

    2017-02-01

    In this study, we aim at developing a new method of bias correction using data assimilation. This method is based on the stochastic forcing of a model to correct bias by directly adding an additional source term into the model equations. This method is presented and tested first with a twin experiment on a fully controlled Lorenz '96 model. It is then applied to the lower-resolution global circulation NEMO-LIM2 model, with both a twin experiment and a real case experiment. Sea surface height observations are used to create a forcing to correct the poorly located and estimated currents. Validation is then performed through the use of other variables such as sea surface temperature and salinity. Results show that the method is able to consistently correct part of the model bias. The bias correction term is presented and is consistent with the limitations of the global circulation model causing bias on the oceanic currents.

  14. NBC Hazard Prediction Model Capability Analysis

    Science.gov (United States)

    1999-09-01

    Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers... to substantial differences in predictions. HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model... SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method - an arbitrary time-dependent concentration field is represented
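
    For context, the textbook kernel behind the Gaussian puff method named in the excerpt (SCIPUFF itself evolves puff moments with second-order turbulence closure; this is only the basic building block):

    ```python
    import numpy as np

    def gaussian_puff(q, x, y, z, t, u, sx, sy, sz):
        """Concentration from one instantaneous puff of mass q advected along x
        at wind speed u, with spreads (sx, sy, sz). An arbitrary time-dependent
        field is represented by summing many such puffs."""
        norm = q / ((2 * np.pi) ** 1.5 * sx * sy * sz)
        return norm * np.exp(-0.5 * (((x - u * t) / sx) ** 2
                                     + (y / sy) ** 2
                                     + (z / sz) ** 2))
    ```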

  15. The SAMI Galaxy Survey: Can we trust aperture corrections to predict star formation?

    CERN Document Server

    Richards, Samuel Nathan; Croom, Scott; Hopkins, Andrew; Schaefer, Adam; Bland-Hawthorn, Joss; Allen, James; Brough, Sarah; Cecil, Gerald; Cortese, Luca; Fogarty, Lisa; Gunawardhana, Madusha; Goodwin, Michael; Green, Andrew; Ho, I-Ting; Kewley, Lisa; Konstantopoulos, Iraklis; Lawrence, Jon; Lorente, Nuria; Medling, Anne; Owers, Matt; Sharp, Rob; Sweet, Sarah; Taylor, Edward

    2015-01-01

    In the low redshift Universe (z<0.3), our view of galaxy evolution is primarily based on fibre optic spectroscopy surveys. Elaborate methods have been developed to address aperture effects when fixed aperture sizes only probe the inner regions for galaxies of ever decreasing redshift or increasing physical size. These aperture corrections rely on assumptions about the physical properties of galaxies. The adequacy of these aperture corrections can be tested with integral-field spectroscopic data. We use integral-field spectra drawn from 1212 galaxies observed as part of the SAMI Galaxy Survey to investigate the validity of two aperture correction methods that attempt to estimate a galaxy's total instantaneous star formation rate. We show that biases arise when assuming that instantaneous star formation is traced by broadband imaging, and when the aperture correction is built only from spectra of the nuclear region of galaxies. These biases may be significant depending on the selection criteria of a survey s...

  16. Heart rate-corrected QT interval helps predict mortality after intentional organophosphate poisoning.

    Directory of Open Access Journals (Sweden)

    Shou-Hsuan Liu

    Full Text Available INTRODUCTION: In this study, we investigated the outcomes for patients with intentional organophosphate poisoning. Previous reports indicate that, in contrast to normal heart rate-corrected QT intervals (QTc), QTc prolongation might be indicative of a poor prognosis for patients exposed to organophosphates. METHODS: We analyzed the records of 118 patients who were referred to Chang Gung Memorial Hospital for management of organophosphate poisoning between 2000 and 2011. Patients were grouped according to their initial QTc interval, i.e., normal (≤0.44 s) or prolonged (>0.44 s). Demographic, clinical, laboratory, and mortality data were obtained for analysis. RESULTS: The incidence of hypotension in patients with prolonged QTc intervals was higher than that in the patients with normal QTc intervals (P = 0.019). By the end of the study, 18 of 118 (15.2%) patients had died, including 3 of 75 (4.0%) patients with normal QTc intervals and 15 of 43 (34.9%) patients with prolonged QTc intervals. Using multivariate Cox regression analysis, we found that hypotension (OR = 10.930, 95% CI = 2.961-40.345, P = 0.000), respiratory failure (OR = 4.867, 95% CI = 1.062-22.301, P = 0.042), coma (OR = 3.482, 95% CI = 1.184-10.238, P = 0.023), and QTc prolongation (OR = 7.459, 95% CI = 2.053-27.099, P = 0.002) were significant risk factors for mortality. Furthermore, it was revealed that non-survivors not only had longer QTc intervals (503.00±41.56 versus 432.71±51.21 ms, P = 0.002), but also suffered higher incidences of hypotension (83.3 versus 12.0%, P = 0.000), shortness of breath (64 versus 94.4%, P = 0.010), bronchorrhea (55 versus 94.4%, P = 0.002), bronchospasm (50.0 versus 94.4%, P = 0.000), respiratory failure (94.4 versus 43.0%, P = 0.000) and coma (66.7 versus 11.0%, P = 0.000) than survivors. Finally, Kaplan-Meier analysis demonstrated that cumulative mortality was higher among patients with prolonged QTc
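
    For context, the usual heart-rate correction (Bazett's formula, a common choice, although the abstract does not state which correction the study used) together with the 0.44 s cutpoint:

    ```python
    def qtc_bazett(qt_s, rr_s):
        """Heart-rate-corrected QT by Bazett's formula: QTc = QT / sqrt(RR)."""
        return qt_s / rr_s ** 0.5

    qtc = qtc_bazett(qt_s=0.40, rr_s=0.75)   # e.g. heart rate 80 bpm -> RR = 0.75 s
    print(f"QTc = {qtc:.3f} s -> {'prolonged' if qtc > 0.44 else 'normal'}")
    ```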

  17. An Evaluation of Information Criteria Use for Correct Cross-Classified Random Effects Model Selection

    Science.gov (United States)

    Beretvas, S. Natasha; Murphy, Daniel L.

    2013-01-01

    The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
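
    The five criteria under comparison have simple closed forms (k estimated parameters, n observations); selection picks the candidate model minimizing the chosen criterion:

    ```python
    import numpy as np

    def information_criteria(loglik, k, n):
        """Usual forms of the five criteria compared in the study."""
        aic = -2 * loglik + 2 * k
        return {
            "AIC": aic,
            "AICC": aic + 2 * k * (k + 1) / (n - k - 1),
            "BIC": -2 * loglik + k * np.log(n),
            "CAIC": -2 * loglik + k * (np.log(n) + 1),
            "HQIC": -2 * loglik + 2 * k * np.log(np.log(n)),
        }

    print(information_criteria(loglik=-512.3, k=8, n=400))
    ```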

  18. Prediction model based on decision tree analysis for laccase mediators.

    Science.gov (United States)

    Medina, Fabiola; Aguila, Sergio; Baratto, Maria Camilla; Martorana, Andrea; Basosi, Riccardo; Alderete, Joel B; Vazquez-Duhalt, Rafael

    2013-01-10

    A Structure Activity Relationship (SAR) study for laccase mediator systems was performed in order to correctly classify different natural phenolic mediators. Decision tree (DT) classification models with a set of five quantum-chemical calculated molecular descriptors were used. These descriptors included redox potential (ɛ°), ionization energy (E(i)), pK(a), enthalpy of formation of radical (Δ(f)H), and OH bond dissociation energy (D(O-H)). The rationale for selecting these descriptors is derived from the laccase-mediator mechanism. To validate the DT predictions, the kinetic constants of different compounds as laccase substrates, their ability for pesticide transformation as laccase-mediators, and radical stability were experimentally determined using Coriolopsis gallica laccase and the pesticide dichlorophen. The prediction capability of the DT model based on three proposed descriptors showed a complete agreement with the obtained experimental results. Copyright © 2012 Elsevier Inc. All rights reserved.
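
    A schematic of the DT classification step, with the five descriptors as features; every numeric value and label below is a placeholder, since the paper's descriptor values come from quantum-chemical calculations.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Columns: redox potential, ionization energy, pKa, radical formation
    # enthalpy, O-H bond dissociation energy (hypothetical values).
    X = np.array([
        [0.79, 7.6, 10.3, 34.2, 87.1],
        [0.61, 7.9,  9.5, 31.0, 82.4],
        [0.95, 8.4, 10.9, 38.7, 90.3],
        [0.55, 7.2,  9.1, 29.8, 80.9],
    ])
    y = np.array([1, 1, 0, 1])   # 1 = effective mediator (hypothetical labels)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(tree.predict([[0.70, 7.8, 9.8, 32.5, 84.0]]))
    ```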

  19. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of methodology validation. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine is calculated in real time.

  20. A correction factor to f-chart predictions of active solar fraction in active-passive heating systems

    Science.gov (United States)

    Evans, B. L.; Beckman, W. A.; Duffie, J. A.; Mitchell, J. W.; Klein, S. A.

    1983-11-01

    The extent to which a passive system degrades the performance of an active solar space heating system was investigated, and a correction factor to account for these interactions was developed. The transient system simulation program TRNSYS is used to simulate the hour-by-hour performance of combined active-passive (hybrid) space heating systems in order to compare the active system performance with simplified design method predictions. The TRNSYS simulations were compared to results obtained using the simplified design calculations of the f-Chart method. Comparisons of TRNSYS and f-Chart were used to establish the accuracy of the f-Charts for active systems. A correlation was then developed to correct the monthly loads input into the f-Chart method to account for controller deadbands in both hybrid and active only buildings. A general correction factor was generated to be applied to the f-Chart method to produce more accurate and useful results for hybrid systems.

  1. Information retrieval for OCR documents: a content-based probabilistic correction model

    Science.gov (United States)

    Jin, Rong; Zhai, ChangXiang; Hauptmann, Alexander

    2003-01-01

    The difficulty with information retrieval for OCR documents lies in the fact that OCR documents contain a significant amount of erroneous words and unfortunately most information retrieval techniques rely heavily on word matching between documents and queries. In this paper, we propose a general content-based correction model that can work on top of an existing OCR correction tool to "boost" retrieval performance. The basic idea of this correction model is to exploit the whole content of a document to supplement any other useful information provided by an existing OCR correction tool for word corrections. Instead of making an explicit correction decision for each erroneous word as typically done in a traditional approach, we consider the uncertainties in such correction decisions and compute an estimate of the original "uncorrupted" document language model accordingly. The document language model can then be used for retrieval with a language modeling retrieval approach. Evaluation using the TREC standard testing collections indicates that our method significantly improves the performance compared with simple word correction approaches such as using only the top ranked correction.
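
    The key idea, an expected "uncorrupted" document language model instead of hard per-word corrections, fits in a few lines; the candidate dictionary here is a hypothetical stand-in for the output of an external OCR-correction tool.

    ```python
    from collections import defaultdict

    def expected_language_model(doc_tokens, candidates):
        """Spread each observed token's count over its candidate corrections
        according to their probabilities, rather than committing to one
        correction, and normalize into a unigram language model."""
        counts = defaultdict(float)
        for tok in doc_tokens:
            for word, prob in candidates.get(tok, {tok: 1.0}).items():
                counts[word] += prob
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    # "tbe" is probably "the" but might be "tube" (hypothetical probabilities).
    lm = expected_language_model(["tbe", "cat"], {"tbe": {"the": 0.9, "tube": 0.1}})
    print(lm)   # {'the': 0.45, 'tube': 0.05, 'cat': 0.5}
    ```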

  2. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

    The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in

  3. Modelling Chemical Reasoning to Predict Reactions

    CERN Document Server

    Segler, Marwin H S

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. We show that our data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-) discovering novel transformations (even including transition-metal catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph, and because each single reaction prediction is typically ac...

  4. AN ACCURATE MODEL FOR CALCULATING CORRECTION OF PATH FLEXURE OF SATELLITE SIGNALS

    Institute of Scientific and Technical Information of China (English)

    Li Yanxing; Hu Xinkang; Shuai Ping; Zhang Zhongfu

    2003-01-01

    The propagation path of satellite signals in the atmosphere is a curve, so it is very difficult to calculate its flexure correction accurately; strict calculating expressions have so far not been derived. In this study, the flexure correction of the refraction curve is divided into two parts and strict calculating expressions are derived for each. Using the standard atmospheric model, the accurate flexure correction of the refraction curve is calculated for different zenith distances Z. On this basis, a calculation model is constructed. This model is very simple in structure, convenient in use and high in accuracy. When Z is smaller than 85°, the accuracy of the correction is better than 0.06 mm. The flexure correction is basically proportional to tan²Z and increases rapidly with increasing Z. When Z≤50°, the correction is smaller than 0.5 mm and can be neglected; when Z>50°, the correction must be made. When Z is 85°, 88° and 89°, the corrections are 198 mm, 8.911 m and 28.497 km, respectively. The calculation results show that the correction estimated by Hopfield is correct when Z≤80°, but too small when Z=89°. The expression in this paper is applicable to any satellite.

  5. AN ACCURATE MODEL FOR CALCULATING CORRECTION OF PATH FLEXURE OF SATELLITE SIGNALS

    Institute of Scientific and Technical Information of China (English)

    Li Yanxing; Hu Xinkang; Shuai Ping; Zhang Zhongfu

    2003-01-01

    The propagation path of satellite signals in the atmosphere is a curve, so it is very difficult to calculate its flexure correction accurately; strict calculating expressions have so far not been derived. In this study, the flexure correction of the refraction curve is divided into two parts and strict calculating expressions are derived for each. Using the standard atmospheric model, the accurate flexure correction of the refraction curve is calculated for different zenith distances Z. On this basis, a calculation model is constructed. This model is very simple in structure, convenient in use and high in accuracy. When Z is smaller than 85°, the accuracy of the correction is better than 0.06 mm. The flexure correction is basically proportional to tan²Z and increases rapidly with increasing Z. When Z≤50°, the correction is smaller than 0.5 mm and can be neglected; when Z>50°, the correction must be made. When Z is 85°, 88° and 89°, the corrections are 198 mm, 8.911 m and 28.497 km, respectively. The calculation results show that the correction estimated by Hopfield is correct when Z≤80°, but too small when Z=89°. The expression in this paper is applicable to any satellite.

  6. Prediction models of prevalent radiographic vertebral fractures among older men.

    Science.gov (United States)

    Schousboe, John T; Rosen, Harold R; Vokes, Tamara J; Cauley, Jane A; Cummings, Steven R; Nevitt, Michael C; Black, Dennis M; Orwoll, Eric S; Kado, Deborah M; Ensrud, Kristine E

    2014-01-01

    No studies have compared how well different prediction models discriminate older men who have a radiographic prevalent vertebral fracture (PVFx) from those who do not. We used area under receiver operating characteristic curves and a net reclassification index to compare how well regression-derived prediction models and nonregression prediction tools identify PVFx among men age ≥65 yr with femoral neck T-score of -1.0 or less enrolled in the Osteoporotic Fractures in Men Study. The area under receiver operating characteristic for a model with age, bone mineral density, and historical height loss (HHL) was 0.682 compared with 0.692 for a complex model with age, bone mineral density, HHL, prior non-spine fracture, body mass index, back pain, grip strength, smoking, and glucocorticoid use (p values for difference in 5 bootstrapped samples 0.14-0.92). This complex model, using a cutpoint prevalence of 5%, correctly reclassified only a net 5.7% (p = 0.13) of men as having or not having a PVFx compared with a simple criteria list (age ≥ 80 yr, HHL >4 cm, or glucocorticoid use). In conclusion, simple criteria identify older men with PVFx as well as regression-based models do. Future research to identify additional risk factors that more accurately identify older men with PVFx is needed.

  7. Allostasis: a model of predictive regulation.

    Science.gov (United States)

    Sterling, Peter

    2012-04-12

    The premise of the standard regulatory model, "homeostasis", is flawed: the goal of regulation is not to preserve constancy of the internal milieu. Rather, it is to continually adjust the milieu to promote survival and reproduction. Regulatory mechanisms need to be efficient, but homeostasis (error-correction by feedback) is inherently inefficient. Thus, although feedbacks are certainly ubiquitous, they could not possibly serve as the primary regulatory mechanism. A newer model, "allostasis", proposes that efficient regulation requires anticipating needs and preparing to satisfy them before they arise. The advantages: (i) errors are reduced in magnitude and frequency; (ii) response capacities of different components are matched -- to prevent bottlenecks and reduce safety factors; (iii) resources are shared between systems to minimize reserve capacities; (iv) errors are remembered and used to reduce future errors. This regulatory strategy requires a dedicated organ, the brain. The brain tracks multitudinous variables and integrates their values with prior knowledge to predict needs and set priorities. The brain coordinates effectors to mobilize resources from modest bodily stores and enforces a system of flexible trade-offs: from each organ according to its ability, to each organ according to its need. The brain also helps regulate the internal milieu by governing anticipatory behavior. Thus, an animal conserves energy by moving to a warmer place - before it cools, and it conserves salt and water by moving to a cooler one before it sweats. The behavioral strategy requires continuously updating a set of specific "shopping lists" that document the growing need for each key component (warmth, food, salt, water). These appetites funnel into a common pathway that employs a "stick" to drive the organism toward filling the need, plus a "carrot" to relax the organism when the need is satisfied. The stick corresponds broadly to the sense of anxiety, and the carrot broadly to

  8. Quantum correction to tiny vacuum expectation value in two Higgs doublet model for Dirac neutrino mass

    CERN Document Server

    Morozumi, Takuya; Tamai, Kotaro

    2011-01-01

    We study a Dirac neutrino mass model of Davidson and Logan. In the model, the smallness of the neutrino mass originates from the small vacuum expectation value of the second Higgs of two Higgs doublets. We study the one-loop effective potential of the Higgs sector and examine how stable the small vacuum expectation value is under radiative corrections. By deriving formulae for the radiative correction, we numerically study how large the one-loop correction is and show how it depends on the quadratic mass terms and quartic couplings of the Higgs potential. The correction changes depending on the various scenarios for the extra Higgs mass spectrum.

  9. Genetic models of homosexuality: generating testable predictions

    OpenAIRE

    Gavrilets, Sergey; Rice, William R.

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality inclu...

  10. Wind farm production prediction - The Zephyr model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)

    2002-06-01

    This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next generation prediction system called Zephyr. The Zephyr system is a merging between two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)

  11. Prediction models in in vitro fertilization; where are we? A mini review

    Directory of Open Access Journals (Sweden)

    Laura van Loendersloot

    2014-05-01

    Full Text Available Since the introduction of in vitro fertilization (IVF) in 1978, over five million babies have been born worldwide using IVF. Contrary to the perception of many, IVF does not guarantee success. Almost 50% of couples that start IVF will remain childless, even if they undergo multiple IVF cycles. The decision to start or continue with IVF is challenging due to the high cost, the burden of the treatment, and the uncertain outcome. Prediction models may play a role in optimal counseling on the chances of pregnancy with IVF, since doctors are not able to correctly predict pregnancy chances. There are three phases of prediction model development: model derivation, model validation, and impact analysis. This review provides an overview of predictive factors in IVF and the available prediction models in IVF, and provides key principles that can be used to critically appraise the literature on prediction models in IVF. We address these points along the three phases of model development.

  12. Harmonic Decomposition Tidal Analysis and Prediction Based on Astronomical Arguments and Nodal Corrections in Persian Gulf, Iran

    Directory of Open Access Journals (Sweden)

    Nasser Najibi

    2013-07-01

    Full Text Available The establishment and maintenance of marine structures and near-shore constructions require sufficient and accurate information about sea level variations, not only at the present time but also as a reliable prediction for the near future. It is therefore necessary to analyze and predict the Mean Sea Level (MSL) for a specific time, considering all possible effects which may modify the accuracy and precision of the results. This study presents tidal harmonic decomposition solutions, based on the first and second methods of solving the Fourier series, to analyze hourly tides from January 2010 and to predict the tides for all days of 2012, considering the astronomical arguments and nodal corrections, at the Bandar-e-Abbas, Kangan Port and Bushehr Port tide gauge stations located in the Persian Gulf in the south of Iran. Moreover, accurate predictions of the Mean Tide Level (MTL) are provided for the whole of 2012 at each tide gauge station by excluding the effects of astronomical arguments and nodal corrections, owing to their unreasonably distorting effects. The MTL fluctuations derived from the predicted results for 2012 and the different phases of the Moon show very good agreement.
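
    A minimal harmonic-decomposition sketch: least-squares fitting of a mean level plus cosine/sine pairs for a few principal constituents, then prediction by evaluating the fitted series at future times. The slow modulation by astronomical arguments and nodal corrections is omitted here for simplicity.

    ```python
    import numpy as np

    def harmonic_fit(t_hours, h, omegas):
        """Fit h(t) ~ Z0 + sum_i [a_i cos(w_i t) + b_i sin(w_i t)], the
        cosine/sine form of A_i cos(w_i t - phi_i); returns a predictor."""
        def design(t):
            cols = [np.ones_like(t)]
            for w in omegas:
                cols += [np.cos(w * t), np.sin(w * t)]
            return np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(design(t_hours), h, rcond=None)
        return lambda t: design(t) @ coef

    # Angular speeds (rad/hour) of four principal constituents: M2, S2, K1, O1.
    omegas = 2 * np.pi / np.array([12.4206, 12.0000, 23.9345, 25.8193])
    t = np.arange(31 * 24.0)                                       # a month of hourly data
    h = 0.5 * np.cos(omegas[0] * t) + 0.2 * np.sin(omegas[2] * t)  # synthetic heights
    predict = harmonic_fit(t, h, omegas)
    h_future = predict(np.arange(24.0 * 366))                      # predict a year ahead
    ```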

  13. Study of statistically correcting model CMAQ-MOS for forecasting regional air quality

    Institute of Scientific and Technical Information of China (English)

    XU Jianming; HE Jinhai; YANG Yuanqin; WANG Jiahe; XU Xiangde; LIU Yu; DING Guoan; CHEN Huailiang; HU Jiangkai; ZHANG Jianchun; WU Hao; LI Weiliang

    2005-01-01

    Based on analysis of the air pollution observational data at 8 observation sites in Beijing, including the outer suburbs, during the period from September 2004 to March 2005, this paper reveals synchronous and in-phase characteristics in the spatial and temporal variation of air pollutants at different sites on the city-proper scale. It describes seasonal differences in the influence of pollutant emissions between the heating and non-heating periods, as well as significant local differences between the urban district and the outer suburbs; i.e., the spatial and temporal distribution of air pollutants is closely related to that of the pollutant emission intensity. This study shows that, due to the complexity of the spatial and temporal distribution of pollution emission sources, the new-generation Community Multi-scale Air Quality (CMAQ) model developed by the US EPA produced forecasts, as other models did, with a systematic error of being significantly lower than observations, although the model has better capability than previous models in predicting the spatial distribution and variation tendency of multiple pollutants. The reason might be that CMAQ adopts an averaged pollutant emission inventory, so that the model has difficulty describing objectively and finely the distribution and variation of pollution emission source intensity on the different spatial and temporal scales of the forecast areas. In order to correct the systematic prediction error resulting from the averaged pollutant emission inventory in CMAQ, this study proposes a new way of combining dynamics and statistics, establishes a statistically correcting model, CMAQ-MOS, for forecasts of regional air quality by utilizing the relationship of CMAQ outputs with corresponding observations, and tests its forecast capability. The experiments show that CMAQ-MOS reduces the systematic errors of CMAQ because of the uncertainty of pollution
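    The MOS idea itself reduces to regressing observations on raw model output over a training period and applying the fitted relation to new forecasts. A minimal sketch, with invented numbers rather than the Beijing data:

```python
# MOS-style statistical correction: fit obs ~ b0 + b1 * model on a training
# window, then correct new raw forecasts with the fitted relation.
import numpy as np

cmaq_train = np.array([40., 55., 70., 62., 48., 80.])   # raw model forecasts
obs_train  = np.array([58., 72., 95., 84., 66., 110.])  # co-located observations

X = np.column_stack([np.ones_like(cmaq_train), cmaq_train])
b, *_ = np.linalg.lstsq(X, obs_train, rcond=None)       # ordinary least squares

def mos_correct(raw_forecast):
    """Apply the fitted linear MOS correction to a raw forecast value."""
    return b[0] + b[1] * raw_forecast

print(mos_correct(65.0))   # corrected forecast for a new raw model value
```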

  14. On-line core monitoring system based on buckling corrected modified one group model

    Energy Technology Data Exchange (ETDEWEB)

    Freire, Fernando S., E-mail: freire@eletronuclear.gov.br [ELETROBRAS Eletronuclear Gerencia de Combustivel Nuclear, Rio de Janeiro, RJ (Brazil)

    2011-07-01

    Nuclear power reactors require core monitoring during plant operation. To provide safe, clean and reliable operation, core conditions must be evaluated continuously. Currently, the reactor core monitoring process is carried out by nuclear code systems that, together with data from plant instrumentation such as thermocouples, ex-core detectors and fixed or movable in-core detectors, can easily predict and monitor a variety of plant conditions. Typically, standard nodal methods can be found at the heart of such nuclear monitoring code systems. However, standard nodal methods require large computer running times when compared with standard coarse-mesh finite difference schemes. Unfortunately, classic finite-difference models require a fine-mesh reactor core representation. To overcome this limitation, the classic modified one-group model can be used to account for the main core neutronic behavior. In this model, a coarse-mesh core representation can be easily evaluated with a crude treatment of thermal neutron leakage. In this work, an improvement to the classic modified one-group model based on a buckling thermal correction was used to obtain a fast, accurate and reliable core monitoring methodology for future applications, providing a powerful tool for the core monitoring process. (author)
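    For reference, the classic modified one-group relation that the record builds on can be written as k_eff = k_inf / (1 + M²B²), with the migration area M² = L² + τ folding thermal-neutron leakage into a coarse-mesh calculation. A small sketch with placeholder numbers (not plant data; the paper's buckling correction itself is not reproduced here):

```python
# Classic modified one-group model: k_eff = k_inf / (1 + M^2 * B^2),
# where the migration area M^2 = L^2 + tau accounts for neutron leakage.
import math

def k_eff_modified_one_group(k_inf, L2, tau, buckling):
    """k_eff from the modified one-group model; L2, tau in cm^2, buckling in cm^-2."""
    M2 = L2 + tau                      # migration area (cm^2)
    return k_inf / (1.0 + M2 * buckling)

def cylinder_buckling(R, H):
    """Geometric buckling of a bare cylinder of radius R and height H (cm)."""
    return (2.405 / R) ** 2 + (math.pi / H) ** 2

B2 = cylinder_buckling(R=180.0, H=360.0)
print(k_eff_modified_one_group(k_inf=1.10, L2=8.0, tau=40.0, buckling=B2))
```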

  15. Simple prediction method of lumbar lordosis for planning of lumbar corrective surgery: radiological analysis in a Korean population.

    Science.gov (United States)

    Lee, Chong Suh; Chung, Sung Soo; Park, Se Jun; Kim, Dong Min; Shin, Seong Kee

    2014-01-01

    This study aimed to derive a lordosis predictive equation using the pelvic incidence and to establish a simple prediction method of lumbar lordosis for planning lumbar corrective surgery in Asians. Eighty-six asymptomatic volunteers were enrolled in the study. The maximal lumbar lordosis (MLL), lower lumbar lordosis (LLL), pelvic incidence (PI), and sacral slope (SS) were measured. The correlations between the parameters were analyzed using Pearson correlation analysis. Predictive equations of lumbar lordosis were derived through simple regression analysis of the parameters, along with simple predictive values of lumbar lordosis using PI. The PI strongly correlated with the SS (r = 0.78), and a strong correlation was found between the SS and LLL (r = 0.89), and between the SS and MLL (r = 0.83). Based on these correlations, the predictive equations of lumbar lordosis were SS = 0.80 + 0.74 PI (r = 0.78, R² = 0.61), LLL = 5.20 + 0.87 SS (r = 0.89, R² = 0.80), and MLL = 17.41 + 0.96 SS (r = 0.83, R² = 0.68). When PI was 30° to 35°, 40° to 50°, or 55° to 60°, the equations predicted that MLL would be PI + 10°, PI + 5° and PI, and that LLL would be PI - 5°, PI - 10° and PI - 15°, respectively. This simple calculation method can provide a more appropriate and simpler prediction of lumbar lordosis for Asian populations. The prediction of lumbar lordosis should be used as a reference for surgeons planning to restore the lumbar lordosis in lumbar corrective surgery.
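    The reported regression chain translates directly into code; the function below is a transcription of the published equations for illustration only, not a clinical tool (all angles in degrees):

```python
# Transcription of the regression chain reported above: PI -> SS -> LLL/MLL.
def predict_lordosis(pi_deg):
    ss  = 0.80 + 0.74 * pi_deg    # sacral slope from pelvic incidence
    lll = 5.20 + 0.87 * ss        # lower lumbar lordosis from sacral slope
    mll = 17.41 + 0.96 * ss       # maximal lumbar lordosis from sacral slope
    return ss, lll, mll

ss, lll, mll = predict_lordosis(50.0)
print(f"SS={ss:.1f}, LLL={lll:.1f}, MLL={mll:.1f}")
# For PI in the 40-50 degree band the paper's shortcut gives MLL ~ PI + 5 and
# LLL ~ PI - 10, roughly consistent with the regression output above.
```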

  16. Predictive model for segmented poly(urea)

    Directory of Open Access Journals (Sweden)

    Frankl P.

    2012-08-01

    Full Text Available Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact, and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is mechanical activation of the glass transition. In order to enable the design of protective structures using this material, a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high-rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM), a mean-field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data, and the resulting equation of state and constitutive model predict the response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. The mechanical response in tensile tests has also been predicted and validated.

  17. Predictive modelling of contagious deforestation in the Brazilian Amazon.

    Directory of Open Access Journals (Sweden)

    Isabel M D Rosa

    Full Text Available Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the future magnitude and geographical distribution of tropical deforestation is uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that the local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated, pre- and post-PPCDAM ("Plano de Ação para Proteção e Controle do Desmatamento na Amazônia"), the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future, and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM conditions (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve by 2050 compared to 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is

  18. Predictive modelling of contagious deforestation in the Brazilian Amazon.

    Science.gov (United States)

    Rosa, Isabel M D; Purves, Drew; Souza, Carlos; Ewers, Robert M

    2013-01-01

    Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the future magnitude and geographical distribution of tropical deforestation is uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that the local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated, pre- and post-PPCDAM ("Plano de Ação para Proteção e Controle do Desmatamento na Amazônia"), the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future, and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM conditions (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve by 2050 compared to 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is currently
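    The contagion mechanism common to both versions of this record can be illustrated with a toy grid model in which each forested cell's annual clearing probability rises with the number of already-deforested neighbours. The grid size and rates below are arbitrary assumptions, not the authors' calibrated parameters:

```python
# Toy contagious-deforestation sketch: clearing probability increases with
# the count of deforested von Neumann neighbours.
import numpy as np

rng = np.random.default_rng(42)
N, YEARS = 100, 20
deforested = np.zeros((N, N), dtype=bool)
deforested[N // 2, N // 2] = True          # seed clearing, e.g. along a road

BASE_P, NEIGHBOUR_BOOST = 0.001, 0.05      # assumed parameters

for year in range(YEARS):
    # count deforested neighbours for every cell (periodic edges for brevity)
    n = (np.roll(deforested, 1, 0).astype(int) + np.roll(deforested, -1, 0)
         + np.roll(deforested, 1, 1) + np.roll(deforested, -1, 1))
    p = np.clip(BASE_P + NEIGHBOUR_BOOST * n, 0.0, 1.0)
    deforested |= rng.random((N, N)) < p   # stochastic clearing this year
    print(year, int(deforested.sum()))     # deforested cells over time
```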

  19. Lorentz factor - Beaming corrected energy/luminosity correlations and GRB central engine models

    Science.gov (United States)

    Yi, Shuang-Xi; Lei, Wei-Hua; Zhang, Bing; Dai, Zi-Gao; Wu, Xue-Feng; Liang, En-Wei

    2017-03-01

    We work on a GRB sample whose initial Lorentz factors (Γ0) are constrained by the afterglow onset method and whose jet opening angles (θj) are determined by the jet break time. We confirm the Γ0-Eγ,iso correlation by Liang et al. (2010) and the Γ0-Lγ,iso correlation by Lü et al. (2012). Furthermore, we find correlations between Γ0 and the beaming-corrected γ-ray energy (Eγ) and mean γ-ray luminosity (Lγ). By also including the kinetic energy of the afterglow, we find rough correlations (with larger scatter) between Γ0 and the total (γ-ray plus kinetic) energy and the total mean luminosity, both for isotropic values and beaming-corrected values; these correlations allow us to test the data against GRB central engine models. Limiting our sample to the GRBs that likely have a black hole central engine, we compare the data with theoretical predictions of two types of jet launching mechanisms from BHs, i.e. the non-magnetized νν̄-annihilation mechanism and the strongly magnetized Blandford-Znajek (BZ) mechanism. We find that the data are more consistent with the latter mechanism, and discuss the implications of our findings for GRB jet composition.

  20. Effective short-range Coulomb correction to model the aggregation behavior of ionic surfactants

    Science.gov (United States)

    Burgos-Mármol, J. Javier; Solans, Conxita; Patti, Alessandro

    2016-06-01

    We present a short-range correction to the Coulomb potential to investigate the aggregation of amphiphilic molecules in aqueous solutions. The proposed modification allows us to quantitatively reproduce the distribution of counterions above the critical micelle concentration (CMC) or, equivalently, the degree of ionization, α, of the micellar clusters. In particular, our theoretical framework has been applied to unveil the behavior of the cationic surfactant C24H49N2O2+ CH3SO4-, which offers a wide range of applications in the thriving and growing personal care market. A reliable and unambiguous estimation of α is essential to correctly understand many crucial features of micellar solutions, such as their viscoelastic behavior and transport properties, in order to provide sound formulations for the above-mentioned personal care applications. We have validated our theory by performing extensive lattice Monte Carlo simulations, which show excellent agreement with experimental observations. More specifically, our coarse-grained model is able to reproduce and predict the complex morphology of the micelles observed at equilibrium. Additionally, our simulation results disclose the existence of a transition from a monodisperse to a bidisperse size distribution of aggregates, unveiling the intriguing existence of a second CMC.
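    The paper's exact functional form is not reproduced in this record, but the general pattern, augmenting a screened Coulomb interaction with an extra term that decays over a short length scale, can be sketched as follows; every parameter value here is an assumption for illustration:

```python
# Generic illustration: screened (Yukawa) Coulomb plus an assumed
# short-range correction A*exp(-r/d)/r acting near ion-head-group contact.
import numpy as np

def coulomb_with_short_range(r, q1, q2, eps=78.5, kappa=0.1, A=2.0, d=0.3):
    """Pair energy in kJ/mol; r in nm, charges in units of e (all assumed values)."""
    ke = 138.935  # Coulomb constant in kJ*nm/(mol*e^2)
    yukawa = ke * q1 * q2 * np.exp(-kappa * r) / (eps * r)
    return yukawa + A * np.exp(-r / d) / r

r = np.linspace(0.2, 3.0, 8)
print(coulomb_with_short_range(r, +1, -1))
```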

  1. Radiative corrections to the Triple Higgs Coupling in the Inert Higgs Doublet Model

    CERN Document Server

    Arhrib, Abdesslam; Falaki, Jaouad El; Jueid, Adil

    2015-01-01

    We investigate the implications of the recent discovery of a Higgs-like particle in the first phase of LHC Run 1 for the Inert Higgs Doublet Model (IHDM). The determination of the Higgs couplings to SM particles and its intrinsic properties will be improved during the new LHC Run 2 starting this year. The new LHC Run 2 would also shed some light on the triple Higgs coupling. Such a measurement is very important in order to establish the details of the electroweak symmetry breaking mechanism. Given the importance of the Higgs couplings both at the LHC and at $e^+e^-$ Linear Collider machines, accurate theoretical predictions are required. We study the radiative corrections to the triple Higgs coupling $hhh$ and to the $hZZ$ and $hWW$ couplings in the context of the IHDM. By combining several theoretical and experimental constraints on the parameter space, we show that extra particles might modify the triple Higgs coupling near threshold regions. Finally, we discuss the effect of these corrections on the double Higgs produ...

  2. Radiative corrections to the triple Higgs coupling in the inert Higgs doublet model

    Science.gov (United States)

    Arhrib, Abdesslam; Benbrik, Rachid; El Falaki, Jaouad; Jueid, Adil

    2015-12-01

    We investigate the implications of the recent discovery of a Higgs-like particle in the first phase of LHC Run 1 for the Inert Higgs Doublet Model (IHDM). The determination of the Higgs couplings to SM particles and its intrinsic properties will be improved during the new LHC Run 2 starting this year. The new LHC Run 2 would also shed some light on the triple Higgs coupling. Such a measurement is very important in order to establish the details of the electroweak symmetry breaking mechanism. Given the importance of the Higgs couplings both at the LHC and at e+e- Linear Collider machines, accurate theoretical predictions are required. We study the radiative corrections to the triple Higgs coupling hhh and to the hZZ and hWW couplings in the context of the IHDM. By combining several theoretical and experimental constraints on the parameter space, we show that extra particles might modify the triple Higgs coupling near threshold regions. Finally, we discuss the effect of these corrections on the double Higgs production signal at the e+e- LC and show that they can be rather important.

  3. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have great homogeneity in making predictions, whether for a stock market of a developed country or for a stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
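    The simplest family member, GARCH(1,1), illustrates what all the candidate models share: a conditional variance recursion driving volatility forecasts. A numpy sketch with assumed parameters (in practice these are estimated by maximum likelihood under one of the candidate error distributions):

```python
# GARCH(1,1) conditional variance recursion and one-step-ahead forecast.
import numpy as np

rng = np.random.default_rng(1)
r = 0.01 * rng.standard_normal(500)        # placeholder log-return series

omega, alpha, beta = 1e-6, 0.08, 0.90      # assumed GARCH(1,1) parameters
sigma2 = np.empty_like(r)
sigma2[0] = r.var()                        # initialise with sample variance
for t in range(1, len(r)):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]

# One-step-ahead conditional variance forecast
sigma2_next = omega + alpha * r[-1] ** 2 + beta * sigma2[-1]
print(f"forecast volatility: {np.sqrt(sigma2_next):.4%}")
```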

  4. Predictive QSAR modeling of phosphodiesterase 4 inhibitors.

    Science.gov (United States)

    Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr

    2012-02-01

    A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. The QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving Q² = 0.66-0.78 for regression models and total accuracies Ac = 0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimating the IC₅₀ of new drug-like candidates at early stages of drug development.
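    The classification workflow described here can be sketched with scikit-learn: a random forest scored by leave-one-out cross-validation. The descriptor data below are synthetic stand-ins for computed molecular descriptors:

```python
# Random forest classifier evaluated by leave-one-out cross-validation,
# mirroring the active/inactive classification setup described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic "descriptor" matrix standing in for real molecular descriptors
X, y = make_classification(n_samples=100, n_features=20, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOO accuracy: {scores.mean():.2f}")   # cf. Ac = 0.85-0.91 in the study
```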

  5. Validating predictions from climate envelope models

    Science.gov (United States)

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.

  6. Establishment of a new tropospheric delay correction model over China area

    Science.gov (United States)

    Song, Shuli; Zhu, Wenyao; Chen, Qinming; Liou, Yueian

    2011-12-01

    The tropospheric delay is one of the main error sources for radio navigation technologies and other ground- or space-based earth observation systems. In this paper, the spatial and temporal variations of the zenith tropospheric delay (ZTD), especially their dependence on altitude over the China region, are analyzed using ECMWF (European Centre for Medium-Range Weather Forecasts) pressure-level atmospheric data from 2004 and ZTD series for 1999-2007 measured at 28 GPS stations from the Crustal Movement Observation Network of China (CMONC). A new tropospheric delay correction model (SHAO) is derived, and a regional realization of this model for the China region, named SHAO-C, is established. In the SHAO-C model, ZTD is modeled directly by a cosine function together with an initial value and an amplitude at a reference height in each grid cell, and the variation of ZTD with altitude is fitted with a second-order polynomial. The coefficients of SHAO-C are generated using meteorological data for the China area and are given at a two-degree latitude and longitude interval, featuring regional characteristics in order to facilitate a wide range of navigation and other surveying applications in and around China. Compared with the EGNOS (European Geostationary Navigation Overlay Service) model, which has been used globally and recommended by the European Union Wide Area Augmentation System, the ZTD prediction (in the form of spatial and temporal projection) accuracy of the SHAO-C model is significantly improved over the China region, especially at stations at higher altitudes. The reasons for the improvement are: (1) the reference altitudes of the SHAO-C parameters are given at the average height of each grid cell, and (2) a more detailed description of the complicated terrain variations in China is incorporated in the model. Therefore, the accumulated error at higher altitude can be reduced considerably. In contrast, the ZTD has to be calculated from mean sea level with EGNOS and other models. Compared with the direct
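    The SHAO-C functional form as described, an annual cosine at a reference height plus a second-order polynomial in altitude, can be sketched as below; every coefficient value is an invented placeholder, not the published grid:

```python
# SHAO-C-like ZTD model sketch: seasonal cosine at a reference height plus a
# second-order polynomial in the height offset. Coefficients are placeholders.
import numpy as np

def ztd_shao_c_like(doy, h, h_ref=500.0,
                    ztd0=2.35, amp=0.12, doy0=28.0,   # cosine at reference height (m)
                    c1=-2.9e-4, c2=1.5e-8):           # assumed height polynomial coeffs
    """Zenith tropospheric delay (metres) at day-of-year doy and altitude h (m)."""
    seasonal = ztd0 + amp * np.cos(2 * np.pi * (doy - doy0) / 365.25)
    dh = h - h_ref
    return seasonal + c1 * dh + c2 * dh ** 2

print(ztd_shao_c_like(doy=180, h=1200.0))
```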

  7. NNLO QCD corrections to the Drell-Yan cross section in models of TeV-scale gravity

    Science.gov (United States)

    Ahmed, Taushif; Banerjee, Pulak; Dhani, Prasanna K.; Kumar, M. C.; Mathews, Prakash; Rana, Narayan; Ravindran, V.

    2017-01-01

    The first results on the complete next-to-next-to-leading order (NNLO) Quantum Chromodynamic (QCD) corrections to the production of di-leptons at hadron colliders in large extra dimension models with spin-2 particles are reported in this article. In particular, we have computed these corrections to the invariant mass distribution of the di-leptons taking into account all the partonic sub-processes that contribute at NNLO. In these models, spin-2 particles couple through the energy-momentum tensor of the Standard Model with the universal coupling strength. The tensorial nature of the interaction and the presence of both quark annihilation and gluon fusion channels at the Born level make it challenging computationally and interesting phenomenologically. We have demonstrated numerically the importance of our results at Large Hadron Collider energies. The two-loop corrections contribute an additional 10% to the total cross section. We find that the QCD corrections are not only large but also important to make the predictions stable under renormalisation and factorisation scale variations, providing an opportunity to stringently constrain the parameters of the models with a spin-2 particle.

  8. NNLO QCD corrections to the Drell-Yan cross section in models of TeV-scale gravity

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Taushif; Banerjee, Pulak; Dhani, Prasanna K.; Rana, Narayan [The Institute of Mathematical Sciences, Chennai, Tamil Nadu (India); Homi Bhabha National Institute, Mumbai (India); Kumar, M.C. [Indian Institute of Technology Guwahati, Department of Physics, Guwahati (India); Mathews, Prakash [Saha Institute of Nuclear Physics, Kolkata, West Bengal (India); Ravindran, V. [The Institute of Mathematical Sciences, Chennai, Tamil Nadu (India)

    2017-01-15

    The first results on the complete next-to-next-to-leading order (NNLO) Quantum Chromodynamic (QCD) corrections to the production of di-leptons at hadron colliders in large extra dimension models with spin-2 particles are reported in this article. In particular, we have computed these corrections to the invariant mass distribution of the di-leptons taking into account all the partonic sub-processes that contribute at NNLO. In these models, spin-2 particles couple through the energy-momentum tensor of the Standard Model with the universal coupling strength. The tensorial nature of the interaction and the presence of both quark annihilation and gluon fusion channels at the Born level make it challenging computationally and interesting phenomenologically. We have demonstrated numerically the importance of our results at Large Hadron Collider energies. The two-loop corrections contribute an additional 10% to the total cross section. We find that the QCD corrections are not only large but also important to make the predictions stable under renormalisation and factorisation scale variations, providing an opportunity to stringently constrain the parameters of the models with a spin-2 particle. (orig.)

  9. Using a Teaching Model To Correct Known Misconceptions in Electrochemistry.

    Science.gov (United States)

    Huddle, Penelope Ann; White, Margaret Dawn; Rogers, Fiona

    2000-01-01

    Describes a concrete teaching model designed to eliminate students' misconceptions about current flow in electrochemistry. The model uses a semi-permeable membrane rather than a salt bridge to complete the circuit and demonstrate the maintenance of cell neutrality. Concludes that use of the model led to improvement in students' understanding at…

  10. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-02-01

    Full Text Available Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study was that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics during the examination of the robustness of the predictive power of these factors.
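    A minimal sketch of the modelling approach, a logistic regression relating a default indicator to TCRI-like and macroeconomic covariates and scored by ROC AUC, with synthetic data standing in for the Taiwanese sample:

```python
# Logistic-regression credit scoring sketch on synthetic data; feature names
# mirror the study's factors but values and coefficients are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(5, 2, n),       # TCRI-like credit risk index
    rng.normal(0.05, 0.1, n),  # asset growth rate
    rng.normal(0.02, 0.03, n), # GDP growth
])
logit = -2.0 + 0.5 * X[:, 0] - 3.0 * X[:, 1] - 8.0 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic default indicator

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```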

  11. Modelling language evolution: Examples and predictions.

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  12. Modelling language evolution: Examples and predictions

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  13. Effects of Degree of Surgical Correction for Flatfoot Deformity in Patient-Specific Computational Models.

    Science.gov (United States)

    Spratley, E M; Matheis, E A; Hayes, C W; Adelaar, R S; Wayne, J S

    2015-08-01

    A cohort of adult acquired flatfoot deformity rigid-body models was developed to investigate the effects of isolated tendon transfer with successive levels of medializing calcaneal osteotomy (MCO). Following IRB approval, six diagnosed flatfoot sufferers were subjected to magnetic resonance imaging (MRI) and their scans used to derive patient-specific models. Single-leg stance was modeled, constrained solely through physiologic joint contact, passive soft-tissue tension, extrinsic muscle force, body weight, and without assumptions of idealized mechanical joints. Surgical effect was quantified using simulated mediolateral (ML) and anteroposterior (AP) X-rays, pedobarography, soft-tissue strains, and joint contact force. Radiographic changes varied across states with the largest average improvements for the tendon transfer (TT) + 10 mm MCO state evidenced through ML and AP talo-1st metatarsal angles. Interestingly, 12 of 14 measures showed increased deformity following TT-only, though all increases disappeared with inclusion of MCO. Plantar force distributions showed medial forefoot offloading concomitant with increases laterally such that the most corrected state had 9.0% greater lateral load. Predicted alterations in spring, deltoid, and plantar fascia soft-tissue strain agreed with prior cadaveric and computational works suggesting decreased strain medially with successive surgical repair. Finally, joint contact force demonstrated consistent medial offloading concomitant with variable increases laterally. Rigid-body modeling thus offers novel advantages for the investigation of foot/ankle biomechanics not easily measured in vivo.

  14. Update of the corrective model for Jason-1 DORIS data in relation to the South Atlantic Anomaly and a corrective model for SPOT-5

    Science.gov (United States)

    Capdeville, Hugues; Štěpánek, Petr; Hecker, Louis; Lemoine, Jean-Michel

    2016-12-01

    After recalling the principle of the Jason-1 data corrective model in relation to the South Atlantic Anomaly (SAA) developed by Lemoine and Capdeville (2006), we present a model update which takes into account the orbit changes and the recent DORIS data. We also propose a method for the International DORIS Service (IDS) Analysis Centers (ACs) to add DORIS Jason-1 data into their solutions in their contribution to the ITRF2014. When the Jason-1 satellite is added to the multi-satellite solution (its orbit inclination of 66° complements the polar-orbiting satellites), the stability of the geocenter Z-translation is improved (standard deviation of 11.5 mm against 16.5 mm). In a second part, we take advantage of a high-energy particle dosimeter (CARMEN) on board Jason-2 to improve the corrective model of Jason-1. A correlation study showed that, among the CARMEN energy bands, the >87 MeV integrated proton flux map averaged over the period 2009-2011 is the most coherent with the map obtained from Jason-1 DORIS measurements. The model based on the Jason-1 map and the one based on the CARMEN map are then compared in terms of orbit determination and station position estimation. We also derive and validate a SPOT-5 data corrective model. We determine the SAA grid at the altitude of SPOT-5 from the frequency time derivative of the on-board frequency offsets and estimate the model parameters. We demonstrate the impact of the SPOT-5 data corrective model on the Precise Orbit Determination and the station position estimation from the weekly solutions, based on two individual Analysis Center solutions, GOP (Geodetic Observatory Pecny) and GRG (Groupe de Recherche de Géodésie Spatiale). The SPOT-5 data corrective model significantly improves the Precise Orbit Determination (reduction of 1.4% in 2013 of the RMS of the fit, reduction of 25% in the normal direction of arc overlap RMS) and the overall statistics of the station position estimation

  15. Analytical model for relativistic corrections to the nuclear magnetic shielding constant in atoms

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Rodolfo H. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)]. E-mail: rhromero@exa.unne.edu.ar; Gomez, Sergio S. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)

    2006-04-24

    We present a simple analytical model for calculating and rationalizing the main relativistic corrections to the nuclear magnetic shielding constant in atoms. It provides good estimates for those corrections and their trends, in reasonable agreement with accurate four-component calculations and perturbation methods. The origin of the effects in deep core atomic orbitals is manifestly shown.

  16. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogeneous or stratified soil. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines and other models applied to fast evaluation of the structural response during excitation and of the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling between horizontal sliding and rocking is discussed.

  17. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function. In this paper we explain the successful bias field correction properties of N3 by showing that it implicitly uses the same generative models and computational strategies as expectation maximization (EM) based bias field correction methods. We demonstrate experimentally that purely EM-based methods are capable of producing bias field correction results comparable to those of N3 in less computation time.

  18. Modeling to Predict Escherichia coli at Presque Isle Beach 2, City of Erie, Erie County, Pennsylvania

    Science.gov (United States)

    Zimmerman, Tammy M.

    2008-01-01

    The Lake Erie beaches in Pennsylvania are a valuable recreational resource for Erie County. Concentrations of Escherichia coli (E. coli) at monitored beaches in Presque Isle State Park in Erie, Pa., occasionally exceed the single-sample bathing-water standard of 235 colonies per 100 milliliters, resulting in potentially unsafe swimming conditions and prompting beach managers to post public advisories or to close beaches to recreation. To supplement the current method for assessing recreational water quality (E. coli concentrations from the previous day), a predictive regression model for E. coli concentrations at Presque Isle Beach 2 was developed from data collected during the 2004 and 2005 recreational seasons. Model output included predicted E. coli concentrations and exceedance probabilities, that is, the probability that E. coli concentrations would exceed the standard. For this study, E. coli concentrations and other water-quality and environmental data were collected during the 2006 recreational season at Presque Isle Beach 2. The data from 2006, an independent year, were used to test (validate) the 2004-2005 predictive regression model and to compare the model's performance to the current method. Using 2006 data, the 2004-2005 model yielded more correct responses and better predicted exceedances of the standard than the use of E. coli concentrations from the previous day. The differences were not pronounced, however, and more data are needed. For example, the model correctly predicted exceedances of the standard 11 percent of the time (1 out of 9 exceedances that occurred in 2006), whereas using the E. coli concentrations from the previous day did not result in any correctly predicted exceedances. After validation, new models were developed by adding the 2006 data to the 2004-2005 dataset and by analyzing the data in 2- and 3-year combinations. Results showed that excluding the 2004 data (using 2005 and 2006 data only) yielded the best model. Explanatory variables in the
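    A typical way such a model converts a regression prediction into an exceedance probability is via a normal error model on the log10 scale. The sketch below assumes an invented predicted value and residual standard error, not the fitted Beach 2 model:

```python
# Exceedance probability from a regression prediction under a normal error
# model on the log10(concentration) scale.
import numpy as np
from scipy.stats import norm

STANDARD = 235.0                               # single-sample standard (CFU/100 mL)

def exceedance_probability(log10_pred, resid_se):
    """P(E. coli > standard) given a log10-scale prediction and residual SE."""
    z = (np.log10(STANDARD) - log10_pred) / resid_se
    return 1.0 - norm.cdf(z)

# e.g. a turbidity-and-rainfall regression predicts log10 conc = 2.1, se = 0.4
print(f"P(exceedance) = {exceedance_probability(2.1, 0.4):.2f}")
```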

  19. Model Predictive Control of Sewer Networks

    Science.gov (United States)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.

    2017-01-01

    The development of solutions for the management of urban drainage is of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world's population and changing climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for the efficient performance of wastewater treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is taken by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  20. Factors Influencing the Completion of the GED in a Federal Correctional Setting a Multiple Regression Correlation-Predictive Study

    Science.gov (United States)

    Akers, Kimberly

    2013-01-01

    Correctional education's primary goal is to reduce recidivism and increase employment among ex-offenders. The Bureau of Prisons' practical goal in its mandatory GED program is to maximize the number of inmates obtaining the GED in a given time period. The purpose of this research is to model the number of instructional hours an inmate requires to…

  1. Technical Note: Bias correcting climate model simulated daily temperature extremes with quantile mapping

    Directory of Open Access Journals (Sweden)

    B. Thrasher

    2012-09-01

    Full Text Available When applying a quantile mapping-based bias correction to daily temperature extremes simulated by a global climate model (GCM), the transformed values of maximum and minimum temperatures are changed, and the diurnal temperature range (DTR) can become physically unrealistic. While the causes are not thoroughly explored, there is a strong relationship between GCM biases in snow albedo feedback during snowmelt and bias correction resulting in unrealistic DTR values. We propose a technique to bias correct DTR, based on comparing observations and GCM historical simulations, and combine that with either bias correcting daily maximum temperatures and calculating daily minimum temperatures, or vice versa. By basing the bias correction on a base period of 1961-1980 and validating it during a test period of 1981-1999, we show that bias correcting DTR and maximum daily temperature can produce more accurate estimations of daily temperature extremes while avoiding the pathological cases of unrealistic DTR values.
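    The underlying quantile-mapping step can be sketched in a few lines: map each model value to the observed value at the same empirical quantile of the historical period. The arrays below are random placeholders for observed and simulated daily maximum temperature:

```python
# Bare-bones empirical quantile mapping for bias correction.
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Bias-correct model_future using the model-vs-obs quantile relation."""
    q = np.linspace(0, 100, 101)
    model_q = np.percentile(model_hist, q)   # model quantiles, base period
    obs_q = np.percentile(obs_hist, q)       # observed quantiles, base period
    # locate each future value's quantile in the historic model distribution,
    # then read off the observed value at that quantile
    return np.interp(model_future, model_q, obs_q)

rng = np.random.default_rng(3)
obs = rng.normal(25, 5, 7300)          # observed Tmax, 1961-1980 (placeholder)
mod = rng.normal(27, 6, 7300)          # GCM Tmax, same period (biased warm)
fut = rng.normal(29, 6, 6900)          # GCM Tmax, 1981-1999 (to be corrected)
print(quantile_map(mod, obs, fut)[:5])
```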

  2. DKIST Polarization Modeling and Performance Predictions

    Science.gov (United States)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large-aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K. Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time-dependent optical configurations and a substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime-sky-based polarization calibrations of the 4 m AEOS telescope and the HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings, creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence of HiVIS to within the few-percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6-month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration

  3. Modelling Chemical Reasoning to Predict Reactions

    OpenAIRE

    Segler, Marwin H. S.; Waller, Mark P.

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outpe...

  4. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert; Knox, James

    2016-01-01

    Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  5. Raman Model Predicting Hardness of Covalent Crystals

    OpenAIRE

    Zhou, Xiang-Feng; Qian, Quang-Rui; Sun, Jian; Tian, Yongjun; Wang, Hui-Tian

    2009-01-01

    Based on the fact that both hardness and vibrational Raman spectrum depend on the intrinsic property of chemical bonds, we propose a new theoretical model for predicting hardness of a covalent crystal. The quantitative relationship between hardness and vibrational Raman frequencies deduced from the typical zincblende covalent crystals is validated to be also applicable for the complex multicomponent crystals. This model enables us to nondestructively and indirectly characterize the hardness o...

  6. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given at the 30th meeting of the Fusarium Working Group. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts

  7. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  8. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given at the 30th meeting of the Fusarium Working Group. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts produ

  9. A Predictive Model for MSSW Student Success

    Science.gov (United States)

    Napier, Angela Michele

    2011-01-01

    This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…

  10. Predictability of extreme values in geophysical models

    NARCIS (Netherlands)

    Sterk, A.E.; Holland, M.P.; Rabassa, P.; Broer, H.W.; Vitolo, R.

    2012-01-01

    Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical model

  11. A revised prediction model for natural conception

    NARCIS (Netherlands)

    Bensdorp, A.J.; Steeg, J.W. van der; Steures, P.; Habbema, J.D.; Hompes, P.G.; Bossuyt, P.M.; Veen, F. van der; Mol, B.W.; Eijkemans, M.J.; Kremer, J.A.M.; et al.,

    2017-01-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis

  12. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...

  13. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given at the 30th meeting of the Fusarium Working Group. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts produ

  14. Leptogenesis in minimal predictive seesaw models

    CERN Document Server

    Björkeroth, Fredrik; Varzielas, Ivo de Medeiros; King, Stephen F

    2015-01-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to

  15. Forecasting inter-urban transport demand for a logistics company: A combined grey–periodic extension model with remnant correction

    Directory of Open Access Journals (Sweden)

    Donghui Wang

    2015-12-01

    Full Text Available Accurately predicting short-term transport demand for an individual logistics company involved in a competitive market is critical for making short-term operation decisions. This article proposes a combined grey-periodic extension model with remnant correction to forecast the short-term inter-urban transport demand of a logistics company involved in a nationwide competitive market, a demand showing changes in trend and seasonal fluctuations with irregular periods different from the macroeconomic cycle. A basic grey-periodic extension model of an additive pattern, namely the main combination model, is first constructed to fit the changing trends and the featured seasonal fluctuation periods. In order to improve prediction accuracy and model adaptability, the grey model is repeatedly re-fitted to the remnant tail time series of the main combination model until the prediction accuracy is satisfactory. The modelling approach is applied to a logistics company engaged in a nationwide less-than-truckload road transportation business in China. The results demonstrate that the proposed modelling approach produces good forecasting results and goodness of fit, also showing good model adaptability to the analysed object in a changing macro environment. This fact makes this modelling approach an option for analysing the short-term transportation demand of an individual logistics company.
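    At the core of the combined approach is the GM(1,1) grey model; below is a compact sketch of its fit-and-forecast recursion, with placeholder demand figures rather than the company's freight volumes:

```python
# GM(1,1) grey model: fit the whitening equation on the accumulated series,
# then difference back to forecast the original series.
import numpy as np

def gm11_forecast(x0, steps):
    """Fit GM(1,1) to series x0 and forecast `steps` values ahead."""
    x1 = np.cumsum(x0)                            # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # develop/grey-input coeffs
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # accumulated prediction
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])        # back to original scale
    x0_hat[0] = x0[0]
    return x0_hat[len(x0):]                            # the forecast horizon only

demand = np.array([820., 860., 910., 980., 1040., 1110.])  # placeholder volumes
print(gm11_forecast(demand, steps=3))
```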

  16. Multiple-relaxation-time model for the correct thermohydrodynamic equations.

    Science.gov (United States)

    Zheng, Lin; Shi, Baochang; Guo, Zhaoli

    2008-08-01

    A coupling lattice Boltzmann equation (LBE) model with multiple relaxation times is proposed for thermal flows with viscous heat dissipation and compression work. In this model the fixed Prandtl number and the viscous dissipation problems in the energy equation, which exist in most of the LBE models, are successfully overcome. The model is validated by simulating the two-dimensional Couette flow, thermal Poiseuille flow, and the natural convection flow in a square cavity. It is found that the numerical results agree well with the analytical solutions and/or other numerical results.

  17. Prior knowledge is more predictive of error correction than subjective confidence.

    Science.gov (United States)

    Sitzman, Danielle M; Rhodes, Matthew G; Tauber, Sarah K

    2014-01-01

    Previous research has demonstrated that, when given feedback, participants are more likely to correct confidently-held errors, as compared with errors held with lower levels of confidence, a finding termed the hypercorrection effect. Accounts of hypercorrection suggest that confidence modifies attention to feedback; alternatively, hypercorrection may reflect prior domain knowledge, with confidence ratings simply correlated with this prior knowledge. In the present experiments, we attempted to adjudicate among these explanations of the hypercorrection effect. In Experiments 1a and 1b, participants answered general knowledge questions, rated their confidence, and received feedback either immediately after rating their confidence or after a delay of several minutes. Although memory for confidence judgments should have been poorer at a delay, the hypercorrection effect was equivalent for both feedback timings. Experiment 2 showed that hypercorrection remained unchanged even when the delay to feedback was increased. In addition, measures of recall for prior confidence judgments showed that memory for confidence was indeed poorer after a delay. Experiment 3 directly compared estimates of domain knowledge with confidence ratings, showing that such prior knowledge was related to error correction, whereas the unique role of confidence was small. Overall, our results suggest that prior knowledge likely plays a primary role in error correction, while confidence may play a small role or merely serve as a proxy for prior knowledge.

  18. Specialized Language Models using Dialogue Predictions

    CERN Document Server

    Popovici, C; Popovici, Cosmin; Baggia, Paolo

    1996-01-01

    This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods of increasing performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll...

  19. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Full Text Available Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between the Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] allowed the predictive capacity of the different multifactor models to be examined. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). The Cariogram is the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.

  20. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  1. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...
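
    To illustrate the FIR prediction such a controller is built on (a sketch with assumed coefficients, not the record's implementation), the snippet below propagates a finite impulse response over a prediction horizon; stacking these predictions into a regularized least-squares problem would yield the control moves.

        import numpy as np

        # Predict y_k = sum_i h[i] * u[k-i] over a horizon from past and
        # candidate future inputs. h is an illustrative impulse response.
        def fir_predict(h, u_past, u_future):
            u = np.concatenate([u_past, u_future])
            n, start = len(h), len(u_past)
            return np.array([h @ u[start + k - n + 1 : start + k + 1][::-1]
                             for k in range(len(u_future))])

        h = np.array([0.5, 0.3, 0.2])        # illustrative FIR coefficients
        u_past = np.array([1.0, 1.0])        # needs at least len(h)-1 samples
        print(fir_predict(h, u_past, np.array([1.0, 0.0, 0.0])))  # [1.0, 0.5, 0.2]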

  2. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    2007-01-01

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogenous or stratified soil with focus on the horizontal sliding and rocking. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines...

  3. Next-to-leading order corrections to the valon model

    Indian Academy of Sciences (India)

    G R Bouroun; E Esfandyari

    2016-01-01

    A seminumerical solution to the valon model at next-to-leading order (NLO) in the Laguerre polynomials is presented. We used the valon model to generate the structure of the proton with the Laguerre polynomials method. The results are compared with H1 data and other parametrizations.

  5. Tests for, origins of, and corrections to non-Gaussian statistics. The dipole-flip model.

    Science.gov (United States)

    Schile, Addison J; Thompson, Ward H

    2017-04-21

    Linear response approximations are central to our understanding and simulations of nonequilibrium statistical mechanics. Despite the success of these approaches in predicting nonequilibrium dynamics, open questions remain. Laird and Thompson [J. Chem. Phys. 126, 211104 (2007)] previously formalized, in the context of solvation dynamics, the connection between the static linear-response approximation and the assumption of Gaussian statistics. The Gaussian statistics perspective is useful in understanding why linear response approximations are still accurate for perturbations much larger than thermal energies. In this paper, we use this approach to address three outstanding issues in the context of the "dipole-flip" model, which is known to exhibit nonlinear response. First, we demonstrate how non-Gaussian statistics can be predicted from purely equilibrium molecular dynamics (MD) simulations (i.e., without resort to a full nonequilibrium MD as is the current practice). Second, we show that the Gaussian statistics approximation may also be used to identify the physical origins of nonlinear response residing in a small number of coordinates. Third, we explore an approach for correcting the Gaussian statistics approximation for nonlinear response effects using the same equilibrium simulation. The results are discussed in the context of several other examples of nonlinear responses throughout the literature.

  6. Specification Search for Identifying the Correct Mean Trajectory in Polynomial Latent Growth Models

    Science.gov (United States)

    Kim, Minjung; Kwok, Oi-Man; Yoon, Myeongsun; Willson, Victor; Lai, Mark H. C.

    2016-01-01

    This study investigated the optimal strategy for model specification search under the latent growth modeling (LGM) framework, specifically on searching for the correct polynomial mean or average growth model when there is no a priori hypothesized model in the absence of theory. In this simulation study, the effectiveness of different starting…

  7. Observation-based correction of dynamical models using thermostats

    Science.gov (United States)

    Frank, Jason; Leimkuhler, Benedict

    2017-01-01

    Models used in simulation may give accurate short-term trajectories but distort long-term (statistical) properties. In this work, we augment a given approximate model with a control law (a ‘thermostat’) that gently perturbs the dynamical system to target a thermodynamic state consistent with a set of prescribed (possibly evolving) observations. As proof of concept, we provide an example involving a point vortex fluid model on the sphere, for which we show convergence of equilibrium quantities (in the stationary case) and the ability of the thermostat to dynamically track a transient state. PMID:28265197

  8. ENSO Prediction using Vector Autoregressive Models

    Science.gov (United States)

    Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.

    2013-12-01

    A recent comparison (Barnston et al., 2012, BAMS) shows the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are a VAR(1) method, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e., 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows the model, advancing one month at a time, to perform at least as well for a 6-month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993, J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150-year cross-validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
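
    A minimal sketch of fitting a VAR(L) model by least squares on lagged state vectors, in the spirit of the abstract (synthetic data; the lag order and state dimension are illustrative):

        import numpy as np

        # Fit x_t = A1 x_(t-1) + ... + AL x_(t-L) by least squares.
        def fit_var(X, L):
            T, d = X.shape                 # X: (T, d) state vectors, e.g. SST anomalies
            Z = np.hstack([X[L - l : T - l] for l in range(1, L + 1)])
            A, *_ = np.linalg.lstsq(Z, X[L:], rcond=None)
            return A                       # (L*d, d): stacked lag matrices

        rng = np.random.default_rng(0)
        X = rng.standard_normal((240, 5))  # 20 years of monthly toy data
        A = fit_var(X, L=12)
        x_next = np.hstack([X[-l] for l in range(1, 13)]) @ A  # one-month forecast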

  9. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH.

  10. Genetic models of homosexuality: generating testable predictions.

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-12-22

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.

  11. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction.

  12. A Study On Distributed Model Predictive Consensus

    CERN Document Server

    Keviczky, Tamas

    2008-01-01

    We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.

  13. Automation of electroweak NLO corrections in general models

    Energy Technology Data Exchange (ETDEWEB)

    Lang, Jean-Nicolas [Universitaet Wuerzburg (Germany)

    2016-07-01

    I discuss the automation of the generation of scattering amplitudes in general quantum field theories at next-to-leading order in perturbation theory. The work is based on Recola, a highly efficient one-loop amplitude generator for the Standard Model, which I have extended so that it can deal with general quantum field theories. Internally, Recola computes off-shell currents, and for new models new rules for off-shell currents emerge which are derived from the Feynman rules. My work relies on the UFO format, which can be obtained by a suitable model builder, e.g. FeynRules. I have developed tools to derive the necessary counterterm structures and to perform the renormalization within Recola in an automated way. I describe the procedure using the example of the two-Higgs-doublet model.

  14. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    R. G. SILVA

    1999-03-01

    Full Text Available A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation, and along with the algebraic model equations are included as constraints in a nonlinear programming (NLP problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.

  15. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a "model supplement term" when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapon's response. In the validation analysis it is indicated that the model tends to "exaggerate" the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model

  16. A Geometrical-based Vertical Gain Correction for Signal Strength Prediction of Downtilted Base Station Antennas in Urban Areas

    DEFF Research Database (Denmark)

    Rodriguez, Ignacio; Nguyen, Huan Cong; Sørensen, Troels Bundgaard

    2012-01-01

    , with electrical antenna downtilt in the range from 0 to 10 degrees, as well as predictions based on ray-tracing and 3D building databases covering the measurement area. Although the calibrated ray-tracing predictions are highly accurate compared with the measured data, the combined LOS/NLOS COST-WI model...

  17. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress of prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  18. [Application of five atmospheric correction models for Landsat TM data in vegetation remote sensing].

    Science.gov (United States)

    Song, Wei-wei; Guan, Dong-sheng

    2008-04-01

    Based on the Landsat TM image of northeast Guangzhou City and north Huizhou City on July 18, 2005, and compared with apparent reflectance model, five atmospheric correction models including four dark object subtraction models and 6S model were evaluated from the aspects of vegetation reflectance, surface reflectance, and normalized difference vegetation index (NDVI). The results showed that the dark object subtraction model DOS4 produced the highest accurate vegetation reflectance, and had the largest information loads for surface reflectance and NDVI, being the best for the atmospheric correction in the study areas. It was necessary to analyze and to compare different models to find out an appropriate model for atmospheric correction in the study of other areas.

  19. Bias correction of temperature and precipitation data for regional climate model application to the Rhine basin

    Directory of Open Access Journals (Sweden)

    W. Terink

    2009-08-01

    Full Text Available In many climate impact studies hydrological models are forced with meteorological forcing data without an attempt to assess the quality of these forcing data. The objective of this study is to compare downscaled ERA15 (ECMWF reanalysis data) precipitation and temperature with observed precipitation and temperature and to apply a bias correction to these forcing variables. The bias-corrected precipitation and temperature data will be used in another study as input for the Variable Infiltration Capacity (VIC) model. Observations were available for 134 sub-basins throughout the Rhine basin at a temporal resolution of one day from the International Commission for the Hydrology of the Rhine basin (CHR). Precipitation is corrected by fitting the mean and coefficient of variation (CV) of the observations. Temperature is corrected by fitting the mean and standard deviation of the observations. The uncorrected ERA15 appears to be too warm and too wet for most of the Rhine basin. The bias correction leads to satisfactory results: precipitation and temperature differences decreased significantly. Corrections were largest during summer for both precipitation and temperature, and in September and October for precipitation only. Besides the statistics the correction method was intended to correct for, it was also found to improve the correlations for the fraction of wet days and the lag-1 autocorrelations between ERA15 and the observations.
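
    A minimal per-series sketch of the two corrections described (temperature: match the observed mean and standard deviation; precipitation: match the observed mean and CV, here via the same shift-and-scale with clipping at zero as a simplification of the study's per-sub-basin fitting):

        import numpy as np

        def correct_temperature(t_mod, t_obs):
            # Shift and scale so the corrected series matches the observed
            # mean and standard deviation.
            return t_obs.mean() + (t_mod - t_mod.mean()) * (t_obs.std() / t_mod.std())

        def correct_precipitation(p_mod, p_obs):
            # Matching mean and standard deviation also matches the CV
            # (CV = std/mean); clipping negatives slightly perturbs both.
            scaled = p_obs.mean() + (p_mod - p_mod.mean()) * (p_obs.std() / p_mod.std())
            return np.clip(scaled, 0.0, None)

        rng = np.random.default_rng(0)
        t_obs, t_mod = rng.normal(9.0, 7.0, 365), rng.normal(10.5, 6.0, 365)
        print(correct_temperature(t_mod, t_obs).mean())  # ~9.0, the observed mean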

  20. Correction factor to account for dispersion in sharp-interface models of terrestrial freshwater lenses and active seawater intrusion

    Science.gov (United States)

    Werner, Adrian D.

    2017-04-01

    In this paper, a recent analytical solution that describes the steady-state extent of freshwater lenses adjacent to gaining rivers in saline aquifers is improved by applying an empirical correction for dispersive effects. Coastal aquifers experiencing active seawater intrusion (i.e., seawater is flowing inland) are presented as an analogous situation to the terrestrial freshwater lens problem, although the inland boundary in the coastal aquifer situation must represent both a source of freshwater and an outlet of saline groundwater. This condition corresponds to the freshwater river in the terrestrial case. The empirical correction developed in this research applies to situations of flowing saltwater and static freshwater lenses, although freshwater recirculation within the lens is a prominent consequence of dispersive effects, just as seawater recirculates within the stable wedges of coastal aquifers. The correction is a modification of a previous dispersive correction for Ghyben-Herzberg approximations of seawater intrusion (i.e., stable seawater wedges). Comparison between the sharp interface from the modified analytical solution and the 50% saltwater concentration from numerical modelling, using a range of parameter combinations, demonstrates the applicability of both the original analytical solution and its corrected form. The dispersive correction allows for a prediction of the depth to the middle of the mixing zone within about 0.3 m of numerically derived values, at least on average for the cases considered here. It is demonstrated that the uncorrected form of the analytical solution should be used to calculate saltwater flow rates, which closely match those obtained through numerical simulation. Thus, a combination of the unmodified and corrected analytical solutions should be utilized to explore both the saltwater fluxes and lens extent, depending on the dispersiveness of the problem. The new method developed in this paper is simple to apply and offers a

  1. Pressure prediction model for compression garment design.

    Science.gov (United States)

    Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q

    2010-01-01

    Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of fabric, applied strain (as a function of reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burning scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garments' manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by the compression garments before fabrication; and 2) to propose more design procedures in clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
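
    A sketch of the Laplace-law relation the model builds on, P = T / r with r = C / (2*pi); expressing the fabric tension per unit width as Young's modulus x strain x cross-sectional area / width is an assumption for illustration, not necessarily the paper's exact formulation, and the numbers are invented.

        import math

        def garment_pressure(youngs_modulus_pa, strain, fabric_area_m2,
                             fabric_width_m, body_circumference_m):
            # Tension per unit width (assumed decomposition), then Laplace's law.
            tension = youngs_modulus_pa * strain * fabric_area_m2 / fabric_width_m
            radius = body_circumference_m / (2 * math.pi)
            return tension / radius  # Pa

        # Illustrative values: 5 cm wide, 0.5 mm thick elastic fabric at 20% strain.
        p = garment_pressure(1.3e6, 0.20, 2.5e-5, 0.05, 0.30)
        print(f"{p:.0f} Pa (~{p / 133.3:.1f} mmHg)")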

  2. Standard Model-like corrections to Dilatonic Dynamics

    DEFF Research Database (Denmark)

    Antipin, Oleg; Krog, Jens; Mølgaard, Esben

    2013-01-01

    We examine the effects of standard model-like interactions on the near-conformal dynamics of a theory featuring a dilatonic state identified with the standard model-like Higgs. As template for near-conformal dynamics we use a gauge theory with fermionic matter and elementary mesons possessing...... the same non-abelian global symmetries as a technicolor-like theory with matter in a complex representation of the gauge group. We then embed the electroweak gauge group within the global flavor structure and add also ordinary quark-like states to mimic the effects of the top. We find that the standard...

  3. Resibufogenin corrects hypertension in a rat model of human preeclampsia.

    Science.gov (United States)

    Vu, Hop; Ianosi-Irimie, Monica; Danchuk, Svitlana; Rabon, Edd; Nogawa, Toshihiko; Kamano, Yoshiaki; Pettit, G Robert; Wiese, Thomas; Puschett, Jules B

    2006-02-01

    The study of the pathogenesis of preeclampsia has been hampered by a relative dearth of animal models. We developed a rat model of preeclampsia in which the excretion of a circulating inhibitor of Na/K ATPase, marinobufagenin (MBG), is elevated. These animals develop hypertension, proteinuria, and intrauterine growth restriction. The administration of a congener of MBG, resibufogenin (RBG), reduces blood pressure to normal in these animals, as is the case when given to pregnant animals rendered hypertensive by the administration of MBG. Studies of Na/K ATPase inhibition by MBG and RBG reveal that these agents are equally effective as inhibitors of the enzyme.

  4. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    Science.gov (United States)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2016-06-01

    Additive Manufacturing (AM) is a rapid manufacturing process whose input data can come from various sources, such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and 3D scanner data. From CT/MRI data, Biomedical Additive Manufacturing (Bio-AM) models can be manufactured. The Bio-AM model gives a better lead on the preplanning of oral and maxillofacial surgery; however, manufacturing an accurate Bio-AM model remains an unsolved problem. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible and derives a correction factor for Bio-AM models produced with the Fused Deposition Modelling (FDM) technique. In the present work, dry mandible CT images are acquired by a CT scanner and converted into a 3D CAD model in the form of an STL model. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is considered the dimensional error, and the ratio of STL to Bio-AM model dimensions is considered the correction factor. This correction factor helps to fabricate AM models with accurate dimensions of the patient anatomy. Such true-dimensional Bio-AM models increase the safety and accuracy of pre-planning in oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM AM machine is 1.003 and the dimensional error is limited to 0.3 %.
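
    Numerically, applying the reported factor is a one-line scaling (assuming, as stated, that the factor is the ratio of STL to printed Bio-AM dimensions, so printed parts come out slightly undersized):

        CORRECTION_FACTOR = 1.003  # STL dimension / printed Bio-AM dimension

        def prescale_for_printing(target_mm):
            # Enlarge the CAD dimension so the printed part lands on the target.
            return target_mm * CORRECTION_FACTOR

        def correct_measurement(printed_mm):
            # Or correct a printed dimension back toward the true anatomy.
            return printed_mm * CORRECTION_FACTOR

        print(prescale_for_printing(100.0))  # a 100.3 mm STL prints at ~100 mm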

  5. Investigation of turbulence models with compressibility corrections for hypersonic boundary flows

    Directory of Open Access Journals (Sweden)

    Han Tang

    2015-12-01

    Full Text Available The applications of pressure work, pressure-dilatation, and dilatation-dissipation (Sarkar, Zeman, and Wilcox models) to hypersonic boundary flows are investigated. The flat plate boundary layer flows of Mach number 5-11 and shock wave/boundary layer interactions of compression corners are simulated numerically. For the flat plate boundary layer flows, the original turbulence models overestimate the heat flux at Mach numbers up to 10, and compressibility corrections applied to the turbulence models lead to a decrease in friction coefficients and heating rates. The pressure work and pressure-dilatation models yield the better results. Among the three dilatation-dissipation models, the Sarkar and Wilcox corrections present larger deviations from the experimental measurements, while the Zeman correction can achieve acceptable results. For hypersonic compression corner flows, due to the evident increase of turbulence Mach number in the separation zone, compressibility corrections enlarge the separation areas and thus cannot improve the accuracy of the calculated results; it is unreasonable for compressibility corrections to take effect in the separation zone. The density-corrected model by Catris and Aupoix is suitable for shock wave/boundary layer interaction flows: it improves the simulation accuracy of the peak heating and has little influence on the separation zone.

  6. COGNITIVE MODELS OF PREDICTION THE DEVELOPMENT OF A DIVERSIFIED CORPORATION

    Directory of Open Access Journals (Sweden)

    Baranovskaya T. P.

    2016-10-01

    Full Text Available The application of classical forecasting methods to a diversified corporation faces certain difficulties due to its economic nature. Unlike other businesses, diversified corporations are characterized by multidimensional arrays of data with a high degree of distortion and fragmentation of information, due to the cumulative effect of the incompleteness and distortion of accounting information from the enterprises within it. Under these conditions, the applied methods and tools must have high resolution, work effectively with large databases with incomplete information, and ensure correct, comparable quantitative processing of heterogeneous factors measured in different units. It is therefore necessary to select or develop methods that can handle complex, poorly formalized tasks. This fact substantiates the relevance of the problem of developing models, methods and tools for forecasting the development of diversified corporations, which is the subject of this work. The work aims to: (1) analyze forecasting methods to justify the choice of system-cognitive analysis as an effective method for the prediction of semi-structured tasks; (2) adapt and develop the method of system-cognitive analysis for forecasting the dynamics of the corporation's development subject to a scenario approach; (3) develop predictive model scenarios of changes in the basic economic indicators of the corporation's development and assess their credibility; (4) determine the analytical form of the dependence between past and future scenarios of various economic indicators; (5) develop analytical models weighing predicted scenarios, taking into account all prediction results with positive levels of similarity, to increase the reliability of forecasts; (6) develop a calculation procedure to assess the strength of influence on the corporation (sensitivity of its

  7. Interacting Entropy-Corrected Holographic Scalar Field Models in Non-Flat Universe

    Institute of Scientific and Technical Information of China (English)

    A. Khodam-Mohammadi; M. Malekjani

    2011-01-01

    In this work we establish a correspondence between the tachyon, k-essence and dilaton scalar field models and the interacting entropy-corrected holographic dark energy (ECHDE) model in a non-flat FRW universe. The reconstruction of the potentials and dynamics of these scalar fields according to the evolutionary behavior of the interacting ECHDE model is carried out. It is shown that the phantom divide cannot be crossed in the ECHDE tachyon model, while it is achieved in the ECHDE k-essence and ECHDE dilaton scenarios. Finally, we calculate the limiting case of the interacting ECHDE model without entropy correction.

  8. Short-term Wind Speed Prediction with Support Vector Machine Based on Prediction Error Correction

    Institute of Scientific and Technical Information of China (English)

    周松林; 茆美琴; 苏建徽

    2012-01-01

    Accurate prediction of wind speed is important for power departments to adjust dispatching plans in time. A support vector machine (SVM) model was established for forecasting wind speed, and a new idea of using prediction-error correction to improve the prediction accuracy was proposed. An SVM model is first built for a preliminary prediction of wind speed; the training and testing errors are then used to construct samples for an error prediction model based on a wavelet-support vector machine, and the predicted errors are finally used to correct the preliminary wind speed predictions. Simulation results show that the proposed method can significantly improve the prediction accuracy, and that the method is simple, clear and robust, which allows it to be extended to long-term wind speed prediction, load prediction, and other prediction fields.
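
    A minimal sketch of the two-stage idea with scikit-learn (plain SVR for both stages on synthetic data; the paper's error model is a wavelet-support vector machine and its sample construction differs):

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)
        speeds = 8 + 2 * np.sin(np.arange(300) / 10) + rng.normal(0, 0.5, 300)

        def lagged(x, n_lags=6):  # predict v_t from the previous n_lags values
            X = np.column_stack([x[i : len(x) - n_lags + i] for i in range(n_lags)])
            return X, x[n_lags:]

        X, y = lagged(speeds)
        split = 200
        base = SVR(C=10.0).fit(X[:split], y[:split])

        # Second stage: predict the first stage's error from the same inputs.
        resid = y[:split] - base.predict(X[:split])
        err_model = SVR(C=10.0).fit(X[:split], resid)

        corrected = base.predict(X[split:]) + err_model.predict(X[split:])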

  9. Statistical assessment of predictive modeling uncertainty

    Science.gov (United States)

    Barzaghi, Riccardo; Marotta, Anna Maria

    2017-04-01

    When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities while taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values that have better statistical significance and might help a sharper identification of the best-fitting geophysical models.
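
    The proposed test reduces to a covariance-weighted misfit; a minimal sketch (synthetic vectors and diagonal covariances for illustration):

        import numpy as np

        # chi2 = r^T (C_model + C_data)^(-1) r, with r = prediction - observation.
        def chi2(pred, obs, cov_model, cov_data):
            r = pred - obs
            return r @ np.linalg.solve(cov_model + cov_data, r)

        rng = np.random.default_rng(2)
        obs = rng.standard_normal(4)               # e.g. GPS-derived strain rates
        pred = obs + 0.1 * rng.standard_normal(4)  # model prediction
        c_m = 0.02 * np.eye(4)  # from empirical auto/cross-covariances of the model
        c_d = 0.01 * np.eye(4)  # data covariance
        print(chi2(pred, obs, c_m, c_d))  # compare with chi2 quantiles, dof = 4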

  10. A hierarchical Bayes error correction model to explain dynamic effects

    NARCIS (Netherlands)

    D. Fok (Dennis); C. Horváth (Csilla); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)

    2004-01-01

    For promotional planning and market segmentation it is important to understand the short-run and long-run effects of the marketing mix on category and brand sales. In this paper we put forward a sales response model to explain the differences in short-run and long-run effects of promotio

  11. Data correction for seven activity trackers based on regression models.

    Science.gov (United States)

    Andalibi, Vafa; Honko, Harri; Christophe, Francois; Viik, Jari

    2015-08-01

    Using an activity tracker for measuring activity-related parameters, e.g. steps and energy expenditure (EE), can be very helpful in assisting a person's fitness improvement. Unlike the counting of steps, an accurate EE estimation requires additional personal information as well as an accurate velocity of movement, which is hard to achieve due to the inaccuracy of sensors. In this paper, we have evaluated regression-based models to improve the precision of both step and EE estimation. For this purpose, data from seven activity trackers and two reference devices were collected from 20 young adult volunteers wearing all devices at once in three different tests, namely 60-minute office work, 6-hour overall activity and 60-minute walking. The reference data are used to create regression models for each device, and the relative percentage errors of the adjusted values are then statistically compared to those of the original values. The effectiveness of the regression models is determined based on the result of a statistical test. During the walking period, EE measurement was improved in all devices, and the step measurement was also improved in five of them. The results show that improvement of EE estimation is possible with only a low-cost implementation of a fitting model over the collected data, e.g. in the app or in the corresponding service back-end.
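
    A minimal sketch of the per-device correction (ordinary linear regression of tracker readings against the reference; the data below are invented and the statistical comparison of relative errors is omitted):

        import numpy as np

        tracker = np.array([4200.0, 5100.0, 6300.0, 7400.0, 9000.0])    # device steps
        reference = np.array([4500.0, 5600.0, 6800.0, 8100.0, 9700.0])  # reference steps

        slope, intercept = np.polyfit(tracker, reference, deg=1)
        adjusted = slope * tracker + intercept

        err_before = np.abs(tracker - reference) / reference * 100
        err_after = np.abs(adjusted - reference) / reference * 100
        print(err_before.mean(), err_after.mean())  # relative % error, before vs after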

  12. Thermodynamics of O(N) sigma models : 1/N corrections

    NARCIS (Netherlands)

    Andersen, JO; Boer, D; Warringa, HJ

    2004-01-01

    The thermodynamics of the O(N) linear and nonlinear sigma models in 3+1 dimensions is studied. We calculate the pressure to next-to-leading order in the 1/N expansion and show that at this order, temperature-independent renormalization is only possible at the minimum of the effective potential. The

  13. Seasonal Predictability in a Model Atmosphere.

    Science.gov (United States)

    Lin, Hai

    2001-07-01

    The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere.A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions.Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.

  14. Predicting coastal cliff erosion using a Bayesian probabilistic model

    Science.gov (United States)

    Hapke, C.; Plant, N.

    2010-01-01

    Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70-90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.

  15. The prognostic value of pre-operative predicted forced vital capacity in corrective spinal surgery for Duchenne's muscular dystrophy.

    Science.gov (United States)

    Harper, C M; Ambler, G; Edge, G

    2004-12-01

    The majority of patients with Duchenne's muscular dystrophy require corrective spinal surgery for scoliosis to maintain seated balance and to slow the progression of respiratory compromise, thereby facilitating nursing and enhancing their quality of life. Traditionally, patients with a pre-operative forced vital capacity (PFVC) of 30% or below predicted have been denied this surgery, as it was thought that the incidence of postoperative complications was unacceptably high. We present data collected prospectively from 45 consecutive operations undertaken in our unit. These cases indicate that there is no clinically significant difference in operative and postoperative outcomes between patients with a PFVC above and below 30% of predicted vital capacity.

  16. A kinetic model for predicting biodegradation.

    Science.gov (United States)

    Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O

    2007-01-01

    Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10 day window' criterion as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives and simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.
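
    A sketch of the first-order assumption, extent(t) = E_max (1 - exp(-kt)), with an illustrative rate constant and a simplified version of the OECD 10-day-window check (60% ultimate degradation within 10 days of reaching 10%):

        import math

        def extent(t_days, k_per_day, e_max=1.0):
            # First-order kinetics of ultimate biodegradation.
            return e_max * (1.0 - math.exp(-k_per_day * t_days))

        k = 0.15                       # illustrative rate constant, 1/day
        half_life = math.log(2) / k    # primary half-life, ~4.6 days
        t10 = -math.log(1.0 - 0.10) / k          # time to reach 10% degradation
        passes = extent(t10 + 10.0, k) >= 0.60   # 10-day-window criterion
        print(f"half-life {half_life:.1f} d, 10-day window pass: {passes}")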

  17. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers, and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL’s IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  18. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  19. Correcting North Atlantic sea surface salinity biases in the Kiel Climate Model: influences on ocean circulation and Atlantic Multidecadal Variability

    Science.gov (United States)

    Park, T.; Park, W.; Latif, M.

    2016-10-01

    A long-standing problem in climate models is the large sea surface salinity (SSS) biases in the North Atlantic. In this study, we describe the influences of correcting these SSS biases on the circulation of the North Atlantic as well as on North Atlantic sector mean climate and decadal to multidecadal variability. We performed integrations of the Kiel Climate Model (KCM) with and without applying a freshwater flux correction over the North Atlantic. The quality of simulating the mean circulation of the North Atlantic Ocean, North Atlantic sector mean climate and decadal variability is greatly enhanced in the freshwater flux-corrected integration which, by definition, depicts relatively small North Atlantic SSS biases. In particular, a large reduction in the North Atlantic cold sea surface temperature bias is observed and a more realistic Atlantic Multidecadal Variability simulated. Improvements relative to the non-flux corrected integration also comprise a more realistic representation of deep convection sites, sea ice, gyre circulation and Atlantic Meridional Overturning Circulation. The results suggest that simulations of North Atlantic sector mean climate and decadal variability could strongly benefit from alleviating sea surface salinity biases in the North Atlantic, which may enhance the skill of decadal predictions in that region.

  20. Impacts of Sea Surface Salinity Bias Correction on North Atlantic Ocean Circulation and Climate Variability in the Kiel Climate Model

    Science.gov (United States)

    Park, Taewook; Park, Wonsun; Latif, Mojib

    2016-04-01

    We investigated impacts of correcting North Atlantic sea surface salinity (SSS) biases on the ocean circulation of the North Atlantic and on North Atlantic sector mean climate and climate variability in the Kiel Climate Model (KCM). Bias reduction was achieved by applying a freshwater flux correction over the North Atlantic to the model. The quality of simulating the mean circulation of the North Atlantic Ocean, North Atlantic sector mean climate and decadal variability is greatly enhanced in the freshwater flux-corrected integration which, by definition, depicts relatively small North Atlantic SSS biases. In particular, a large reduction in the North Atlantic cold sea surface temperature (SST) bias is observed and a more realistic Atlantic Multidecadal Variability (AMV) simulated. Improvements relative to the non-flux corrected integration also comprise a more realistic representation of deep convection sites, sea ice, gyre circulation and Atlantic Meridional Overturning Circulation (AMOC). The results suggest that simulations of North Atlantic sector mean climate and decadal variability could strongly benefit from alleviating sea surface salinity biases in the North Atlantic, which may enhance the skill of decadal predictions in that region.

  1. Corrections to the Shapiro Equation used to Predict Sweating and Water Requirements

    Science.gov (United States)

    2008-01-01

    [Only fragments of this record's text were extracted: a citation to Tseng, Durbin and Tzeng on fuzzy piecewise regression analysis for predicting non-linear turbulent time series, the clothing area factor relation facl = 1 + (0.2 · AD) (Eq. B-4), and a reference to Breckenridge's definition of the algebraic sum of the total (DRY) heat.]

  2. A MAXIMUM ENTROPY CHUNKING MODEL WITH N-FOLD TEMPLATE CORRECTION

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This letter presents a new chunking method based on a Maximum Entropy (ME) model with an N-fold template correction model. First, two types of machine learning models are described. Based on the analysis of the two models, a chunking model which combines the benefits of the conditional probability model and the rule-based model is proposed. The selection of features and rule templates in the chunking model is discussed. Experimental results for the CoNLL-2000 corpus show that this approach achieves impressive accuracy in terms of the F-score: 92.93%. Compared with the ME model and the ME Markov model, the new chunking model achieves better performance.

  3. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  4. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    Science.gov (United States)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.

  5. Sandmeier model based topographic correction to lunar spectral profiler (SP) data from KAGUYA satellite.

    Science.gov (United States)

    Chen, Sheng-Bo; Wang, Jing-Ran; Guo, Peng-Ju; Wang, Ming-Chang

    2014-09-01

The Moon may be considered the frontier base for deep space exploration. Spectral analysis is one of the key techniques for determining lunar surface rock and mineral compositions, but the lunar topographic relief is more pronounced than that of the Earth, so it is necessary to apply a topographic correction to lunar spectral data before they are used to retrieve compositions. In the present paper, a lunar Sandmeier model is proposed that considers the radiance effect of the macro and ambient topographic relief, and a reflectance correction model is derived from the Sandmeier model. The Spectral Profiler (SP) data from the KAGUYA satellite over the Sinus Iridum quadrangle are taken as an example, and digital elevation data from the Lunar Orbiter Laser Altimeter are used to calculate the slope, aspect, incidence and emergence angles, and terrain-viewing factor for the topographic correction. The lunar surface reflectance from the SP data was then corrected by the proposed model after the direct component of irradiance on a horizontal surface was derived. As a result, high spectral reflectance on slopes facing the sun is decreased and low spectral reflectance on slopes facing away from the sun is compensated. The statistical histogram of reflectance-corrected pixel numbers presents a Gaussian distribution. The model is therefore robust for correcting the lunar topographic effect and estimating lunar surface reflectance.
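
    For orientation, the snippet below shows only the direct-irradiance geometry that such a topographic correction builds on, reducing an observed slope reflectance to an equivalent horizontal-surface value. The full Sandmeier model in the paper additionally treats diffuse irradiance and the terrain-viewing factor; all angles here are made-up examples.

```python
# Cosine-style direct-irradiance topographic correction on one tilted facet.
# Simplified relative of the Sandmeier model; angles are arbitrary examples.
import numpy as np

def cos_incidence(slope, aspect, sun_zenith, sun_azimuth):
    """cos(i) of the local solar incidence angle on a tilted facet (radians)."""
    return (np.cos(sun_zenith) * np.cos(slope)
            + np.sin(sun_zenith) * np.sin(slope) * np.cos(sun_azimuth - aspect))

slope, aspect = np.radians(20.0), np.radians(135.0)
sun_zenith, sun_azimuth = np.radians(40.0), np.radians(150.0)

cos_i = cos_incidence(slope, aspect, sun_zenith, sun_azimuth)
rho_slope = 0.18                                   # observed reflectance on the slope
rho_flat = rho_slope * np.cos(sun_zenith) / cos_i  # corrected to a horizontal surface
print(f"cos(i) = {cos_i:.3f}, corrected reflectance = {rho_flat:.3f}")
```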

  6. One-Loop Radiative Correction to the Triple Higgs Coupling in the Higgs Singlet Model

    CERN Document Server

    He, Shi-Ping

    2016-01-01

Though the 125 GeV Higgs boson is consistent with the standard model (SM) prediction until now, the triple coupling can deviate from the SM value in physics beyond the SM (BSM). In this paper, the radiative correction to the triple Higgs coupling is calculated in the minimal extension of the SM obtained by adding a real gauge singlet scalar. In this model there are two scalars, $h$ and $H$, and both are mixed states of the doublet and singlet. Provided that the mixing angle is set to zero, $h$ is the pure left-over of the doublet and its behavior is the same as in the SM except for the triple $h$ coupling. In this SM limit, the effect of the singlet $H$ decouples from the fermions and gauge bosons, and first shows up in the triple $h$ coupling. Our numerical results show that the deviation is sizable. For $\lambda_{\Phi{S}}=1$ (see text for the parameter definition), the deviation $\delta_{hhh}^{(1)}$ can be $40\%$. For $\lambda_{\Phi{S}}=1.5$, $\delta_{hhh}^{(1)}$ can reach $140\%$. The si...

  7. Combined resist and etch modeling and correction for the 45-nm node

    Science.gov (United States)

    Drapeau, Martin; Beale, Dan

    2006-10-01

Emerging resist and etch process technologies for the 45 nm node exhibit new types of non-optical proximity errors, thus placing new demands on OPC modeling tools. In a previous paper (SPIE Vol. 6283-75) we experimentally demonstrated a full resist and etch model calibration and verified the stability of the model using 45 nm node standard logic cells. The etch model used a novel non-linear etch modeling object in combination with conventional convolution kernels. Building upon those results, this paper focuses on the correction of patterns. We demonstrate a two-stage optical/resist and etch correction using calibrated models, including the use of non-linear etch modeling objects. Optical/resist and etch models are built separately and used sequentially to correct a 45 nm logic pattern. Critical areas of the pattern affected by etch are analyzed and used to verify the correction. Verification of the correction is obtained through comparison of the simulated contours with the design intent.

  8. Predictive Models for Photovoltaic Electricity Production in Hot Weather Conditions

    Directory of Open Access Journals (Sweden)

    Jabar H. Yousif

    2017-07-01

Full Text Available The process of finding a correct forecast equation for photovoltaic electricity production from renewable sources is an important matter, since knowing the factors affecting the increase in the proportion of renewable energy production and reducing the cost of the product has economic and scientific benefits. This paper proposes a mathematical model for forecasting energy production in photovoltaic (PV) panels based on a self-organizing feature map (SOFM) model. The proposed model is compared with other models, including the multi-layer perceptron (MLP) and support vector machine (SVM) models. Moreover, a mathematical model based on a polynomial function for fitting the desired output is proposed. Different practical measurement methods are used to validate the findings of the proposed neural and mathematical models, such as mean square error (MSE), mean absolute error (MAE), correlation (R), and the coefficient of determination (R2). The proposed SOFM model achieved a final MSE of 0.0007 in the training phase and 0.0005 in the cross-validation phase. In contrast, the SVM model resulted in a small MSE value equal to 0.0058, while the MLP model achieved a final MSE of 0.026 with a correlation coefficient of 0.9989, which indicates a strong relationship between input and output variables. The proposed SOFM model closely fits the desired results based on the R2 value, which is equal to 0.9555. Finally, the comparison of MAE for the three models shows that the SOFM model achieved the best result of 0.36156, whereas the SVM and MLP models yielded 4.53761 and 3.63927, respectively. A small MAE value indicates that the output of the SOFM model closely fits the actual results and predicts the desired output.
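
    The validation metrics quoted above are straightforward to reproduce; a minimal sketch on made-up predicted/actual values follows, assuming only NumPy.

```python
# Compute MSE, MAE, correlation R, and R^2 on toy predicted/actual values.
import numpy as np

actual = np.array([3.1, 4.0, 5.2, 6.8, 7.5])   # e.g., PV output, toy values
pred = np.array([3.0, 4.2, 5.0, 6.9, 7.2])

mse = np.mean((actual - pred) ** 2)
mae = np.mean(np.abs(actual - pred))
r = np.corrcoef(actual, pred)[0, 1]
r2 = 1 - np.sum((actual - pred) ** 2) / np.sum((actual - actual.mean()) ** 2)
print(f"MSE={mse:.4f}  MAE={mae:.4f}  R={r:.4f}  R2={r2:.4f}")
```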

  9. Probabilistic prediction models for aggregate quarry siting

    Science.gov (United States)

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, the Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
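
    A toy version of such a prospectivity model can be sketched as a logistic regression on geology, transportation, and population-density predictors. The variables, coefficients, and data below are fabricated for illustration and do not reproduce the WofE analysis.

```python
# Toy logistic-regression prospectivity model: P(quarry | geology, roads, population).
# All data and coefficients are synthetic stand-ins for the GIS layers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
geology_ok = rng.integers(0, 2, n)        # suitable rock unit present (0/1)
dist_road = rng.uniform(0, 20, n)         # km to nearest road
pop_density = rng.uniform(0, 1000, n)     # people per km^2

logit = -1.0 + 2.0 * geology_ok - 0.15 * dist_road - 0.002 * pop_density
quarry = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic quarry presence

X = np.column_stack([geology_ok, dist_road, pop_density])
model = LogisticRegression(max_iter=1000).fit(X, quarry)
print("coefficients:", np.round(model.coef_[0], 4))
print("P(quarry) at a favorable site:",
      round(model.predict_proba([[1, 2.0, 300.0]])[0, 1], 3))
```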

  10. Predicting Footbridge Response using Stochastic Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2013-01-01

    Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adapt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing s...... as it pinpoints which decisions to be concerned about when the goal is to predict footbridge response. The studies involve estimating footbridge responses using Monte-Carlo simulations and focus is on estimating vertical structural response to single person loading....

  11. Nonconvex Model Predictive Control for Commercial Refrigeration

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp

    2013-01-01

    is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...

  12. Oblique corrections from less-Higgsless models in warped space

    CERN Document Server

    Hatanaka, Hisaki

    2015-01-01

The Higgsless model in a warped extra dimension is reexamined. Dirichlet boundary conditions on the TeV brane are replaced with Robin boundary conditions, which are parameterized by a mass parameter $M$. We calculate the Peskin-Takeuchi precision parameters $S$, $T$ and $U$ at tree level. We find that to satisfy the constraints on the precision parameters at $99 \%$ [$95 \%$] confidence level (CL), the first Kaluza-Klein excited $Z$ boson, $Z'$, should be heavier than 5 TeV [8 TeV]. The magnitude of $M$, which is infinitely large in the original model, should be smaller than 200 GeV (70 GeV) for the curvature of the warped space $R^{-1}=10^{16}$ GeV ($10^{8}$ GeV) at $95\%$ CL. If the Robin boundary conditions are induced by the mass terms localized on the TeV brane, from the $99\%$ [$95\%$] bound we find that the brane mass interactions account for more than $97\%$ [$99\%$] of the masses of the $Z$ and $W$ bosons. Such a brane mass term is naturally interpreted as a vacuum expectation value of the Higgs scalar field...

  13. Predictive In Vivo Models for Oncology.

    Science.gov (United States)

    Behrens, Diana; Rolff, Jana; Hoffmann, Jens

    2016-01-01

Experimental oncology research and preclinical drug development both substantially require specific, clinically relevant in vitro and in vivo tumor models. The increasing knowledge about the heterogeneity of cancer has required a substantial restructuring of the test systems for the different stages of development. To be able to cope with the complexity of the disease, larger panels of patient-derived tumor models have to be implemented and extensively characterized. Together with individual genetically engineered tumor models and supported by core functions for expression profiling and data analysis, an integrated discovery process has been generated for predictive and personalized drug development. Improved "humanized" mouse models should help to overcome current limitations imposed by the xenogeneic barrier between humans and mice. Establishment of a functional human immune system and a corresponding human microenvironment in laboratory animals will strongly support further research. Drug discovery, systems biology, and translational research are moving closer together to address all the new hallmarks of cancer, increase the success rate of drug development, and increase the predictive value of preclinical models.

  14. Constructing predictive models of human running.

    Science.gov (United States)

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  15. Statistical Seasonal Sea Surface based Prediction Model

    Science.gov (United States)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically the Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of different oceanic, atmospheric, and health-related parameters influenced by the temperature of the sea surface as a defining factor of variability.

  16. INFLATION ANALYSIS IN NORTH SUMATRA: AN ERROR CORRECTION MODEL (ECM)

    Directory of Open Access Journals (Sweden)

    Hafsyah Aprilia

    2012-06-01

Full Text Available The research was conducted to determine the effect of economic variables that can explain changes or variation in the rate of inflation in the Consumer Price Index (CPI) as the dependent variable. The explanatory (independent) variables used as controls are the SBI rate, the nominal interest rate spread (SBI), and the value of the rupiah against the U.S. dollar. Based on these results, and according to the specific purpose of model equations II, economic actors are advised to use the SBI interest rate spread as an indicator of variations in the CPI inflation rate at intervals of 8 and 12 months, with the caveat that the obtained level of explanation is not yet optimal.
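
    For readers unfamiliar with the ECM form, a generic single-equation specification of the kind the abstract refers to is sketched below. The variable choice mirrors the text (CPI inflation, the SBI spread, the exchange rate), but the exact "model equations II" specification is not reproduced here.

```latex
% Generic error correction model (ECM); \lambda < 0 measures the speed of
% adjustment back to the long-run equilibrium relation in parentheses.
\Delta \pi_t = \alpha_0
  + \sum_{i=1}^{p} \beta_i \, \Delta \pi_{t-i}
  + \gamma_1 \, \Delta \mathrm{spread}_t
  + \gamma_2 \, \Delta e_t
  + \lambda \left( \pi_{t-1} - \theta_1 \, \mathrm{spread}_{t-1}
                   - \theta_2 \, e_{t-1} \right)
  + \varepsilon_t
```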

  17. Predicting and understanding forest dynamics using a simple tractable model.

    Science.gov (United States)

    Purves, Drew W; Lichstein, Jeremy W; Strigul, Nikolay; Pacala, Stephen W

    2008-11-04

The perfect-plasticity approximation (PPA) is an analytically tractable model of forest dynamics, defined in terms of parameters for individual trees, including allometry, growth, and mortality. We estimated these parameters for the eight most common species on each of four soil types in the US Lake states (Michigan, Wisconsin, and Minnesota) by using short-term inventory data, and compared model predictions to chronosequences of stand development. Predictions for the timing and magnitude of basal area dynamics and ecological succession on each soil were accurate, and predictions for the diameter distribution of 100-year-old stands were correct in form and slope. For a given species, the PPA provides analytical metrics for early-successional performance (H(20), height of a 20-year-old open-grown tree) and late-successional performance (Z*, equilibrium canopy height in monoculture). These metrics predicted which species were early or late successional on each soil type. Decomposing Z* showed that (i) succession is driven both by superior understory performance and superior canopy performance of late-successional species, and (ii) performance differences primarily reflect differences in mortality rather than growth. The predicted late-successional dominants matched chronosequences on xeromesic (Quercus rubra) and mesic (codominance by Acer rubrum and Acer saccharum) soils. On hydromesic and hydric soils, the literature reports that the current dominant species in old stands (Thuja occidentalis) is now failing to regenerate. Consistent with this, the PPA predicted that, on these soils, stands are now succeeding to dominance by other late-successional species (e.g., Fraxinus nigra, A. rubrum).

  18. A method of color correction of camera based on HSV model

    Science.gov (United States)

    Zhao, Rujin; Wang, Jin; Yu, Guobing; Zhong, Jie; Zhou, Wulin; Li, Yihao

    2014-09-01

A novel color correction method for cameras based on the HSV (Hue, Saturation, Value) model is proposed in this paper, addressing the problem that the spectral response of a camera differs from the CIE criterion and that the image color of the camera is therefore aberrant. Firstly, the color of the image is corrected based on the HSV model, to which the image is transformed from the RGB model; as a result, the image color accords with human vision, owing to the coherence between the HSV model and human vision. Secondly, a color checker with 24 color patches under a standard light source is used to compute the correction coefficient matrix, which reconciles the spectral response of the camera with the CIE criterion. Furthermore, the 24-patch color checker improves the applicability of the color correction coefficient matrix to different images. The experimental results show that the color difference between the corrected color and the color checker is lower with the proposed method, and the corrected image color is consistent with the human eye.
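
    A minimal sketch of the RGB-to-HSV-and-back pipeline is given below using only the Python standard library; the per-channel gains stand in for the ColorChecker-derived correction coefficients in the paper and are placeholder values.

```python
# Convert RGB to HSV, adjust the channels, convert back.
# The gains are placeholders, not the paper's fitted correction matrix.
import colorsys

def correct_pixel(r, g, b, sat_gain=1.1, val_gain=0.95):
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    s = min(1.0, s * sat_gain)   # boost saturation
    v = min(1.0, v * val_gain)   # tame brightness
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

print(correct_pixel(180, 120, 90))
```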

  19. Real-Time Corrected Traffic Correlation Model for Traffic Flow Forecasting

    Directory of Open Access Journals (Sweden)

    Hua-pu Lu

    2015-01-01

Full Text Available This paper focuses on the problem of short-term traffic flow forecasting. The main goal is to put forward a traffic correlation model and a real-time correction algorithm for traffic flow forecasting. The traffic correlation model is established based on the temporal-spatial-historical correlation characteristics of traffic big data. In order to simplify the traffic correlation model, this paper presents a correction coefficient optimization algorithm. Considering the multistate characteristic of traffic big data, a dynamic part is added to the traffic correlation model. A real-time correction algorithm based on a fuzzy neural network is presented to overcome the nonlinear mapping problem. A case study based on a real-world road network in Beijing, China, is implemented to test the efficiency and applicability of the proposed modeling methods.

  20. Force-reflecting teleoperation of robots based on on-line correction of a virtual model

    Institute of Scientific and Technical Information of China (English)

    LIU Wei; SONG Aiguo; LI Huijun

    2007-01-01

Virtual reality is an effective method to eliminate the influence of time delay. However, it depends on the precision of the virtual model. In this paper, we introduce a method that corrects the virtual model on-line to establish a more precise model. The geometric errors of the virtual model were corrected on-line by overlapping the graphics over the images and by fusing the position and force information from the remote site. Then the sliding average least squares (SALS) method was adopted to determine the mass, damping, and stiffness of the remote environment, and this information was used to amend the dynamic model of the environment. Experimental results demonstrate that the proposed on-line correction method can effectively reduce the impact caused by time delay and improve the operational performance of the teleoperation system.

  1. Predictive modeling by the cerebellum improves proprioception.

    Science.gov (United States)

    Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J

    2013-09-04

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.

  2. QCD Corrections to K-Kbar Mixing in R-symmetric Supersymmetric Models

    CERN Document Server

    Blechman, Andrew E

    2008-01-01

    The leading-log QCD corrections to K-Kbar mixing in R-symmetric supersymmetric models are computed using effective field theory techniques. The spectrum topology where the gluino is significantly heavier than the squarks is motivated and focused on. It is found that, like in the MSSM, QCD corrections can tighten the kaon mass difference bound by roughly a factor of three. CP violation is also briefly considered, where QCD corrections can constrain phases to be as much as a factor of ten smaller than the uncorrected value.

  3. QCD Corrections to K-Kbar Mixing in R-symmetric Supersymmetric Models

    OpenAIRE

    Blechman, Andrew E.; Ng, Siew-Phang

    2008-01-01

    The leading-log QCD corrections to K-Kbar mixing in R-symmetric supersymmetric models are computed using effective field theory techniques. The spectrum topology where the gluino is significantly heavier than the squarks is motivated and focused on. It is found that, like in the MSSM, QCD corrections can tighten the kaon mass difference bound by roughly a factor of three. CP violation is also briefly considered, where QCD corrections can constrain phases to be as much as a factor of ten small...

  4. Finite Size Corrections to the Excitation Energy Transfer in a Massless Scalar Interaction Model

    CERN Document Server

    Maeda, N; Tobita, Y; Ishikawa, K

    2016-01-01

    We study the excitation energy transfer (EET) for a simple model in which a virtual massless scalar particle is exchanged between two molecules. If the time interval is finite, then the finite size effect generally appears in a transition amplitude through the regions where the wave nature of quanta remains. We calculated the transition amplitude for EET and obtained finite size corrections to the standard formula derived by using Fermi's golden rule. These corrections for the transition amplitude appear outside the resonance energy region. The estimation in a photosynthesis system indicates that the finite size correction could reduce the EET time considerably.

  5. Quantum gravity corrections to the standard model Higgs in Einstein and $R^2$ gravity

    CERN Document Server

    Abe, Yugo; Inami, Takeo

    2016-01-01

We evaluate quantum gravity corrections to the standard model Higgs potential $V(\phi)$ à la Coleman-Weinberg and examine the stability question of $V(\phi)$ at scales of the Planck mass $M_{\rm Pl}$. We compute the gravity one-loop corrections by using the momentum cut-off in Einstein gravity. The gravity corrections affect the potential in a significant manner for values of $\Lambda = (1 - 3)M_{\rm Pl}$. In view of reducing the UV cut-off dependence, we also make a similar study in $R^2$ gravity.

  6. Corrections to scaling in random resistor networks and diluted continuous spin models near the percolation threshold.

    Science.gov (United States)

    Janssen, Hans-Karl; Stenull, Olaf

    2004-02-01

We investigate corrections to scaling induced by irrelevant operators in randomly diluted systems near the percolation threshold. The specific systems that we consider are the random resistor network and a class of continuous spin systems, such as the x-y model. We focus on a family of least irrelevant operators and determine the corrections to scaling that originate from this family. Our field theoretic analysis carefully takes into account that irrelevant operators mix under renormalization. It turns out that long-standing results on corrections to scaling are incorrect (random resistor networks) or incomplete (continuous spin systems).

  7. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    Science.gov (United States)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

    We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d =3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  8. Long range correction for multi-site Lennard-Jones models and planar interfaces

    CERN Document Server

    Werth, Stephan; Horsch, Martin; Vrabec, Jadran; Hasse, Hans

    2015-01-01

A slab-based long-range correction approach for multi-site Lennard-Jones models in systems with a planar film geometry is presented, building on the work of Janecek, J. Phys. Chem. B 110: 6264 (2006). It is efficient because it relies on a center-of-mass cutoff scheme, and its numerical cost scales almost perfectly with the molecule number. For validation, a series of simulations with the two-center Lennard-Jones model fluid, carbon dioxide, and cyclohexane is carried out. The results of the present approach, a site-based long-range correction, and simulations without any long-range correction are compared with respect to the saturated liquid density and the surface tension. The present simulation results exhibit only a weak dependence on the cutoff radius, indicating a high accuracy of the implemented long-range correction.

  9. Dose correction for the Michaelis-Menten approximation of the target-mediated drug disposition model.

    Science.gov (United States)

    Yan, Xiaoyu; Krzyzanski, Wojciech

    2012-04-01

The Michaelis-Menten (M-M) approximation of the target-mediated drug disposition (TMDD) pharmacokinetic (PK) model was derived based on the rapid binding (RB) or quasi steady-state (QSS) assumptions, which imply that target-drug binding and dissociation are in equilibrium. However, the initial dose for an IV bolus injection in the M-M model did not account for the fraction bound to the target. We postulated a correction to the initial condition that is consistent with the assumptions underlying the M-M approximation. We determined that the difference between the injected dose and the one that should be used for the initial condition equals the amount of drug bound to the target upon reaching equilibrium. We also observed that the corrected initial condition made the internalization rate constant an identifiable parameter, which it was not for the original M-M model. Finally, we performed a simulation exercise to check whether the correction impacts the model performance and the bias of the M-M parameter estimates. We used literature data to simulate plasma drug concentrations described by the RB/QSS TMDD model. The simulated data were refitted by both models. All the parameters estimated from the original M-M model were substantially biased. On the other hand, the corrected M-M model is able to accurately estimate these parameters except for the equilibrium constant K(m). The weighted sum of squared residuals and the Akaike information criterion suggested a better performance of the corrected M-M model compared with the original M-M model. Further studies are necessary to determine the importance of this correction for applications of the M-M model to the analysis of TMDD-driven PK data.
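
    The proposed initial-condition correction can be computed in closed form: at binding equilibrium the free drug concentration solves a quadratic. The sketch below does this numerically; the dose, volume, target capacity, and equilibrium constant are invented illustrative values.

```python
# Corrected initial condition: injected dose minus the amount bound to the
# target at equilibrium. All parameter values are illustrative only.
import numpy as np

dose, V = 100.0, 5.0       # nmol, L  -> total concentration if all drug were free
R_tot, Km = 10.0, 2.0      # nM target capacity and equilibrium constant
C_tot = dose / V

# Free drug C solves C + R_tot*C/(Km + C) = C_tot (binding at equilibrium),
# i.e. C^2 + (Km + R_tot - C_tot)*C - Km*C_tot = 0.
b = Km + R_tot - C_tot
C_free = (-b + np.sqrt(b * b + 4 * Km * C_tot)) / 2
bound = R_tot * C_free / (Km + C_free)

print(f"uncorrected C0 = {C_tot:.3f} nM")
print(f"corrected   C0 = {C_free:.3f} nM (bound: {bound * V:.2f} nmol)")
```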

  10. Dose correction for the Michaelis–Menten approximation of the target-mediated drug disposition model

    Science.gov (United States)

    Yan, Xiaoyu

    2012-01-01

The Michaelis–Menten (M–M) approximation of the target-mediated drug disposition (TMDD) pharmacokinetic (PK) model was derived based on the rapid binding (RB) or quasi steady-state (QSS) assumptions, which imply that target-drug binding and dissociation are in equilibrium. However, the initial dose for an IV bolus injection in the M–M model did not account for the fraction bound to the target. We postulated a correction to the initial condition that is consistent with the assumptions underlying the M–M approximation. We determined that the difference between the injected dose and the one that should be used for the initial condition equals the amount of drug bound to the target upon reaching equilibrium. We also observed that the corrected initial condition made the internalization rate constant an identifiable parameter, which it was not for the original M–M model. Finally, we performed a simulation exercise to check whether the correction impacts the model performance and the bias of the M–M parameter estimates. We used literature data to simulate plasma drug concentrations described by the RB/QSS TMDD model. The simulated data were refitted by both models. All the parameters estimated from the original M–M model were substantially biased. On the other hand, the corrected M–M model is able to accurately estimate these parameters except for the equilibrium constant Km. The weighted sum of squared residuals and the Akaike information criterion suggested a better performance of the corrected M–M model compared with the original M–M model. Further studies are necessary to determine the importance of this correction for applications of the M–M model to the analysis of TMDD-driven PK data. PMID:22215144

  11. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

Full Text Available Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables and has outperformed numerous other models and statistical methods. Results: We arrived at a model that was able to accurately predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under the curve of 82.6%. Like other similar studies that have used the RF model, we also obtained very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see wider application.
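
    A minimal scikit-learn sketch of such a Random Forest recurrence classifier, evaluated with the same metrics (sensitivity, specificity, AUC), is shown below; the synthetic features and labels stand in for the 198-patient chart-review data, which are not public.

```python
# Random Forest recurrence classifier on synthetic stand-in data,
# scored by sensitivity, specificity, and AUC as in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 198
X = rng.normal(size=(n, 6))   # e.g., age, antibiotic exposure, PPI use, ...
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 1, n)) > 0.5   # recurrence yes/no

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

prob = rf.predict_proba(X_te)[:, 1]
tn, fp, fn, tp = confusion_matrix(y_te, prob > 0.5).ravel()
print(f"sensitivity={tp/(tp+fn):.3f} specificity={tn/(tn+fp):.3f} "
      f"AUC={roc_auc_score(y_te, prob):.3f}")
```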

  12. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  13. Artificial Neural Network Model for Predicting Compressive Strength of Concrete

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.

  14. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameter in attenuation relations is peak ground acceleration or spectral acceleration, because this parameter provides useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network started in 2003. In this study, new GMPMs are obtained based on new data from the Georgian seismic network and from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
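
    As an illustration of how such attenuation relations are estimated by regression, the sketch below fits a textbook functional form, ln(PGA) = a + b*M + c*ln(R) + site term, to synthetic records; the form and coefficients are generic assumptions, not the paper's model.

```python
# Least-squares fit of a generic GMPE form to synthetic records.
import numpy as np

rng = np.random.default_rng(3)
n = 300
M = rng.uniform(4.0, 7.0, n)          # magnitude
R = rng.uniform(5.0, 150.0, n)        # hypocentral distance, km
soft_site = rng.integers(0, 2, n)     # 1 = soft soil, 0 = rock

ln_pga = -2.0 + 1.1 * M - 1.6 * np.log(R) + 0.4 * soft_site
ln_pga += rng.normal(0, 0.5, n)       # aleatory variability

A = np.column_stack([np.ones(n), M, np.log(R), soft_site])
coef, *_ = np.linalg.lstsq(A, ln_pga, rcond=None)
print("a, b, c, site coefficient:", np.round(coef, 3))
```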

  15. Modeling and Prediction of Krueger Device Noise

    Science.gov (United States)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise components' respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small-scale experimental data, and two applications are discussed; one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.

  16. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which are used to predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR), since they allow an estimation of regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  17. Retrospective study comparing model-based deformation correction to intraoperative magnetic resonance imaging for image-guided neurosurgery.

    Science.gov (United States)

    Luo, Ma; Frisken, Sarah F; Weis, Jared A; Clements, Logan W; Unadkat, Prashin; Thompson, Reid C; Golby, Alexandra J; Miga, Michael I

    2017-07-01

    Brain shift during tumor resection compromises the spatial validity of registered preoperative imaging data that is critical to image-guided procedures. One current clinical solution to mitigate the effects is to reimage using intraoperative magnetic resonance (iMR) imaging. Although iMR has demonstrated benefits in accounting for preoperative-to-intraoperative tissue changes, its cost and encumbrance have limited its widespread adoption. While iMR will likely continue to be employed for challenging cases, a cost-effective model-based brain shift compensation strategy is desirable as a complementary technology for standard resections. We performed a retrospective study of [Formula: see text] tumor resection cases, comparing iMR measurements with intraoperative brain shift compensation predicted by our model-based strategy, driven by sparse intraoperative cortical surface data. For quantitative assessment, homologous subsurface targets near the tumors were selected on preoperative MR and iMR images. Once rigidly registered, intraoperative shift measurements were determined and subsequently compared to model-predicted counterparts as estimated by the brain shift correction framework. When considering moderate and high shift ([Formula: see text], [Formula: see text] measurements per case), the alignment error due to brain shift reduced from [Formula: see text] to [Formula: see text], representing [Formula: see text] correction. These first steps toward validation are promising for model-based strategies.

  18. Twenty-four hour predictions of the solar wind speed peaks by the probability distribution function model

    Science.gov (United States)

    Bussy-Virat, C. D.; Ridley, A. J.

    2016-10-01

    Abrupt transitions from slow to fast solar wind represent a concern for the space weather forecasting community. They may cause geomagnetic storms that can eventually affect systems in orbit and on the ground. Therefore, the probability distribution function (PDF) model was improved to predict enhancements in the solar wind speed. New probability distribution functions allow for the prediction of the peak amplitude and the time to the peak while providing an interval of uncertainty on the prediction. It was found that 60% of the positive predictions were correct, while 91% of the negative predictions were correct, and 20% to 33% of the peaks in the speed were found by the model. This represents a considerable improvement upon the first version of the PDF model. A direct comparison with the Wang-Sheeley-Arge model shows that the PDF model is quite similar, except that it leads to fewer false positive predictions and misses fewer events, especially when the peak reaches very high speeds.
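
    The quoted percentages are standard contingency-table statistics; the snippet below shows the bookkeeping on invented counts chosen to reproduce numbers of the same order.

```python
# Forecast verification from a 2x2 contingency table (counts are invented).
hits, false_alarms = 60, 40          # predicted peak: observed / not observed
misses, correct_negatives = 200, 2000

ppv = hits / (hits + false_alarms)   # fraction of positive predictions correct
npv = correct_negatives / (correct_negatives + misses)
pod = hits / (hits + misses)         # fraction of actual peaks detected

print(f"positive predictions correct: {ppv:.0%}")
print(f"negative predictions correct: {npv:.0%}")
print(f"peaks detected: {pod:.0%}")
```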

  19. Optimal feedback scheduling of model predictive controllers

    Institute of Scientific and Technical Information of China (English)

    Pingfang ZHOU; Jianying XIE; Xiaolong DENG

    2006-01-01

Model predictive control (MPC) could not be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus obtaining predictability in time. Optimal feedback scheduling (FS-CBS) of a set of MPC tasks is presented to maximize global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee schedulability of the total task set and stability of each component. The FS-CBS is shown to be robust against variation in the execution times of MPC tasks at runtime. Simulation results illustrate its effectiveness.

  20. Objective calibration of numerical weather prediction models

    Science.gov (United States)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to implementing the methodology for an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales: the sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the amount of computing resources required for the calibration of an NWP model. Three free model parameters, mainly affecting turbulence parameterization schemes, were originally selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature and 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
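
    The quadratic meta-model idea can be sketched compactly: fit a second-order polynomial surrogate of a skill score over the free parameters from a small ensemble of calibration runs, then optimize the cheap surrogate instead of the model itself. Everything in the sketch below (the stand-in skill function, parameter count, and sample sizes) is synthetic.

```python
# Quadratic meta-model surrogate of a forecast-skill score, then surrogate search.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(7)
n_params, n_runs = 3, 30
theta = rng.uniform(-1, 1, (n_runs, n_params))   # normalized parameter settings

def skill(p):   # stand-in for one model run plus verification scoring
    return -(p[0] - 0.3) ** 2 - 2 * (p[1] + 0.1) ** 2 - 0.5 * p[2] ** 2

scores = np.array([skill(p) for p in theta])

def design(P):
    """Design matrix with constant, linear, and second-order terms."""
    cols = [np.ones(len(P))] + [P[:, i] for i in range(n_params)]
    cols += [P[:, i] * P[:, j]
             for i, j in combinations_with_replacement(range(n_params), 2)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(design(theta), scores, rcond=None)

# Optimize the cheap surrogate on a dense random sample instead of the model.
cand = rng.uniform(-1, 1, (100_000, n_params))
best = cand[np.argmax(design(cand) @ coef)]
print("surrogate optimum near:", np.round(best, 2))  # true optimum (0.3, -0.1, 0.0)
```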

  1. Prediction models from CAD models of 3D objects

    Science.gov (United States)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.

  2. Model predictive control of MSMPR crystallizers

    Science.gov (United States)

    Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc

    2005-02-01

A multi-input multi-output (MIMO) control problem for isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers, which forms a dynamical system, is used; its state is represented by a vector of six variables: the first four leading moments of the crystal size, the solute concentration, and the solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean grain size and the crystal size distribution; the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability, as well as the coupling between the inputs and the outputs, were analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities for model reduction, a third-order model was found quite adequate for model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be controlled nearly separately by the residence time and the inlet solute concentration, respectively. With seeding, the controllability of the crystallizer increases significantly, and the overshoots and oscillations become smaller. The results of the control study show that linear MPC is an adaptable and feasible controller for continuous crystallizers.

  3. Ground Motion Prediction for the Vicinity by Using the Microtremor Site-effect Correction

    Science.gov (United States)

    Lin, C. M.; Wen, K. L.; Kuo, C. H.

    2015-12-01

This study develops a method that analyzes the seismograms of a strong-motion station and the microtremor site effects (H/V ratios) around it to predict the ground motion in its vicinity. The Hsinchu Science Park (HSP) in Taiwan was chosen as the study site. The horizontal S-wave seismograms of the TCU017 strong-motion station, located at the center of the HSP, were convolved with the difference in the microtremor H/V ratio between sites to synthesize the seismograms of several strong-motion stations around the HSP. Comparisons between synthetic and observed seismograms show that this method of ground motion prediction for the vicinity is feasible for far-field earthquakes; however, seismic source and attenuation effects make it ineffectual for near-field earthquakes. Because microtremor H/V ratios were measured at about 200 densely distributed sites in the HSP, the seismic ground motion distributions of some historical earthquakes were synthesized in this study. The synthetic ground motion distributions ignore seismic source and attenuation effects but still show notable variations in the HSP because of the seismic site effects.
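
    A minimal sketch of the spectral transfer idea, scaling a reference station's spectrum by the ratio of target-site to reference-site microtremor H/V curves, is given below; the signals and H/V shapes are synthetic stand-ins for the TCU017 records and measured curves.

```python
# Transfer a reference seismogram to a target site via the H/V spectral ratio.
# All signals and H/V curves below are synthetic placeholders.
import numpy as np

fs, n = 100.0, 4096
t = np.arange(n) / fs
ref = np.sin(2 * np.pi * 2.0 * t) * np.exp(-t)   # toy S-wave seismogram

freqs = np.fft.rfftfreq(n, 1 / fs)
hv_ref = 1.0 + 2.0 * np.exp(-((freqs - 2.0) / 0.5) ** 2)  # reference-site H/V
hv_tgt = 1.0 + 4.0 * np.exp(-((freqs - 1.2) / 0.4) ** 2)  # target-site H/V

synthetic = np.fft.irfft(np.fft.rfft(ref) * (hv_tgt / hv_ref), n)
print("peak amplitude, ref vs synthetic:",
      round(float(ref.max()), 3), round(float(synthetic.max()), 3))
```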

  4. An Anisotropic Hardening Model for Springback Prediction

    Science.gov (United States)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect under reverse loading, such as when material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for an anisotropic material yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.
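
    To make the Bauschinger effect concrete, the sketch below runs a 1D linear kinematic-hardening return-mapping loop (a much simpler relative of the modified Mroz model) through a tension-then-reversal strain path; the material constants are arbitrary demo values.

```python
# 1D linear kinematic hardening: after tension, the reverse yield point drops.
import numpy as np

E, H, sigma_y = 200e3, 20e3, 300.0   # MPa: modulus, hardening, initial yield
strain_path = np.concatenate([np.linspace(0, 0.01, 50),
                              np.linspace(0.01, -0.01, 100)])

eps_p, back = 0.0, 0.0
stress, backstress = [], []
for eps in strain_path:
    s_trial = E * (eps - eps_p)
    f = abs(s_trial - back) - sigma_y        # yield function
    if f > 0:                                # plastic step: 1D return mapping
        sign = np.sign(s_trial - back)
        dgamma = f / (E + H)
        eps_p += dgamma * sign
        back += H * dgamma * sign            # backstress drags the yield surface
    stress.append(E * (eps - eps_p))
    backstress.append(back)

i_peak = int(np.argmax(stress))
print(f"forward peak: {stress[i_peak]:.0f} MPa")
print(f"reverse yield begins near: {backstress[i_peak] - sigma_y:.0f} MPa, "
      f"well below the forward flow stress (Bauschinger effect)")
```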

  5. Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds

    Science.gov (United States)

    Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Vamal, Tamas

    2014-01-01

A two-layer model was developed in our earlier studies to estimate the clear sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary layer clouds and the molecular layer above, the major contributor to the reflectance enhancement near clouds at short wavelengths. We use LES/SHDOM-simulated 3D radiation fields to validate the two-layer model for the reflectance enhancement at 0.47 micrometers. We find that: (a) the simple model captures the viewing-angle dependence of the reflectance enhancement near clouds, suggesting the physics of the model is correct; and (b) the magnitude of the two-layer modeled enhancement agrees reasonably well with the "truth", with some expected underestimation. We further extend our model to include cloud-surface interaction using the Poisson model for broken clouds. We find that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, large cloud optical depth, large cloud fraction, and large cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km x 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.

  6. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...

  7. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    2009-01-01

This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...

  8. Dynamical corrections to the anomalous holographic soft-wall model: the pomeron and the odderon

    Energy Technology Data Exchange (ETDEWEB)

    Capossoli, Eduardo Folco [Instituto de Fisica, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ (Brazil); Colegio Pedro II, Departamento de Fisica, Rio de Janeiro, RJ (Brazil); Li, Danning [Institute of Theoretical Physics, Chinese Academy of Science (ITP, CAS), Beijing (China); Boschi-Filho, Henrique [Instituto de Fisica, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ (Brazil)

    2016-06-15

    In this work we use the holographic soft-wall AdS/QCD model with anomalous dimension contributions coming from two different QCD beta functions to calculate the masses of higher spin glueball states for both even and odd spins and their Regge trajectories, related to the pomeron and the odderon, respectively. We further investigate this model taking into account dynamical corrections due to a dilaton potential consistent with the Einstein equations in five dimensions. The results found in this work for the Regge trajectories within the anomalous soft-wall model with dynamical corrections are consistent with those present in the literature. (orig.)

  9. Active vibration control with model correction on a flexible laboratory grid structure

    Science.gov (United States)

    Schamel, George C., II; Haftka, Raphael T.

    1991-01-01

    This paper presents experimental and computational comparisons of three active damping control laws applied to a complex laboratory structure. Two reduced structural models were used with one model being corrected on the basis of measured mode shapes and frequencies. Three control laws were investigated, a time-invariant linear quadratic regulator with state estimation and two direct rate feedback control laws. Experimental results for all designs were obtained with digital implementation. It was found that model correction improved the agreement between analytical and experimental results. The best agreement was obtained with the simplest direct rate feedback control.

  10. Dynamical corrections to the anomalous holographic softwall model: the pomeron and the odderon

    CERN Document Server

    Capossoli, Eduardo Folco; Boschi-Filho, Henrique

    2016-01-01

    In this work we use the holographic softwall AdS/QCD model with anomalous dimension contributions coming from two different QCD beta functions to calculate the masses of higher spin glueball states for both even and odd spins and their respective Regge trajectories, related to the pomeron and the odderon, respectively. We further investigate this model taking into account dynamical corrections due to a dilaton potential consistent with the Einstein equations in five dimensions. The results found in this work for the Regge trajectories within the anomalous softwall model with dynamical corrections are consistent with those presented in the literature.

  11. Predictive modelling of ferroelectric tunnel junctions

    Science.gov (United States)

    Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.

    2016-05-01

    Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.

  12. Simple predictions from multifield inflationary models.

    Science.gov (United States)

    Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C

    2014-04-25

    We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N=100 fields, 95% of our Monte Carlo samples fall in the ranges n_s ∈ (0.9455, 0.9534), α ∈ (−9.741, −7.047) × 10⁻⁴, r ∈ (0.1445, 0.1449), and r_iso ∈ (0.02137, 3.510) × 10⁻³ for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.

  13. Reliability Analysis of a Composite Blade Structure Using the Model Correction Factor Method

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimiroy; Friis-Hansen, Peter; Berggreen, Christian

    2010-01-01

    This paper presents a reliability analysis of a composite blade profile. The so-called Model Correction Factor technique is applied as an effective alternative to the response surface technique. The structural reliability is determined by use of a simplified idealised analytical model which, in a probabilistic sense, is model-corrected so that, close to the design point, it represents the same structural behaviour as a realistic FE model. This approach leads to a considerable improvement of computational efficiency over classical response surface methods, because the numerically “cheap” idealised model is used as the response surface, while the time-consuming detailed model is called only a few times, until the simplified model is calibrated to the detailed model.
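
    The idea can be sketched in a few lines: scale a cheap idealized limit state so that it matches an expensive "detailed" model at the current design point, re-solve the reliability problem, and repeat. The one-dimensional models and parameters below are invented for illustration only.

        from math import erf, sqrt

        def r_detailed(x):          # stand-in for an expensive FE resistance model
            return 3.0 - 0.05 * x**2

        R_IDEAL = 3.0               # cheap idealized resistance (a constant here)

        # With limit state g(x) = nu*R_IDEAL - x and x ~ N(0,1), the FORM design
        # point of the corrected idealized model is simply x* = nu*R_IDEAL.
        x_star = 0.0
        for _ in range(20):
            nu = r_detailed(x_star) / R_IDEAL    # model correction factor
            x_star = nu * R_IDEAL                # re-solve FORM for corrected model

        beta = x_star                            # reliability index
        pf = 0.5 * (1.0 - erf(beta / sqrt(2.0))) # Phi(-beta)
        print(f"beta = {beta:.3f}, Pf = {pf:.2e}")

    The detailed model is evaluated only once per iteration, which is the source of the efficiency gain over fitting a global response surface.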

  14. Bias correction methods for regional climate model simulations considering the distributional parametric uncertainty underlying the observations

    Science.gov (United States)

    Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei

    2015-11-01

    In this paper, we present a comparative study of bias correction methods for regional climate model simulations considering the distributional parametric uncertainty underlying the observations/models. In traditional bias correction schemes, the statistics of the simulated model outputs are adjusted to those of the observation data. However, the model output and the observation data are only one case (i.e., realization) out of many possibilities, rather than being sampled from the entire population of a certain distribution, due to internal climate variability. This issue has not been considered in the bias correction schemes of existing climate change studies. Here, three approaches are employed to explore this issue, with the intention of providing a practical tool for bias correction of daily rainfall for use in hydrologic models: (1) a conventional method, (2) a non-informative Bayesian method, and (3) an informative Bayesian method using Weather Generator (WG) data. The results show some plausible uncertainty ranges of precipitation after correcting for the bias of RCM precipitation. The informative Bayesian approach yields an uncertainty range approximately 25-45% narrower than the non-informative Bayesian method after bias correction for the baseline period. This indicates that the prior distribution derived from the WG may assist in reducing the uncertainty associated with the parameters. The implications of our results are of great importance in hydrological impact assessments of climate change because they are related to actions for mitigation and adaptation to climate change. Since this is a proof-of-concept study that mainly illustrates the logic of the analysis for uncertainty-based bias correction, future research exploring the impacts of uncertainty on climate impact assessments and how to utilize uncertainty while planning mitigation and adaptation strategies is still needed.
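
    For reference, the "conventional" scheme the paper compares against amounts to quantile mapping; a sketch with synthetic gamma-distributed rainfall (the distributions are assumptions for illustration) is shown below, with the Bayesian variants adding parameter uncertainty on top of this mapping.

        import numpy as np

        def quantile_map(model_hist, obs_hist, model_future):
            """Empirical quantile-mapping bias correction of model output."""
            q = np.linspace(0.0, 1.0, 101)
            mq = np.quantile(model_hist, q)        # model climatology quantiles
            oq = np.quantile(obs_hist, q)          # observed climatology quantiles
            u = np.interp(model_future, mq, q)     # value -> model CDF probability
            return np.interp(u, q, oq)             # probability -> observed quantile

        rng = np.random.default_rng(1)
        obs = rng.gamma(2.0, 4.0, 3000)            # synthetic "observed" daily rain
        rcm = rng.gamma(2.0, 5.0, 3000)            # "RCM" output with a wet bias
        fut = rng.gamma(2.2, 5.0, 3000)            # future "RCM" output
        print("obs/rcm/corrected means:",
              obs.mean().round(2), rcm.mean().round(2),
              quantile_map(rcm, obs, fut).mean().round(2))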

  15. A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)

    2005-01-01

    The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format, which allows one to disentangle the immediate ...

  16. A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)

    2005-01-01

    The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format, which allows one to disentangle the immediate effect ...

  17. Predictions of models for environmental radiological assessment

    Energy Technology Data Exchange (ETDEWEB)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa, E-mail: suelip@ird.gov.br, E-mail: dejanira@irg.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Servico de Avaliacao de Impacto Ambiental, Rio de Janeiro, RJ (Brazil); Mahler, Claudio Fernando [Coppe. Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro (UFRJ) - Programa de Engenharia Civil, RJ (Brazil)

    2011-07-01

    In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose and the risk to human beings. Although it is recognized that specific local data are important for improving the quality of dose assessment results, in practice obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used and general parameters. The various models available utilize different mathematical approaches of different complexity, which can result in different predictions; thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. An intercomparison exercise of models supplied incompatible results for {sup 137}Cs and {sup 60}Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be compared on a common basis. The results of the intercomparison exercise are presented briefly. (author)

  18. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule such as which parts of it form sheets, helices or coils. Spatial and other properties are described by the higher order structures. The classification task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...

  19. A Modified Model Predictive Control Scheme

    Institute of Scientific and Technical Information of China (English)

    Xiao-Bing Hu; Wen-Hua Chen

    2005-01-01

    In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.
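
    The fallback mechanism can be illustrated with a toy double-integrator plant: solve the finite-horizon problem when time permits, otherwise reuse a shifted feasible sequence from a stored database. The plant, horizon and time budget below are invented, and the LMI-based region expansion of the paper is omitted; the unconstrained least-squares solve stands in for the constrained QP.

        import time
        import numpy as np

        A = np.array([[1.0, 0.1], [0.0, 1.0]])      # toy double integrator
        B = np.array([[0.005], [0.1]])
        N = 20                                       # prediction horizon

        def solve_horizon(x0):
            """Finite-horizon LQ by batch least squares (QP stand-in)."""
            F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
            G = np.zeros((2 * N, N))
            for i in range(N):
                for j in range(i + 1):
                    G[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B
            H = G.T @ G + 0.1 * np.eye(N)            # state cost + input cost
            return np.linalg.solve(H, -G.T @ (F @ x0))

        x = np.array([1.0, 0.0])
        stored_U = np.zeros(N)                       # database: last feasible sequence
        for k in range(50):
            t0 = time.perf_counter()
            U = solve_horizon(x)
            if time.perf_counter() - t0 > 0.05:      # computation-time limit hit:
                U = np.roll(stored_U, -1); U[-1] = 0 # use shifted stored sequence
            stored_U = U
            x = A @ x + B[:, 0] * U[0]               # apply first input only
        print("state after 50 steps:", x.round(4))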

  20. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous ... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid.

  1. Explicit model predictive control accuracy analysis

    OpenAIRE

    Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano

    2015-01-01

    Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapped convex regions, with affine control laws associated to each region of the partition. An actual implementation of this explicit MPC in low cost micro-controllers requires the data to be "quantized", i.e. repre...

  2. Predictivity of models with spontaneously broken non-Abelian discrete flavor symmetries

    Science.gov (United States)

    Chen, Mu-Chun; Fallbacher, Maximilian; Omura, Yuji; Ratz, Michael; Staudt, Christian

    2013-08-01

    In a class of supersymmetric flavor models predictions are based on residual symmetries of some subsectors of the theory such as those of the charged leptons and neutrinos. However, the vacuum expectation values of the so-called flavon fields generally modify the Kähler potential of the setting, thus changing the predictions. We derive simple analytic formulae that allow us to understand the impact of these corrections on the predictions for the masses and mixing parameters. Furthermore, we discuss the effects on the vacuum alignment and on flavor changing neutral currents. Our results can also be applied to non-supersymmetric flavor models.

  3. Predictivity of models with spontaneously broken non-Abelian discrete flavor symmetries

    CERN Document Server

    Chen, Mu-Chun; Omura, Yuji; Ratz, Michael; Staudt, Christian

    2013-01-01

    In a class of supersymmetric flavor models predictions are based on residual symmetries of some subsectors of the theory such as those of the charged leptons and neutrinos. However, the vacuum expectation values of the so-called flavon fields generally modify the Kähler potential of the setting, thus changing the predictions. We derive simple analytic formulae that allow us to understand the impact of these corrections on the predictions for the masses and mixing parameters. Furthermore, we discuss the effects on the vacuum alignment and on flavor changing neutral currents. Our results can also be applied to non-supersymmetric flavor models.

  4. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system, as new lots have to wait until the previous lot is measured. One solution is to use a less dense overlay sampling scheme and computationally up-sample the data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system, shown in Fig. 1, that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals when determining the fingerprint, and in better on-product overlay performance.
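
    The hybrid idea, reduced to one dimension with invented numbers, is: fit a global model to the sparse measurements, up-sample it, then fold the measured residuals back in so that localized errors survive in the dense fingerprint.

        import numpy as np

        rng = np.random.default_rng(0)
        xs = np.linspace(-1.0, 1.0, 15)                  # sparse measurement sites
        smooth = 0.8 * xs**2 - 0.1 * xs                  # global fingerprint
        local = np.where(np.abs(xs - 0.43) < 0.08, 0.5, 0.0)  # a localized error
        y = smooth + local + 0.02 * rng.standard_normal(xs.size)

        coef = np.polyfit(xs, y, 3)                      # global model fit
        dense_x = np.linspace(-1.0, 1.0, 300)
        upsampled = np.polyval(coef, dense_x)            # computational up-sampling

        resid = y - np.polyval(coef, xs)                 # what the global model missed
        hybrid = upsampled + np.interp(dense_x, xs, resid)  # hybrid fingerprint
        print("max |local residual| folded back:", np.abs(resid).max().round(3))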

  5. The Simulation and Correction to the Brain Deformation Based on the Linear Elastic Model in IGS

    Institute of Scientific and Technical Information of China (English)

    MU Xiao-lan; SONG Zhi-jian

    2004-01-01

    Brain deformation is a vital factor affecting the precision of IGS, and simulating and correcting it has recently become a research hotspot. Among the research organizations that first addressed brain deformation with physical models are the Image Processing and Analysis department of Yale University and the Biomedical Modeling Lab of Vanderbilt University. The former uses the linear elastic model; the latter uses the consolidation model.

  6. Modeling and post-correction of pipeline analog-digital converters

    OpenAIRE

    Medawar, Samer

    2010-01-01

    Integral nonlinearity (INL) for pipelined analog-digital converters (ADCs) operating at radio frequency is measured and characterized. A parametric model for the INL of pipelined ADCs is proposed and the corresponding least-squares problem is formulated and solved. The estimated model parameters are used to design a post-correction block in order to compensate the pipeline ADC. The INL is modeled both with respect to the ADC output code k and the frequency stimuli, which is dynamic modeling. ...
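
    A static, one-dimensional reduction of this idea (the paper's model is additionally dynamic, i.e., frequency dependent) can be sketched as: fit a parametric INL model over the output codes by least squares, then build a post-correction lookup table. All numbers below are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        codes = np.arange(1024, dtype=float)         # 10-bit ADC output codes

        # Synthetic characterization data: INL in LSB per output code.
        inl = 0.8 * np.sin(2 * np.pi * codes / 1024) + 0.05 * rng.standard_normal(1024)

        coef = np.polyfit(codes, inl, 5)             # parametric INL model (LS fit)
        lut = codes - np.polyval(coef, codes)        # post-correction lookup table

        k = 512                                      # any raw ADC output code
        print(f"raw code {k} -> corrected value {lut[k]:.2f} LSB")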

  7. Critical conceptualism in environmental modeling and prediction.

    Science.gov (United States)

    Christakos, G

    2003-10-15

    Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.

  8. Prediction beyond the survey sample: correcting for survey effects on consumer decisions.

    NARCIS (Netherlands)

    C. Heij (Christiaan); Ph.H.B.F. Franses (Philip Hans)

    2006-01-01

    Direct extrapolation of survey results on purchase intentions may give a biased view on actual consumer behavior. This is because the purchase intentions of consumers may be affected by the survey itself. On the positive side, such effects can be incorporated in econometric models to get ...

  9. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  10. An effort allocation model considering different budgetary constraint on fault detection process and fault correction process

    Directory of Open Access Journals (Sweden)

    Vijay Kumar

    2016-01-01

    Fault detection process (FDP) and fault correction process (FCP) are important phases of the software development life cycle (SDLC). It is essential for software to undergo a testing phase, during which faults are detected and corrected. The main goal of this article is to allocate the testing resources in an optimal manner to minimize the cost during the testing phase using FDP and FCP under a dynamic environment. In this paper, we first assume there is a time lag between fault detection and fault correction; thus, removal of a fault is performed after the fault is detected. In addition, the detection process and the correction process are taken to be independent simultaneous activities with different budgetary constraints. A structured optimal policy based on optimal control theory is proposed for software managers to optimize the allocation of the limited resources under reliability criteria. Furthermore, the release policy for the proposed model is also discussed. A numerical example is given in support of the theoretical results.

  11. Gravity loop corrections to the standard model Higgs in Einstein gravity

    CERN Document Server

    Abe, Yugo; Inami, Takeo

    2016-01-01

    We study quantum gravity corrections to the standard model Higgs potential $V_{\rm eff}(\phi)$ à la Coleman-Weinberg and examine the stability question of $V_{\rm eff}(\phi)$ around the Planck mass scale, $\mu \simeq M_{\rm Pl}$ ($M_{\rm Pl} = 1.22 \times 10^{19}\,\mathrm{GeV}$). We calculate the gravity one-loop corrections by using the momentum cut-off $\Lambda$ in Einstein gravity. We show a significant difference between the effective potential $V_{\rm eff}(\phi)$ with and without gravity loop corrections in the energy region of $M_{\rm Pl}$ for $\Lambda = (1\sim3)M_{\rm Pl}$. We find that $V_{\rm eff}(\phi)$ possesses a minimum somewhere at $\mu \simeq M_{\rm Pl}$; this implies that the stability condition for $V_{\rm eff}(\phi)$ holds after gravity corrections are included.

  12. Corrected and improved model for educational simulation of neonatal cardiovascular pathophysiology.

    Science.gov (United States)

    Zijlmans, Mariken; Sá-Couto, Carla D; van Meurs, Willem L; Goodwin, Jane A; Andriessen, Peter

    2009-01-01

    We identified errors in the software implementation of the mathematical model presented in: Sá Couto CD, van Meurs WL, Goodwin JA, Andriessen P. A model for educational simulation of neonatal cardiovascular pathophysiology. Simul Healthcare 2006;1:4-12. Simulation results obtained with corrected code are presented for future reference. All but one of the simulation results do not differ by more than 9% from the previously published results. The heart rate response to acute loss of 30% of blood volume, simulated with corrected code is stronger than published target data. This modeling error was masked by errors in code implementation. We improved this response and the model by adjusting the gains and adding thresholds and saturations in the baroreflex model. General considerations on identification of model and code errors and model validity are presented.

  13. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals are considered to be Gaussian. Conventional FORM analysis yields the linearization point of the idealized limit-state surface. A model correction factor is then introduced to push the idealized limit-state surface onto the actual limit-state surface. A few iterations yield a good approximation of the reliability ... Keywords: first-order reliability method; model correction factor method; Nataf field integration; non-Gaussian random field; random field integration; structural reliability; pile foundation reliability.

  14. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euro per km per year [1]. Aiming to reduce such maintenance expenditure, this paper presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize predictive railway tamping activities for ballasted track over a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time ... recovery of the track quality after the tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is applied for a time period of two to four years in the proposed maintenance model. The total cost can be reduced by up to 50 ...

  15. On the Standard Model prediction for RK and RK*

    Science.gov (United States)

    Pattori, A.

    2016-11-01

    In this article a recent work is reviewed, in which we evaluated the impact of radiative corrections on RK and RK*. We find that, with the cuts presently applied by the LHCb Collaboration, such corrections do not exceed a few percent. Moreover, their effect is well described (and corrected for) by existing Monte Carlo codes. Our analysis reinforces the interest in these observables as clean probes of physics beyond the Standard Model.

  16. An employer brand predictive model for talent attraction and retention

    Directory of Open Access Journals (Sweden)

    Annelize Botha

    2011-02-01

    Orientation: In an ever-shrinking global talent pool, organisations use employer brand to attract and retain talent; however, in the absence of theoretical pointers, many organisations are losing out on a powerful business tool by not developing or maintaining their employer brand correctly. Research purpose: This study explores the current state of knowledge about employer brand and identifies the various employer brand building blocks, which are conceptually integrated in a predictive model. Motivation for the study: The need for scientific progress through the accurate representation of a set of employer brand phenomena and propositions, which can be empirically tested, motivated this study. Research design, approach and method: This study was nonempirical in approach and searched for linkages between theoretical concepts by making use of relevant contextual data. Theoretical propositions which explain the identified linkages were developed for purposes of further empirical research. Main findings: Key findings suggested that employer brand is influenced by target group needs, a differentiated Employer Value Proposition (EVP), the people strategy, brand consistency, communication of the employer brand and measurement of Human Resources (HR) employer branding efforts. Practical/managerial implications: The predictive model provides corporate leaders and their human resource functionaries a theoretical pointer relative to employer brand which could guide more effective talent attraction and retention decisions. Contribution/value add: This study adds to the small base of research available on employer brand and contributes to both scientific progress as well as an improved practical understanding of factors which influence employer brand.

  17. A predictive fitness model for influenza

    Science.gov (United States)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
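
    The core prediction step is compact enough to sketch: each strain's frequency is propagated by the exponential of its fitness, the sum of an adaptive epitope term and a (negative) mutational-load term. The three-strain numbers below are invented, not inferred from sequence data.

        import numpy as np

        freq = np.array([0.50, 0.30, 0.20])       # strain frequencies this season
        epitope_gain = np.array([0.8, 1.5, 0.2])  # adaptive epitope changes
        load = np.array([0.3, 0.9, 0.1])          # deleterious non-epitope mutations

        fitness = epitope_gain - load
        nxt = freq * np.exp(fitness)              # discrete-generation propagation
        nxt /= nxt.sum()                          # renormalize to frequencies
        print("predicted frequencies next season:", nxt.round(3))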

  18. Predictive Model of Radiative Neutrino Masses

    CERN Document Server

    Babu, K S

    2013-01-01

    We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z_4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: The hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with $\delta_{CP} = \pi$; and the effective mass in neutrinoless double beta decay lies in a narrow range, $m_{\beta\beta} = (17.6 - 18.5)$ meV. The ratio of vacuum expectation values of the two Higgs doublets, $\tan\beta$, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The non-standard neutral Higgs bosons, if t...

  19. Modeling and estimating change in temporal networks via a dynamic degree corrected stochastic block model

    CERN Document Server

    Wilson, James D; Woodall, William H

    2016-01-01

    In many applications it is of interest to identify anomalous behavior within a dynamic interacting system. Such anomalous interactions are reflected by structural changes in the network representation of the system. We propose and investigate the use of a dynamic version of the degree corrected stochastic block model (DCSBM) as a means to model and monitor dynamic networks that undergo a significant structural change. Our model provides a means to simulate a variety of local and global changes in a time-varying network. Furthermore, one can efficiently detect such changes using the maximum likelihood estimates of the parameters that characterize the DCSBM. We assess the utility of the dynamic DCSBM on both simulated and real networks. Using a simple monitoring strategy on the DCSBM, we are able to detect significant changes in the U.S. Senate co-voting network that reflects both times of cohesion and times of polarization among Republican and Democratic members. Our analysis suggests that the dynamic DCSBM pr...

  20. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of the deposition angle (α) and layer thickness (L) on the dimensional performance of FDM parts, using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole α range from 0° to 177° at 3° steps and two...

  1. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  2. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error method studied is based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model. The linear discrete-time stochastic state-space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model ...

  3. Kernel-based variance component estimation and whole-genome prediction of pre-corrected phenotypes and progeny tests for dairy cow health traits

    Directory of Open Access Journals (Sweden)

    Gota Morota

    2014-03-01

    Prediction of complex trait phenotypes in the presence of unknown gene action is an ongoing challenge in animals, plants, and humans. Development of flexible predictive models that perform well irrespective of genetic and environmental architectures is desirable. Methods that can address non-additive variation in a non-explicit manner are gaining attention for this purpose and, in particular, semi-parametric kernel-based methods have been applied to diverse datasets, mostly providing encouraging results. On the other hand, the gains obtained from these methods have been smaller when smoothed values such as estimated breeding value (EBV) have been used as response variables. However, less emphasis has been placed on the choice of phenotypes to be used in kernel-based whole-genome prediction. This study aimed to evaluate differences between semi-parametric and parametric approaches using two types of response variables and molecular markers as inputs. Pre-corrected phenotypes (PCP) and EBV obtained for dairy cow health traits were used for this comparison. We observed that non-additive genetic variances were major contributors to total genetic variances in PCP, whereas additivity was the largest contributor to variability of EBV, as expected. Within the kernels evaluated, non-parametric methods yielded slightly better predictive performance across traits relative to their additive counterparts regardless of the type of response variable used. This reinforces the view that non-parametric kernels aiming to capture non-linear relationships between a panel of SNPs and phenotypes are appealing for complex trait prediction. However, like past studies, the gain in predictive correlation was not large for either PCP or EBV. We conclude that capturing non-additive genetic variation, especially epistatic variation, in a cross-validation framework remains a significant challenge even when it is important, as seems to be the case for health traits in dairy cows.

  4. Kernel-based variance component estimation and whole-genome prediction of pre-corrected phenotypes and progeny tests for dairy cow health traits

    Science.gov (United States)

    Morota, Gota; Boddhireddy, Prashanth; Vukasinovic, Natascha; Gianola, Daniel; DeNise, Sue

    2014-01-01

    Prediction of complex trait phenotypes in the presence of unknown gene action is an ongoing challenge in animals, plants, and humans. Development of flexible predictive models that perform well irrespective of genetic and environmental architectures is desirable. Methods that can address non-additive variation in a non-explicit manner are gaining attention for this purpose and, in particular, semi-parametric kernel-based methods have been applied to diverse datasets, mostly providing encouraging results. On the other hand, the gains obtained from these methods have been smaller when smoothed values such as estimated breeding value (EBV) have been used as response variables. However, less emphasis has been placed on the choice of phenotypes to be used in kernel-based whole-genome prediction. This study aimed to evaluate differences between semi-parametric and parametric approaches using two types of response variables and molecular markers as inputs. Pre-corrected phenotypes (PCP) and EBV obtained for dairy cow health traits were used for this comparison. We observed that non-additive genetic variances were major contributors to total genetic variances in PCP, whereas additivity was the largest contributor to variability of EBV, as expected. Within the kernels evaluated, non-parametric methods yielded slightly better predictive performance across traits relative to their additive counterparts regardless of the type of response variable used. This reinforces the view that non-parametric kernels aiming to capture non-linear relationships between a panel of SNPs and phenotypes are appealing for complex trait prediction. However, like past studies, the gain in predictive correlation was not large for either PCP or EBV. We conclude that capturing non-additive genetic variation, especially epistatic variation, in a cross-validation framework remains a significant challenge even when it is important, as seems to be the case for health traits in dairy cows. PMID:24715901
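
    The contrast the study draws between additive and non-additive kernels can be sketched with kernel ridge regression on synthetic SNP data containing one epistatic interaction; the sample sizes, simulated architecture, ridge penalty and bandwidth heuristic below are all assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 150, 300
        X = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP genotypes 0/1/2
        y = (X @ (0.05 * rng.standard_normal(p))            # additive signal
             + 0.5 * X[:, 0] * X[:, 1]                      # epistatic interaction
             + rng.standard_normal(n))                      # noise

        sq = (X ** 2).sum(1)
        D = sq[:, None] + sq[None, :] - 2 * X @ X.T         # squared distances
        kernels = {"additive (linear)": X @ X.T / p,
                   "Gaussian": np.exp(-D / D.mean())}

        tr, te = np.arange(100), np.arange(100, n)
        for name, K in kernels.items():
            alpha = np.linalg.solve(K[np.ix_(tr, tr)] + np.eye(100), y[tr])
            pred = K[np.ix_(te, tr)] @ alpha                # kernel ridge prediction
            print(name, "predictive corr:", np.corrcoef(pred, y[te])[0, 1].round(3))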

  5. Predicting sex offender recidivism. I. Correcting for item overselection and accuracy overestimation in scale development. II. Sampling error-induced attenuation of predictive validity over base rate information.

    Science.gov (United States)

    Vrieze, Scott I; Grove, William M

    2008-06-01

    The authors demonstrate a statistical bootstrapping method for obtaining unbiased item selection and predictive validity estimates from a scale development sample, using data (N = 256) of Epperson et al. [2003 Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) technical paper: Development, validation, and recommended risk level cut scores. Retrieved November 18, 2006 from Iowa State University Department of Psychology web site: http://www.psychology.iastate.edu/~dle/mnsost_download.htm] from which the Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) was developed. Validity (area under the receiver operating characteristic curve) reported by Epperson et al. was .77 with 16 items selected. The present analysis yielded an asymptotically unbiased estimator AUC = .58. The present article also focused on the degree to which sampling error renders estimated cutting scores (appropriate to local [varying] recidivism base rates) nonoptimal, so that the long-run performance (measured by correct fraction, the total proportion of correct classifications) of these estimated cutting scores is poor when they are applied to their parent populations (having assumed values for AUC and recidivism rate). This was investigated by Monte Carlo simulation over a range of AUC and recidivism rate values. Results indicate that, except for AUC values higher than any that have ever been cross-validated, in combination with recidivism base rates severalfold higher than the literature average [Hanson and Morton-Bourgon, 2004, Predictors of sexual recidivism: An updated meta-analysis. (User report 2004-02.) Ottawa: Public Safety and Emergency Preparedness Canada], the user of an instrument similar in performance to the MnSOST-R cannot expect to achieve correct fraction performance notably in excess of what is achievable from knowing the population recidivism rate alone. The authors discuss the legal implications of their findings for procedural and substantive due process in ...
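
    The bootstrap correction for item overselection can be demonstrated on pure-noise items, where naive in-sample validation inflates the AUC in just the way the paper describes; everything below (item count, selection rule, scoring) is a simplified stand-in for the MnSOST-R procedure, not a reimplementation of it.

        import numpy as np

        rng = np.random.default_rng(0)

        def auc(scores, labels):
            """AUC via the rank-sum (Mann-Whitney) identity."""
            r = np.empty(len(scores))
            r[np.argsort(scores)] = np.arange(1, len(scores) + 1)
            pos = labels == 1
            n1, n0 = pos.sum(), (~pos).sum()
            return (r[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

        n, p, k = 256, 16, 5
        X = rng.standard_normal((n, p))              # candidate items: pure noise
        y = rng.integers(0, 2, n)                    # outcome

        def fit_score(Xtr, ytr, Xev):
            """Select the k items most correlated with outcome, sum them (signed)."""
            c = np.array([np.corrcoef(Xtr[:, j], ytr)[0, 1] for j in range(p)])
            top = np.argsort(-np.abs(c))[:k]         # the overselection step
            return (Xev[:, top] * np.sign(c[top])).sum(1)

        apparent = auc(fit_score(X, y, X), y)        # optimistic in-sample AUC
        optimism = []
        for _ in range(200):                         # bootstrap optimism estimate
            b = rng.integers(0, n, n)
            s_in = fit_score(X[b], y[b], X[b])       # apparent on bootstrap sample
            s_out = fit_score(X[b], y[b], X)         # same rule on original sample
            optimism.append(auc(s_in, y[b]) - auc(s_out, y))
        print(f"apparent AUC {apparent:.3f}, "
              f"optimism-corrected {apparent - np.mean(optimism):.3f}")

    With noise items the corrected value falls back toward 0.5, qualitatively mirroring the drop from .77 to .58 reported above.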

  6. Actuator Disc Model Using a Modified Rhie-Chow/SIMPLE Pressure Correction Algorithm

    DEFF Research Database (Denmark)

    Rethore, Pierre-Elouan; Sørensen, Niels

    2008-01-01

    An actuator disc model for the flow solver EllipSys (2D&3D) is proposed. It is based on a correction of the Rhie-Chow algorithm for using discrete body forces in a collocated-variable finite volume CFD code. It is compared with three cases where an analytical solution is known.

  7. Correction of magnetotelluric static shift by analysis of 3D forward modelling and measured test data

    Science.gov (United States)

    Zhang, Kun; Wei, Wenbo; Lu, Qingtian; Wang, Huafeng; Zhang, Yawei

    2016-06-01

    To solve the problem of correction of magnetotelluric (MT) static shift, we quantise factors that influence geological environments and observation conditions and study MT static shift according to 3D MT numerical forward modelling and field tests with real data collection. We find that static shift distortions affect both the apparent resistivity and the impedance phase. The distortion results are also related to the frequency. On the basis of synthetic and real data analysis, we propose the concept of generalised static shift resistivity (GSSR) and a new method for correcting MT static shift. The approach is verified by studying 2D inversion models using synthetic and real data.

  8. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals are considered to be Gaussian. Conventional FORM analysis yields the linearization point of the idealized limit-state surface. A model correction factor is then introduced to push the idealized limit-state surface onto the actual limit-state surface. A few iterations yield a good approximation of the reliability...

  9. Study on modeling of resist heating effect correction in EB mask writer EBM-9000

    Science.gov (United States)

    Nomura, Haruyuki; Kamikubo, Takashi; Suganuma, Mizuna; Kato, Yasuo; Yashima, Jun; Nakayamada, Noriaki; Anze, Hirohito; Ogasawara, Munehiro

    2015-07-01

    The resist heating effect, caused in electron beam lithography by a rise in substrate temperature of a few tens or hundreds of degrees, changes resist sensitivity and leads to degradation of local critical dimension uniformity (LCDU). Increasing the writing pass count and reducing the dose per pass is one way to avoid the resist heating effect, but it worsens writing throughput. As an alternative, NuFlare Technology is developing a heating effect correction system which corrects CD deviation induced by the resist heating effect and mitigates LCDU degradation even in high dose-per-pass conditions. The correction model under development is based on a dose modulation method. Therefore, a conversion equation is needed that modifies the dose according to the CD change caused by the temperature rise. For this purpose, a CD variation model depending on local pattern density was introduced and its validity was confirmed by experiments and temperature simulations. The dose modulation rate, the parameter used in the heating effect correction system, was then defined so as to be ideally independent of the local pattern density, and its actual values were determined from the experimental results for several resist types. The accuracy of the heating effect correction was also discussed. Even when deviations depending on the pattern density remain slightly in the dose modulation rates (i.e., not ideal in practice), the estimated residual errors in the correction are sufficiently small and acceptable for practical 2-pass writing with constant dose modulation rates. These results demonstrate that the CD variation model is effective for the heating effect correction system.

  10. Long-range correlation in synchronization and syncopation tapping: a linear phase correction model.

    Directory of Open Access Journals (Sweden)

    Didier Delignières

    We propose in this paper a model accounting for the increase in long-range correlations observed in asynchrony series in syncopation tapping, as compared with synchronization tapping. Our model is an extension of the linear phase correction model for synchronization tapping. We suppose that the timekeeper represents a fractal source in the system, and that a process of estimation of the half-period of the metronome, obeying random-walk dynamics, combines with the linear phase correction process. Comparing experimental and simulated series, we show that our model accounts for the experimentally observed pattern of serial dependence. This model completes previous modeling solutions proposed for self-paced and synchronization tapping, yielding a unifying framework for event-based timing.
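
    A sketch of the proposed mechanism, with the fractal timekeeper simplified to white noise and all gains invented: linear phase correction of each asynchrony combines with a random-walk estimate of the metronome half-period.

        import numpy as np

        rng = np.random.default_rng(0)
        n, alpha, period = 5000, 0.3, 500.0       # taps, correction gain, metronome (ms)

        # Random-walk estimate of the half-period (the syncopation ingredient):
        half_est = period / 2 + np.cumsum(0.2 * rng.standard_normal(n))

        asyn = np.zeros(n)
        for k in range(n - 1):
            produced = 2 * half_est[k] + 5.0 * rng.standard_normal()   # produced interval
            asyn[k + 1] = (1 - alpha) * asyn[k] + (produced - period)  # phase correction
        print("asynchrony SD (ms):", asyn.std().round(1))

    The drifting half-period estimate is what injects the extra long-range correlation into the asynchrony series relative to plain synchronization tapping.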

  11. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    Science.gov (United States)

    Berry, Tyrus; Harlim, John

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  12. Prediction Uncertainty Analyses for the Combined Physically-Based and Data-Driven Models

    Science.gov (United States)

    Demissie, Y. K.; Valocchi, A. J.; Minsker, B. S.; Bailey, B. A.

    2007-12-01

    The unavoidable simplification associated with physically-based mathematical models can result in biased parameter estimates and correlated model calibration errors, which in turn affect the accuracy of model predictions and the corresponding uncertainty analyses. In this work, a physically-based groundwater model (MODFLOW) together with error-correcting artificial neural networks (ANN) are used in a complementary fashion to obtain an improved prediction (i.e. prediction with reduced bias and error correlation). The associated prediction uncertainty of the coupled MODFLOW-ANN model is then assessed using three alternative methods. The first method estimates the combined model confidence and prediction intervals using first-order least-squares regression approximation theory. The second method uses Monte Carlo and bootstrap techniques for MODFLOW and ANN, respectively, to construct the combined model confidence and prediction intervals. The third method relies on a Bayesian approach that uses analytical or Monte Carlo methods to derive the intervals. The performance of these approaches is compared with Generalized Likelihood Uncertainty Estimation (GLUE) and Calibration-Constrained Monte Carlo (CCMC) intervals of the MODFLOW predictions alone. The results are demonstrated for a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study comprises structural, parameter, and measurement uncertainties. The preliminary results indicate that the proposed three approaches yield comparable confidence and prediction intervals, thus making the computationally efficient first-order least-squares regression approach attractive for estimating the coupled model uncertainty. These results will be compared with GLUE and CCMC results.
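
    The complementary (error-correcting) arrangement can be sketched generically: a biased physics stand-in plus a data-driven model trained on its residuals. Here a tiny linear model stands in for both MODFLOW and the ANN; the data are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 10.0, 200)
        truth = np.sin(t) + 0.1 * t               # "observed" heads
        physics = np.sin(t)                       # physics model missing a trend (bias)

        train = t < 7.0                           # calibrate on the early record
        Adm = np.column_stack([np.ones_like(t), t])
        coef, *_ = np.linalg.lstsq(Adm[train], (truth - physics)[train], rcond=None)
        corrected = physics + Adm @ coef          # physics + learned error correction

        rmse = lambda e: float(np.sqrt((e ** 2).mean()))
        print("RMSE physics-only:", round(rmse((truth - physics)[~train]), 3))
        print("RMSE corrected:   ", round(rmse((truth - corrected)[~train]), 3))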

  13. Two criteria for evaluating risk prediction models.

    Science.gov (United States)

    Pfeiffer, R M; Gail, M H

    2011-09-01

    We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed PCF(q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow up, PNF(p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF(q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF(p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of those two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
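
    Both criteria are direct functionals of the ranked risks, so they take only a few lines to compute; the risk distribution below is synthetic.

        import numpy as np

        def pcf_pnf(risks, cases, q, p):
            """PCF(q): share of cases captured by following the top-q risk fraction.
            PNF(p): population fraction (from the top of the risk ranking down)
            that must be followed to capture a share p of the cases."""
            order = np.argsort(-risks)                      # highest risk first
            cum = np.cumsum(cases[order]) / cases.sum()     # cumulative case share
            n = len(risks)
            pcf = cum[int(np.ceil(q * n)) - 1]
            pnf = (np.searchsorted(cum, p) + 1) / n
            return pcf, pnf

        rng = np.random.default_rng(0)
        risk = rng.beta(2, 8, 10000)                        # model-assigned risks
        case = rng.binomial(1, risk)                        # realized outcomes
        pcf, pnf = pcf_pnf(risk, case, q=0.10, p=0.80)
        print(f"PCF(0.10) = {pcf:.2f}, PNF(0.80) = {pnf:.2f}")

    Plotting the cumulative case share against the followed population fraction traces the Lorenz-type curve the authors connect these criteria to.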

  14. Methods for Handling Missing Variables in Risk Prediction Models

    NARCIS (Netherlands)

    Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.

    2016-01-01

    Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient ...

  15. Thermal hydraulic test for reactor safety system - Critical heat flux experiment and development of prediction models

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Soon Heung; Baek, Won Pil; Yang, Soo Hyung; No, Chang Hyun [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    2000-04-01

    To acquire CHF data through experiments and to develop prediction models, research was conducted. The final objectives of the research are as follows: 1) Production of tube CHF data for low and middle pressure and mass flux, and flow boiling visualization. 2) Modification and suggestion of tube CHF prediction models. 3) Development of a fuel bundle CHF prediction methodology based on tube CHF prediction models. The major results of the research are as follows: 1) Production of CHF data for low and middle pressure and mass flux: acquisition of CHF data (764) for low and middle pressure and flow conditions; analysis of CHF trends based on the CHF data; assessment of existing CHF prediction methods with the CHF data. 2) Modification and suggestion of tube CHF prediction models: development of a unified CHF model applicable over a wide parametric range; development of a threshold length correlation; improvement of the CHF look-up table using the threshold length correlation. 3) Development of a fuel bundle CHF prediction methodology based on tube CHF prediction models: development of a bundle CHF prediction methodology using a correction factor. 11 refs., 134 figs., 25 tabs. (Author)

  16. Comparison of data correction methods for blockage effects in semispan wing model testing

    Directory of Open Access Journals (Sweden)

    Haque Anwar U

    2016-01-01

    Wing-alone models are usually tested in wind tunnels for aerospace applications such as aircraft and hybrid buoyant aircraft. Raw data obtained from such testing are subject to different corrections, such as wall interference, blockage, and offsets in angle of attack, dynamic pressure and free stream velocity. Since the flow is constrained by the wind tunnel walls, special emphasis is required on the limitations of the correction methods for blockage. In the present research work, different aspects of existing correction methods are explored with the help of an example of a straight semi-span wing. Based on the results of analytical relationships of the standard methods, it was found that although multiple variables are involved in the standard methods for the estimation of blockage, they are based on linearized flow theory, such as the source-sink method and the potential flow assumption, which have intrinsic limitations. Based on the computed and estimated experimental results, it is recommended to obtain the corrections by adding the difference between the solid-wall and far-field results to the wind tunnel data. The Computational Fluid Dynamics technique is found to be useful to determine the correction factors for a wing installed at zero spacer height/gap, with and without the tunnel wall.

  17. Comparison of data correction methods for blockage effects in semispan wing model testing

    Science.gov (United States)

    Haque, Anwar U.; Asrar, Waqar; Omar, Ashraf A.; Sulaeman, Erwin; Ali, Mohamed J. S.

    2016-03-01

    Wing-alone models are usually tested in wind tunnels for aerospace applications such as aircraft and hybrid buoyant aircraft. Raw data obtained from such testing are subject to corrections for wall interference, blockage, offset in angle of attack, dynamic pressure, free-stream velocity, etc. Since the flow is constrained by the wind tunnel walls, special emphasis is required on the limitations of the methods used for blockage correction. In the present research work, different aspects of existing correction methods are explored using the example of a straight semi-span wing. Analysis of the standard analytical relationships shows that, although multiple variables are involved in estimating blockage, the standard methods rest on linearized flow theory (e.g., the source-sink method and the potential-flow assumption), which has intrinsic limitations. Based on the computed and measured results, it is recommended to obtain the corrections by adding the difference between the solid-wall and far-field results to the wind tunnel data. Computational Fluid Dynamics is found to be useful for determining the correction factors for a wing installed at zero spacer height/gap, with and without the tunnel wall.
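
    The correction recommended in the two records above amounts to shifting the measured coefficient by the difference between CFD results computed with far-field boundaries and with solid tunnel walls. A minimal sketch, with invented coefficient values:

        # Minimal sketch of the recommended blockage correction: run (or load)
        # CFD results for the same wing with solid tunnel walls and with
        # far-field boundaries, and shift the measured coefficient by their
        # difference. Variable names and values are illustrative only.

        cl_measured = 0.462         # wind tunnel measurement at some angle of attack
        cl_cfd_solid_walls = 0.471  # CFD with the tunnel walls modelled
        cl_cfd_far_field = 0.455    # CFD with far-field (unconfined) boundaries

        blockage_increment = cl_cfd_far_field - cl_cfd_solid_walls
        cl_corrected = cl_measured + blockage_increment
        print(f"corrected CL = {cl_corrected:.3f}")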

  18. The Simulation and Correction to the Brain Deformation Based on the Linear Elastic Model in IGS

    Institute of Scientific and Technical Information of China (English)

    MU Xiao-lan; SONG Zhi-jian

    2004-01-01

    Brain deformation is a vital factor affecting the precision of image-guided surgery (IGS), and simulating and correcting it has recently become a research hotspot. The research groups that first addressed brain deformation with physical models include the Image Processing and Analysis department of Yale University and the Biomedical Modeling Lab of Vanderbilt University. The former uses a linear elastic model; the latter uses a consolidation model. The linear elastic model only needs to be driven by the surface displacement of the exposed brain cortex, which is more convenient to measure in the clinic.

  19. The localization and correction of errors in models: a constraint-based approach

    OpenAIRE

    Piechowiak, S.; Rodriguez, J.

    2005-01-01

    Model-based diagnosis and constraint-based reasoning are well-known generic paradigms for which the most difficult task lies in the construction of the models used. We consider the problem of localizing and correcting errors in a model and present a method to debug one. To help the debugging task, we propose to use the model-based diagnosis solver. This method has been used in a real application: the development of a model of a railway signalling system.

  20. Modelling and experimental validation for off-design performance of the helical heat exchanger with LMTD correction taken into account

    Energy Technology Data Exchange (ETDEWEB)

    Phu, Nguyen Minh; Trinh, Nguyen Thi Minh [Vietnam National University, Ho Chi Minh City (Viet Nam)

    2016-07-15

    Today the helical coil heat exchanger is widely employed due to its dominant advantages. In this study, a mathematical model was established to predict the off-design performance of the helical heat exchanger. The model was based on the LMTD and ε-NTU methods, with an LMTD correction factor taken into account to increase accuracy. An experimental apparatus was set up to validate the model. Results showed that the errors in thermal duty, outlet hot-fluid temperature, outlet cold-fluid temperature, shell-side pressure drop, and tube-side pressure drop were ±5%, ±1%, ±1%, ±5% and ±2%, respectively. Diagrams of dimensionless operating parameters and a regression function were also presented as design maps, a fast calculator for use in the design and operation of the exchanger. The study is expected to be a good tool for estimating off-design conditions of single-phase helical heat exchangers.
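
    The underlying relation is the textbook rate equation Q = U·A·F·LMTD; the paper's contribution is the correction factor F and regression fitted for the helical geometry, which are not reproduced here. A short sketch with assumed values for U, A and F:

        import math

        def lmtd_counterflow(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
            """Log-mean temperature difference for a counter-flow arrangement."""
            dt1 = t_hot_in - t_cold_out
            dt2 = t_hot_out - t_cold_in
            if math.isclose(dt1, dt2):
                return dt1
            return (dt1 - dt2) / math.log(dt1 / dt2)

        # Duty with an LMTD correction factor F. F depends on the geometry;
        # the paper derives it for the helical coil, here it is just assumed.
        U, A, F = 850.0, 2.4, 0.93   # W/m^2K, m^2, dimensionless (assumed values)
        q = U * A * F * lmtd_counterflow(90.0, 60.0, 20.0, 45.0)
        print(f"duty = {q / 1000:.1f} kW")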

  1. The method of soft sensor modeling for fly ash carbon content based on ARMA deviation prediction

    Science.gov (United States)

    Yang, Xiu; Yang, Wei

    2017-03-01

    The carbon content of fly ash is an important parameter in the boiler combustion process. To address the existing problems of fly ash detection, a soft measurement model was established based on PSO-SVM, and a deviation-correction method based on an ARMA model was put forward on this basis: the soft-sensing model is recalibrated at intervals with values obtained by off-line analysis. A 600 MW supercritical sliding-pressure boiler was taken as the research object, the auxiliary variables were selected, and the data collected by the DCS were used for simulation. The results show that the PSO-SVM prediction model for the carbon content of fly ash fits well, and that introducing the correction module helps improve the prediction accuracy.
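
    The two-stage idea, a data-driven predictor plus an ARMA model of its deviations, can be sketched as follows. This uses a plain SVR in place of the PSO-tuned SVM and synthetic data; the hyperparameters and variable choices are assumptions, not those of the study.

        # Sketch of soft sensor + deviation correction: an SVR predicts carbon
        # content from auxiliary variables, and an ARMA model fitted to the
        # residuals of periodic off-line lab analyses supplies a bias correction.
        import numpy as np
        from sklearn.svm import SVR
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 4))                 # auxiliary boiler variables
        y = X @ [0.5, -0.2, 0.1, 0.3] + 2.0 + rng.normal(0, 0.05, 200)

        svm = SVR(C=10.0, epsilon=0.01).fit(X[:150], y[:150])
        residuals = y[150:190] - svm.predict(X[150:190])   # off-line analyses

        arma = ARIMA(residuals, order=(2, 0, 1)).fit()     # ARMA(2,1) deviation model
        bias = arma.forecast(steps=10)                     # predicted deviations
        corrected = svm.predict(X[190:]) + bias            # corrected soft-sensor output
        print(corrected[:3])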

  2. Estimating the magnitude of prediction uncertainties for the APLE model

    Science.gov (United States)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P Loss Estimator (APLE) model ...

  3. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to measure the holdup quantity and location directly, so these must be inferred from measured radiation fields, primarily gamma rays and less frequently neutrons. Current methods to quantify holdup, i.e., Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternative method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such a configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes' Theorem, which resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use ...
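
    The probabilistic resolution of the under-determined inverse problem can be illustrated with a toy discrete version: enumerate a few candidate source configurations, score each with a Gaussian likelihood of the measured count rates under an invented linear forward model, and normalize via Bayes' theorem. All names and numbers below are invented for illustration.

        # Toy discrete Bayesian inversion for holdup: candidate configurations
        # are scored by how well an (invented) forward model reproduces the
        # measured gamma count rates, then normalized via Bayes' theorem.
        import numpy as np

        candidates = {                # hypothetical holdup mass per config (g)
            "elbow_deposit": np.array([120.0, 40.0]),
            "duct_film":     np.array([80.0, 80.0]),
            "valve_plug":    np.array([30.0, 130.0]),
        }
        forward = lambda masses: 0.7 * masses    # counts per gram per detector (assumed)
        measured = np.array([78.0, 51.0])        # detector readings (assumed)
        sigma = 8.0                              # measurement standard deviation
        prior = {k: 1.0 / len(candidates) for k in candidates}

        def likelihood(masses):
            return np.exp(-0.5 * np.sum(((measured - forward(masses)) / sigma) ** 2))

        unnorm = {k: prior[k] * likelihood(m) for k, m in candidates.items()}
        z = sum(unnorm.values())
        posterior = {k: v / z for k, v in unnorm.items()}
        print(posterior)    # plausibility of each configuration given the data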

  4. A Fokker-Planck Model of the Boltzmann Equation with Correct Prandtl Number for Polyatomic Gases

    Science.gov (United States)

    Mathiaud, J.; Mieussens, L.

    2017-09-01

    We propose an extension of the Fokker-Planck model of the Boltzmann equation to obtain a correct Prandtl number in the compressible Navier-Stokes asymptotics for polyatomic gases. This is achieved by replacing the diffusion coefficient (which is the equilibrium temperature) by a non-diagonal temperature tensor, in the same way that the Ellipsoidal-Statistical model is obtained from the Bhatnagar-Gross-Krook model of the Boltzmann equation, and by adding a diffusion term for the internal energy. Our model is proved to satisfy the conservation properties and an H-theorem. A Chapman-Enskog analysis shows how to compute the transport coefficients of our model. Some numerical tests are performed to illustrate that a correct Prandtl number can be obtained.
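
    For orientation, the Ellipsoidal-Statistical mechanism that the abstract refers to can be written compactly; the following is the standard textbook ES-BGK form, not the paper's Fokker-Planck coefficients.

        % Standard ES-BGK construction (textbook form): the scalar temperature
        % T in the relaxation target is replaced by the interpolated tensor
        \mathcal{T} = (1-\nu)\, T\, \mathrm{Id} + \nu\, \Theta,
        \qquad
        \Theta = \frac{1}{\rho} \int (v-u)\otimes(v-u)\, f \,\mathrm{d}v,
        % and the interpolation parameter \nu tunes the Prandtl number:
        \mathrm{Pr} = \frac{1}{1-\nu}, \qquad -\tfrac{1}{2} \le \nu < 1,
        % so that \nu = -1/2 recovers the monatomic value Pr = 2/3.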

  5. A Fokker-Planck Model of the Boltzmann Equation with Correct Prandtl Number for Polyatomic Gases

    Science.gov (United States)

    Mathiaud, J.; Mieussens, L.

    2017-07-01

    We propose an extension of the Fokker-Planck model of the Boltzmann equation to obtain a correct Prandtl number in the compressible Navier-Stokes asymptotics for polyatomic gases. This is achieved by replacing the diffusion coefficient (which is the equilibrium temperature) by a non-diagonal temperature tensor, in the same way that the Ellipsoidal-Statistical model is obtained from the Bhatnagar-Gross-Krook model of the Boltzmann equation, and by adding a diffusion term for the internal energy. Our model is proved to satisfy the conservation properties and an H-theorem. A Chapman-Enskog analysis shows how to compute the transport coefficients of our model. Some numerical tests are performed to illustrate that a correct Prandtl number can be obtained.

  6. Prediction of Catastrophes: an experimental model

    CERN Document Server

    Peters, Randall D; Pomeau, Yves

    2012-01-01

    Catastrophes of all kinds can be roughly defined as short-duration, large-amplitude events following and followed by long periods of "ripening". Major earthquakes surely belong to the class of "catastrophic" events. Because of the space-time scales involved, an experimental approach is often difficult, not to say impossible, however desirable it may be. Described in this article is a "laboratory" setup that yields data of a type amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in two dynamical models. The first is an "abstract" model in which a time-dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for creep). Hope thus exists that similar changes in the response to ...

  7. Predictive modeling of low solubility semiconductor alloys

    Science.gov (United States)

    Rodriguez, Garrett V.; Millunchick, Joanna M.

    2016-09-01

    GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.
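
    The engine behind such a simulation is the standard rejection-free kinetic Monte Carlo step. The sketch below shows the generic event-selection and time-advance logic with an invented event list; the paper's cation/anion reaction catalog and rates are not reproduced.

        # Generic rejection-free kinetic Monte Carlo (BKL/Gillespie) step,
        # standing in for the simulation above; events and rates are placeholders.
        import math, random

        def kmc_step(events, t):
            """events: list of (name, rate); returns (chosen event, new time)."""
            total = sum(rate for _, rate in events)
            r = random.random() * total
            acc = 0.0
            chosen = events[-1][0]                    # fallback for float round-off
            for name, rate in events:                 # pick an event with prob ∝ rate
                acc += rate
                if r < acc:
                    chosen = name
                    break
            dt = -math.log(random.random()) / total   # exponential waiting time
            return chosen, t + dt

        events = [("Ga_adsorb", 2.0), ("Bi_adsorb", 0.5),
                  ("Bi_desorb", 1.2), ("surface_diffusion", 30.0)]
        print(kmc_step(events, 0.0))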

  8. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now within the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising methodologies for the control of such systems. This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear the directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  9. Leptogenesis in minimal predictive seesaw models

    Science.gov (United States)

    Björkeroth, Fredrik; de Anda, Francisco J.; de Medeiros Varzielas, Ivo; King, Stephen F.

    2015-10-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0, 1, 1) and (1, n, n-2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n = 3, while a Z_9 symmetry fixes the relative phase to be a ninth root of unity.

  10. Statistical correction of lidar-derived digital elevation models with multispectral airborne imagery in tidal marshes

    Science.gov (United States)

    Buffington, Kevin J.; Dugger, Bruce D.; Thorne, Karen M.; Takekawa, John

    2016-01-01

    Airborne light detection and ranging (lidar) is a valuable tool for collecting large amounts of elevation data across large areas; however, the limited ability to penetrate dense vegetation with lidar hinders its usefulness for measuring tidal marsh platforms. Methods to correct lidar elevation data are available, but a reliable method that requires limited field work and maintains spatial resolution is lacking. We present a novel method, the Lidar Elevation Adjustment with NDVI (LEAN), to correct lidar digital elevation models (DEMs) with vegetation indices from readily available multispectral airborne imagery (NAIP) and RTK-GPS surveys. Using 17 study sites along the Pacific coast of the U.S., we achieved an average root mean squared error (RMSE) of 0.072 m, with a 40–75% improvement in accuracy from the lidar bare earth DEM. Results from our method compared favorably with results from three other methods (minimum-bin gridding, mean error correction, and vegetation correction factors), and a power analysis applying our extensive RTK-GPS dataset showed that on average 118 points were necessary to calibrate a site-specific correction model for tidal marshes along the Pacific coast. By using available imagery and with minimal field surveys, we showed that lidar-derived DEMs can be adjusted for greater accuracy while maintaining high (1 m) resolution.
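
    The core of the approach, regressing the lidar elevation error against a vegetation index and subtracting the predicted error from the DEM, can be sketched as follows. The regression form and all data here are synthetic assumptions; only the workflow mirrors the record above.

        # Sketch of the LEAN idea: model the lidar DEM error (lidar minus
        # RTK-GPS elevation) as a function of NDVI, then subtract the
        # predicted error from the DEM. Synthetic data; the published
        # method's exact functional form is an assumption here.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        ndvi = rng.uniform(0.2, 0.9, 118)     # ~118 calibration points, as in the study
        error = 0.35 * ndvi + 0.02 + rng.normal(0, 0.03, 118)  # denser vegetation -> more bias

        model = LinearRegression().fit(ndvi.reshape(-1, 1), error)

        dem = rng.uniform(1.0, 2.5, size=(5, 5))        # 1 m lidar DEM tile (m)
        dem_ndvi = rng.uniform(0.2, 0.9, size=(5, 5))   # NDVI co-registered to the DEM
        dem_corrected = dem - model.predict(dem_ndvi.reshape(-1, 1)).reshape(5, 5)
        print(dem_corrected.round(2))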

  11. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminace correction and optimized prediction

    Science.gov (United States)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing a natural, real scene as we see it in the everyday world is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed in which the original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by designing an efficient transform that reduces the redundancy in the stereo image pair, an approach inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and lossy coding. Experimental results show improvements in performance and complexity compared to recently proposed methods.
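
    For readers unfamiliar with lifting, the sketch below shows one plain predict/update step with perfect reconstruction. In the proposed scheme the predict stage would instead be disparity compensation plus luminance correction between the stereo views; here an ordinary neighbour prediction stands in.

        # One lifting step (Haar-like predict/update) on a 1D signal.
        import numpy as np

        def lifting_forward(x):
            even, odd = x[0::2].astype(float), x[1::2].astype(float)
            detail = odd - even            # predict: odd samples from even ones
            approx = even + detail / 2.0   # update: preserve the signal mean
            return approx, detail

        def lifting_inverse(approx, detail):
            even = approx - detail / 2.0
            odd = even + detail
            out = np.empty(even.size + odd.size)
            out[0::2], out[1::2] = even, odd
            return out

        x = np.array([10, 12, 11, 15, 14, 13, 9, 8])
        a, d = lifting_forward(x)
        assert np.allclose(lifting_inverse(a, d), x)   # perfect reconstruction
        print(a, d)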

  12. Improved Model for Depth Bias Correction in Airborne LiDAR Bathymetry Systems

    Directory of Open Access Journals (Sweden)

    Jianhu Zhao

    2017-07-01

    Airborne LiDAR bathymetry (ALB) is efficient and cost-effective in obtaining shallow-water topography, but often produces a low-accuracy sounding solution due to the effects of ALB measurements and ocean hydrological parameters. In bathymetry estimates, peak shifting of the green bottom return caused by pulse stretching induces a depth bias, which is the largest error source in ALB depth measurements. The traditional depth bias model is often applied to reduce this bias, but it is insufficient across the range of ALB system parameters and ocean environments. Therefore, an accurate model that considers all of the influencing factors must be established. In this study, an improved depth bias model is developed through stepwise regression, in consideration of the water depth, laser beam scanning angle, sensor height, and suspended sediment concentration. The proposed improved model and a traditional one are used in an experiment. The results show that the systematic deviation of the depth bias corrected by both models is reduced significantly. Standard deviations of 0.086 and 0.055 m are obtained with the traditional and improved models, respectively. The accuracy of the ALB-derived depth corrected by the improved model is better than that corrected by the traditional model.
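
    A forward stepwise selection over the four predictors named above can be sketched as follows, using adjusted R² as the entry criterion on synthetic data; the paper's actual regression terms are not reproduced.

        # Forward stepwise OLS over the predictors named in the record above.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 300
        X = {"depth": rng.uniform(1, 20, n), "scan_angle": rng.uniform(0, 20, n),
             "sensor_height": rng.uniform(300, 500, n), "ssc": rng.uniform(0, 50, n)}
        bias = 0.02 * X["depth"] + 0.004 * X["ssc"] + rng.normal(0, 0.02, n)

        selected, remaining = [], set(X)
        best_adj_r2 = -np.inf
        while remaining:
            scores = {}
            for cand in remaining:
                cols = np.column_stack([X[v] for v in selected + [cand]])
                fit = sm.OLS(bias, sm.add_constant(cols)).fit()
                scores[cand] = fit.rsquared_adj
            cand = max(scores, key=scores.get)
            if scores[cand] <= best_adj_r2:
                break                          # no candidate improves the fit
            best_adj_r2 = scores[cand]
            selected.append(cand)
            remaining.remove(cand)
        print(selected, round(best_adj_r2, 3))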

  13. Hydrological modeling as an evaluation tool of EURO-CORDEX climate projections and bias correction methods

    Science.gov (United States)

    Hakala, Kirsti; Addor, Nans; Seibert, Jan

    2017-04-01

    Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at 0.11° (≈12.5 km) resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX comprises regional climate model (RCM) simulations, which have been downscaled from global climate models (GCMs) from the CMIP5 archive, using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as an objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating with the genetic algorithm and Powell optimization (GAP) method. The GAP optimization method is based on the evolution of parameter sets, which works by selecting and recombining high-performing parameter sets with each other. Once HBV is calibrated, we perform a quantitative comparison of the influence of biases inherited from climate model simulations with that of biases stemming from the hydrological model. The evaluation is conducted over two time periods: i) 1980-2009, to characterize the simulation realism under the current climate, and ii) 2070-2099, to identify the magnitude of the projected change ...
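
    The objective function itself is compact: the Lindström (1997) measure penalizes model efficiency (NSE) with the relative volume error, conventionally with weight w = 0.1. A minimal sketch, which a genetic-algorithm calibrator such as GAP would maximise over parameter sets:

        # Lindström measure: NSE penalised by the relative volume error.
        import numpy as np

        def lindstrom_measure(q_obs, q_sim, w=0.1):
            q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
            nse = 1.0 - np.sum((q_sim - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)
            rel_volume_error = abs(np.sum(q_sim - q_obs)) / np.sum(q_obs)
            return nse - w * rel_volume_error

        q_obs = [3.2, 4.1, 9.8, 7.5, 5.0, 4.2]   # observed discharge (synthetic)
        q_sim = [3.0, 4.4, 9.1, 7.9, 5.3, 4.0]   # simulated discharge (synthetic)
        print(round(lindstrom_measure(q_obs, q_sim), 3))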

  14. Model-Based Illumination Correction for Face Images in Uncontrolled Scenarios

    NARCIS (Netherlands)

    Boom, Bas; Spreeuwers, Luuk; Veldhuis, Raymond

    2009-01-01

    Face recognition under uncontrolled illumination conditions is a partly unsolved problem. Several illumination correction methods have been proposed, but these are usually tested on illumination conditions created in a laboratory. Our focus is more on uncontrolled conditions. We use the Phong model ...

  15. Terrain correction for gravity measurements, using a digital terrain model (DTM)

    NARCIS (Netherlands)

    Ketelaar, A.C.R.

    1987-01-01

    A single-term expression is given to calculate the gravitational effect of any square vertical prism with a sloping surface. A moderate measure of approximation is involved. The expression is well suited to automatic calculation of the terrain correction when a digital terrain model is available. ...
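
    The record does not give the single-term prism expression itself, so the sketch below uses the cruder point-mass-per-cell approximation simply to show how a terrain correction is accumulated over a DTM grid; the density, cell size and data are all assumed.

        # Very rough terrain correction from a DTM: each cell's mass excess
        # (or deficit) relative to the station elevation is treated as a point
        # mass at the cell centre, mid-height of the column. This stands in
        # for the single-term sloping-prism expression of the record above.
        import numpy as np

        G = 6.674e-11     # m^3 kg^-1 s^-2
        RHO = 2670.0      # standard crustal density, kg m^-3 (assumed)
        CELL = 25.0       # DTM cell size, m (assumed)

        def terrain_correction(dtm, station_xy, station_z):
            ny, nx = dtm.shape
            yy, xx = np.mgrid[0:ny, 0:nx]
            dx = (xx * CELL) - station_xy[0]
            dy = (yy * CELL) - station_xy[1]
            dh = dtm - station_z
            mass = RHO * CELL * CELL * np.abs(dh)          # column mass per cell
            r = np.sqrt(dx**2 + dy**2 + (dh / 2.0) ** 2)   # to column mid-height
            r[r < CELL] = CELL                             # avoid the singularity
            # vertical attraction; hills above and valleys below both add positively
            return np.sum(G * mass * np.abs(dh / 2.0) / r**3) * 1e5   # mGal

        dtm = 500.0 + 40.0 * np.random.default_rng(3).random((40, 40))
        print(f"{terrain_correction(dtm, (500.0, 500.0), 505.0):.4f} mGal")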

  16. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the ...
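
    The paper's rank-based tests are not available in standard libraries, but the conventional likelihood-based (Johansen) counterpart they relax is; the following shows that counterpart on simulated data, for orientation only.

        # Likelihood-based (Johansen) test for the cointegrating rank via
        # statsmodels -- the conventional counterpart of the rank-based tests
        # described above, shown here on simulated data.
        import numpy as np
        from statsmodels.tsa.vector_ar.vecm import select_coint_rank

        rng = np.random.default_rng(4)
        n = 500
        common_trend = np.cumsum(rng.normal(size=n))     # one shared I(1) trend
        y = np.column_stack([
            common_trend + rng.normal(scale=0.5, size=n),
            2.0 * common_trend + rng.normal(scale=0.5, size=n),
            rng.normal(size=n).cumsum(),                 # independent random walk
        ])

        rank_test = select_coint_rank(y, det_order=0, k_ar_diff=1,
                                      method="trace", signif=0.05)
        print(rank_test.rank)    # expected rank 1 for this construction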

  17. Likelihood-Based Cointegration Analysis in Panels of Vector Error Correction Models

    NARCIS (Netherlands)

    J.J.J. Groen (Jan); F.R. Kleibergen (Frank)

    1999-01-01

    We propose in this paper a likelihood-based framework for cointegration analysis in panels of a fixed number of vector error correction models. Maximum likelihood estimators of the cointegrating vectors are constructed using iterated Generalized Method of Moments estimators. Using these ...

  18. Comparing model predictions for ecosystem-based management

    DEFF Research Database (Denmark)

    Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste

    2016-01-01

    Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects ...

  19. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    OpenAIRE

    Pastawski, Fernando; Yoshida, Beni; Harlow, Daniel; Preskill, John

    2015-01-01

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be ...