WorldWideScience

Sample records for models correctly predict

  1. GA-ARMA Model for Predicting IGS RTS Corrections

    Directory of Open Access Journals (Sweden)

    Mingyu Kim

    2017-01-01

    The global navigation satellite system (GNSS) is widely used to estimate user positions. For precise positioning, users should correct for GNSS error components such as satellite orbit and clock errors as well as ionospheric delay. The international GNSS service (IGS) real-time service (RTS) can be used to correct orbit and clock errors in real time. Since the IGS RTS provides real-time corrections via the Internet, intermittent data loss can occur due to software or hardware failures. We propose applying a genetic algorithm autoregressive moving average (GA-ARMA) model to predict the IGS RTS corrections during data loss periods. The RTS orbit and clock corrections are predicted up to 900 s via the GA-ARMA model, and the prediction accuracies are compared with the results from a generic ARMA model. The orbit prediction performance of the GA-ARMA is nearly equivalent to that of ARMA, but GA-ARMA's clock prediction performance is clearly better than that of ARMA, achieving a 32% error reduction. Predicted RTS corrections are applied to the broadcast ephemeris, and precise point positioning accuracies are compared. GA-ARMA shows a significant accuracy improvement over ARMA, particularly in terms of vertical positioning.
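
    A minimal sketch of the generic ARMA part of this idea (without the genetic-algorithm order search): fit an ARMA model to a recent history of clock corrections and extrapolate across a simulated 900 s outage. The series, the (p, q) order and the gap length below are illustrative assumptions, not values from the paper.

```python
# Sketch only: ARMA bridging of a correction-stream outage; GA order tuning omitted.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(3600)                       # 1 h of 1-Hz clock corrections (metres)
corrections = 0.02 * np.sin(2 * np.pi * t / 900) + 0.005 * rng.standard_normal(t.size)

history = corrections[:2700]              # last received corrections before the outage
model = ARIMA(history, order=(2, 0, 1))   # ARMA(2,1); a GA would instead search (p, q)
fit = model.fit()

predicted = fit.forecast(steps=900)       # bridge a 900 s RTS outage
truth = corrections[2700:3600]
print("RMS prediction error [m]:", np.sqrt(np.mean((predicted - truth) ** 2)))
```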

  2. Systematic prediction error correction: a novel strategy for maintaining the predictive abilities of multivariate calibration models.

    Science.gov (United States)

    Chen, Zeng-Ping; Li, Li-Mei; Yu, Ru-Qin; Littlejohn, David; Nordon, Alison; Morris, Julian; Dann, Alison S; Jeffkins, Paul A; Richardson, Mark D; Stimpson, Sarah L

    2011-01-07

    The development of reliable multivariate calibration models for spectroscopic instruments in on-line/in-line monitoring of chemical and bio-chemical processes is generally difficult, time-consuming and costly. Therefore, it is preferable if calibration models can be used for an extended period, without the need to replace them. However, in many process applications, changes in the instrumental response (e.g. owing to a change of spectrometer) or variations in the measurement conditions (e.g. a change in temperature) can cause a multivariate calibration model to become invalid. In this contribution, a new method, systematic prediction error correction (SPEC), has been developed to maintain the predictive abilities of multivariate calibration models when e.g. the spectrometer or measurement conditions are altered. The performance of the method has been tested on two NIR data sets (one with changes in instrumental responses, the other with variations in experimental conditions) and the outcomes compared with those of some popular methods, i.e. global PLS, univariate slope and bias correction (SBC) and piecewise direct standardization (PDS). The results show that SPEC achieves satisfactory analyte predictions with significantly lower RMSEP values than global PLS and SBC for both data sets, even when only a few standardization samples are used. Furthermore, SPEC is simple to implement and requires less information than PDS, which offers advantages for applications with limited data.
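
    For orientation, the following is a hedged sketch of a slope-and-bias style correction of model predictions using a few standardization samples measured under the new conditions; it mirrors the SBC baseline mentioned in the abstract, not the authors' exact SPEC algorithm, and all numbers are invented.

```python
# Generic slope/bias correction of calibration-model predictions (illustrative data).
import numpy as np

def fit_slope_bias(pred_std, ref_std):
    """Least-squares slope and bias between predictions and reference values."""
    A = np.column_stack([pred_std, np.ones_like(pred_std)])
    slope, bias = np.linalg.lstsq(A, ref_std, rcond=None)[0]
    return slope, bias

def correct(pred_new, slope, bias):
    return slope * pred_new + bias

pred_std = np.array([1.9, 3.1, 4.0, 5.2])   # model output for standardization samples
ref_std  = np.array([2.0, 3.0, 4.1, 5.0])   # reference analyte values
slope, bias = fit_slope_bias(pred_std, ref_std)
print(correct(np.array([2.5, 4.6]), slope, bias))
```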

  3. Correct Models

    OpenAIRE

    Blacher, René

    2010-01-01

    This report completes the two previous reports and provides a simpler explanation of the earlier results, namely the proof that the sequences obtained are random. In previous reports, we have shown how to transform a text $y_n$ into a random sequence by using Fibonacci functions $T_q$. Now, in this report, we obtain a clearer result by proving that $T_q(y_n)$ has the IID model as its correct model. However, it is necessary to define correctly what a correct model is. We then also study this pro...

  4. Robust recurrent neural network modeling for software fault detection and correction prediction

    International Nuclear Information System (INIS)

    Hu, Q.P.; Xie, M.; Ng, S.H.; Levitin, G.

    2007-01-01

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, with the fault correction process assumed to be a delayed process. On the other hand, artificial neural network models, as a data-driven approach, try to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic network configuration approach is developed with a genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are made with respect to a real data set.

  5. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    …sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons… In the current work we show how the correction for measurement error in GBSeq can also be applied in whole genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction… for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data…

  6. Wind Power Prediction Based on LS-SVM Model with Error Correction

    Directory of Open Access Journals (Sweden)

    ZHANG, Y.

    2017-02-01

    As conventional energy sources are non-renewable, the world's major countries are investing heavily in renewable energy research. Wind power represents the development trend of future energy, but the intermittency and volatility of wind energy are the main reasons for the poor accuracy of wind power prediction. However, by analyzing the error level at different time points, it can be found that the errors at adjacent times are often approximately the same; therefore, a least squares support vector machine (LS-SVM) model with error correction is used to predict the wind power in this paper. According to simulations on wind power data from two wind farms, the proposed method can effectively improve the prediction accuracy of wind power, and the error distribution is concentrated almost without deviation. The improved method proposed in this paper takes into account the error correction process of the model, which improves the prediction accuracy of the traditional models (RBF, Elman, LS-SVM). Compared with the single LS-SVM prediction model in this paper, the mean absolute error of the proposed method decreased by 52 percent. The research work in this paper will be helpful to the reasonable arrangement of dispatching operation plans, the normal operation of the wind farm, and the large-scale development and full utilization of renewable energy resources.
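
    A hedged sketch of the error-correction idea described above: a base regression forecast (here scikit-learn's SVR standing in for an LS-SVM) is adjusted by the error observed at the previous time step, exploiting the fact that adjacent errors tend to be similar. The data are synthetic and the model settings are assumptions.

```python
# Forecast-plus-previous-error correction on synthetic wind power data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.arange(300)
power = 50 + 20 * np.sin(2 * np.pi * t / 48) + 3 * rng.standard_normal(t.size)

# one-step-ahead forecasting from the two previous observations
X = np.column_stack([power[1:-1], power[:-2]])
y = power[2:]
model = SVR(kernel="rbf", C=100.0).fit(X[:200], y[:200])

raw_pred = model.predict(X[200:])
errors = y[200:] - raw_pred
# correct each forecast with the error made at the previous time step
corrected = raw_pred[1:] + errors[:-1]

mae = lambda a, b: np.mean(np.abs(a - b))
print("MAE raw      :", mae(raw_pred[1:], y[201:]))
print("MAE corrected:", mae(corrected, y[201:]))
```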

  7. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    Science.gov (United States)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations of the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.
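
    A simplified planar sketch of the two-step "wind-off" fit described above: the model weight is first estimated by least squares from gravity loads at several pitch angles, and that estimate is then used to fit the centre-of-gravity offsets from the pitching moments. The force/moment model below is a textbook idealization with invented numbers, not the paper's full three-dimensional formulation.

```python
# Two-step wind-off least squares fit (pitch-only idealization, illustrative values).
import numpy as np

theta = np.deg2rad(np.array([-10.0, -5.0, 0.0, 5.0, 10.0]))  # wind-off pitch angles
W_true, x_cg, z_cg = 250.0, 0.10, 0.02                        # N, m (assumed truth)

N = -W_true * np.cos(theta)                                   # normal force
A = W_true * np.sin(theta)                                    # axial force
M = W_true * (x_cg * np.cos(theta) + z_cg * np.sin(theta))    # pitching moment

# Step 1: least-squares estimate of the model weight from both force components.
basis = np.concatenate([-np.cos(theta), np.sin(theta)])
loads = np.concatenate([N, A])
W_hat = np.linalg.lstsq(basis[:, None], loads, rcond=None)[0][0]

# Step 2: with W_hat fixed, fit the centre-of-gravity coordinates from the moments.
B = W_hat * np.column_stack([np.cos(theta), np.sin(theta)])
x_hat, z_hat = np.linalg.lstsq(B, M, rcond=None)[0]
print(W_hat, x_hat, z_hat)
```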

  8. A Model-Based Temperature-Prediction Method by Temperature-Induced Spectral Variation and Correction of the Temperature Effect.

    Science.gov (United States)

    Yang, Qhi-xiao; Peng, Si-long; Shan, Peng; Bi, Yi-ming; Tang, Liang; Xie, Qiong

    2015-05-01

    In the present paper, a new model-based method was proposed for temperature prediction and correction. First, a temperature prediction model was obtained from training samples; then, the temperatures of the test samples were predicted; and finally, the correction model was used to reduce the nonlinear effects of spectra from temperature variations. Two experiments were used to verify the proposed method, including a water-ethanol mixture experiment and a ternary mixture experiment. The results show that, compared with classic methods such as continuous piecewise direct standardization (CPDS), our method is efficient for temperature correction. Furthermore, the temperatures of the test samples are not necessary in the proposed method, making it easier to use in real applications.

  9. Predictive modeling for corrective maintenance of imaging devices from machine logs.

    Science.gov (United States)

    Patil, Ravindra B; Patil, Meru A; Ravi, Vidya; Naik, Sarif

    2017-07-01

    In the cost-sensitive healthcare industry, unplanned downtime of diagnostic and therapy imaging devices can be a burden on the financials of both the hospitals and the original equipment manufacturers (OEMs). In the current era of connectivity, it is easier to get these devices connected to a standard monitoring station. Once the system is connected, OEMs can monitor the health of these devices remotely and take corrective actions by providing preventive maintenance, thereby avoiding major unplanned downtime. In this article, we present an overall methodology for predicting failure of these devices well before the customer experiences it. We use a data-driven approach based on machine learning to predict failures, in turn resulting in reduced machine downtime, improved customer satisfaction and cost savings for the OEMs. One use-case, predicting component failure of the PHILIPS iXR system, is explained in this article.

  10. A stochastic model correctly predicts changes in budding yeast cell cycle dynamics upon periodic expression of CLN2.

    Directory of Open Access Journals (Sweden)

    Cihan Oguz

    In this study, we focus on a recent stochastic budding yeast cell cycle model. First, we estimate the model parameters using extensive data sets: phenotypes of 110 genetic strains, and single cell statistics of wild type and cln3 strains. Optimization of the stochastic model parameters is achieved by an automated algorithm we recently used for a deterministic cell cycle model. Next, in order to test the predictive ability of the stochastic model, we focus on a recent experimental study in which forced periodic expression of CLN2 cyclin (driven by the MET3 promoter) in a cln3 background has been used to synchronize budding yeast cell colonies. We demonstrate that the model correctly predicts the experimentally observed synchronization levels and cell cycle statistics of mother and daughter cells under various experimental conditions (numerical data that are not enforced in parameter optimization), in addition to correctly predicting the qualitative changes in size control due to forced CLN2 expression. Our model also generates a novel prediction: under frequent CLN2 expression pulses, G1 phase duration is bimodal among small-born cells. These cells originate from daughters with extended budded periods due to size control during the budded period. This novel prediction and the experimental trends captured by the model illustrate the interplay between cell cycle dynamics, synchronization of cell colonies, and size control in budding yeast.

  11. A national prediction model for PM2.5 component exposures and measurement error-corrected health effect inference.

    Science.gov (United States)

    Bergen, Silas; Sheppard, Lianne; Sampson, Paul D; Kim, Sun-Young; Richards, Mark; Vedal, Sverre; Kaufman, Joel D; Szpiro, Adam A

    2013-09-01

    Studies estimating health effects of long-term air pollution exposure often use a two-stage approach: building exposure models to assign individual-level exposures, which are then used in regression analyses. This requires accurate exposure modeling and careful treatment of exposure measurement error. To illustrate the importance of accounting for exposure model characteristics in two-stage air pollution studies, we considered a case study based on data from the Multi-Ethnic Study of Atherosclerosis (MESA). We built national spatial exposure models that used partial least squares and universal kriging to estimate annual average concentrations of four PM2.5 components: elemental carbon (EC), organic carbon (OC), silicon (Si), and sulfur (S). We predicted PM2.5 component exposures for the MESA cohort and estimated cross-sectional associations with carotid intima-media thickness (CIMT), adjusting for subject-specific covariates. We corrected for measurement error using recently developed methods that account for the spatial structure of predicted exposures. Our models performed well, with cross-validated R2 values ranging from 0.62 to 0.95. Naïve analyses that did not account for measurement error indicated statistically significant associations between CIMT and exposure to OC, Si, and S. EC and OC exhibited little spatial correlation, and the corrected inference was unchanged from the naïve analysis. The Si and S exposure surfaces displayed notable spatial correlation, resulting in corrected confidence intervals (CIs) that were 50% wider than the naïve CIs, but that were still statistically significant. The impact of correcting for measurement error on health effect inference is concordant with the degree of spatial correlation in the exposure surfaces. Exposure model characteristics must be considered when performing two-stage air pollution epidemiologic analyses because naïve health effect inference may be inappropriate.

  12. BANKRUPTCY PREDICTION MODEL WITH ZETAc OPTIMAL CUT-OFF SCORE TO CORRECT TYPE I ERRORS

    Directory of Open Access Journals (Sweden)

    Mohamad Iwan

    2005-06-01

    This research has successfully attained the following results: (1) type I error is in fact 59.83 times more costly compared to type II error, (2) 22 ratios distinguish between bankrupt and non-bankrupt groups, (3) 2 financial ratios proved to be effective in predicting bankruptcy, (4) prediction using the ZETAc optimal cut-off score predicts more companies filing for bankruptcy within one year compared to prediction using the Hair et al. optimum cutting score, and (5) although prediction using the Hair et al. optimum cutting score is more accurate, prediction using the ZETAc optimal cut-off score proved to be able to minimize the cost incurred from classification errors.
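
    The following is a hedged sketch of cost-weighted cut-off selection of the kind described: scan candidate cut-off scores and keep the one minimizing total expected cost, with a type I error (missing a bankrupt firm) weighted 59.83 times a type II error. The scores and labels are synthetic; this is not the ZETAc formula itself.

```python
# Cost-minimizing cut-off selection on synthetic discriminant scores.
import numpy as np

rng = np.random.default_rng(2)
scores_bankrupt = rng.normal(-1.0, 1.0, 60)      # lower score = more distressed
scores_healthy  = rng.normal( 1.0, 1.0, 300)
c_type1, c_type2 = 59.83, 1.0

candidates = np.linspace(-4, 4, 801)
costs = []
for c in candidates:
    type1 = np.sum(scores_bankrupt > c)          # bankrupt classified as healthy
    type2 = np.sum(scores_healthy <= c)          # healthy classified as bankrupt
    costs.append(c_type1 * type1 + c_type2 * type2)

best = candidates[int(np.argmin(costs))]
print("cost-minimizing cut-off score:", round(best, 3))
```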

  13. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, considering a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit state based on an idealized mechanical model, to be adapted to the original limit state by the model correction factor. Reliable approximations are obtained by iterative use of gradient information on the original limit state function, analogously to previous response surface approaches. However, the strength of the model correction factor method is that in its simpler form, not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods…

  14. Three-dimensional transport coefficient model and prediction-correction numerical method for thermal margin analysis of PWR cores

    International Nuclear Information System (INIS)

    Chiu, C.

    1981-01-01

    Combustion Engineering Inc. designs its modern PWR reactor cores using open-core thermal-hydraulic methods where the mass, momentum and energy equations are solved in three dimensions (one axial and two lateral directions). The resultant fluid properties are used to compute the minimum Departure from Nucleate Boiling Ratio (DNBR), which ultimately sets the power capability of the core. The on-line digital monitoring and protection systems require a small, fast-running algorithm of the design code. This paper presents two techniques used in the development of the on-line DNB algorithm. First, a three-dimensional transport coefficient model is introduced to radially group the flow subchannels into channels for the thermal-hydraulic fluid property calculation. Conservation equations of mass, momentum and energy for these channels are derived using transport coefficients to modify the calculation of the radial transport of enthalpy and momentum. Second, a simplified, non-iterative numerical method, called the prediction-correction method, is applied together with the transport coefficient model to reduce the computer execution time in the determination of fluid properties. Comparison of the algorithm and the design thermal-hydraulic code shows agreement to within 0.65% equivalent power at a 95/95 confidence/probability level for all normal operating conditions of the PWR core. This algorithm accuracy is achieved with 1/800th of the computer processing time of its parent design code. (orig.)

  15. Innovation in prediction planning for anterior open bite correction.

    Science.gov (United States)

    Almuzian, Mohammed; Almukhtar, Anas; O'Neil, Michael; Benington, Philip; Al Anezi, Thamer; Ayoub, Ashraf

    2015-05-01

    This study applies recent advances in 3D virtual imaging to the prediction planning of dentofacial deformities. Stereo-photogrammetry has been used to create virtual and physical models, which are creatively combined in planning the surgical correction of anterior open bite. The application of these novel methods is demonstrated through the surgical correction of a case.

  16. A predictive model of suitability for minimally invasive parathyroid surgery in the treatment of primary hyperparathyroidism [corrected].

    LENUS (Irish Health Repository)

    Kavanagh, Dara O

    2012-05-01

    Improved preoperative localizing studies have facilitated minimally invasive approaches in the treatment of primary hyperparathyroidism (PHPT). Success depends on the ability to reliably select patients who have PHPT due to single-gland disease. We propose a model encompassing preoperative clinical, biochemical, and imaging studies to predict a patient's suitability for minimally invasive surgery.

  17. An efficiency correction model

    NARCIS (Netherlands)

    Francke, M.K.; de Vos, A.F.

    2009-01-01

    We analyze a dataset containing costs and outputs of 67 American local exchange carriers in a period of 11 years. This data has been used to judge the efficiency of BT and KPN using static stochastic frontier models. We show that these models are dynamically misspecified. As an alternative we

  18. An Empirical Correction Method for Improving off-Axes Response Prediction in Component Type Flight Mechanics Helicopter Models

    Science.gov (United States)

    Mansur, M. Hossein; Tischler, Mark B.

    1997-01-01

    Historically, component-type flight mechanics simulation models of helicopters have been unable to satisfactorily predict the off-axis responses, i.e., the roll response to pitch stick input and the pitch response to roll stick input. In the study presented here, simple first-order low-pass filtering of the elemental lift and drag forces was considered as a means of improving the correlation. The method was applied to a blade-element model of the AH-64 Apache, and responses of the modified model were compared with flight data in hover and forward flight. Results indicate that significant improvement in the off-axis responses can be achieved in hover. In forward flight, however, the best correlation in the longitudinal and lateral off-axis responses required different values of the filter time constant for each axis. A compromise value was selected and was shown to result in good overall improvement in the off-axis responses. The paper describes both the method and the model used for its implementation, and presents results obtained at hover and in forward flight.
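
    A discrete first-order low-pass filter of the kind described, y[n] = y[n-1] + (dt / (tau + dt)) * (x[n] - y[n-1]), applied to an elemental force signal. The time constant and the input step are illustrative, not the values identified for the AH-64.

```python
# First-order low-pass filtering of an elemental lift force (illustrative values).
import numpy as np

def low_pass(x, dt, tau):
    y = np.empty_like(x)
    y[0] = x[0]
    alpha = dt / (tau + dt)
    for n in range(1, len(x)):
        y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
    return y

dt, tau = 0.01, 0.2                      # 100 Hz sample rate, 0.2 s time constant
t = np.arange(0.0, 2.0, dt)
lift = 1000.0 + 50.0 * (t > 0.5)         # step change in an elemental lift force
filtered = low_pass(lift, dt, tau)
print(filtered[45:55].round(1))
```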

  19. Infinite-degree-corrected stochastic block model

    DEFF Research Database (Denmark)

    Herlau, Tue; Schmidt, Mikkel Nørgaard; Mørup, Morten

    2014-01-01

    In stochastic block models, which are among the most prominent statistical models for cluster analysis of complex networks, clusters are defined as groups of nodes with statistically similar link probabilities within and between groups. A recent extension by Karrer and Newman [Karrer and Newman…] … corrected stochastic block model as a nonparametric Bayesian model, incorporating a parameter to control the amount of degree correction that can then be inferred from data. Additionally, our formulation yields principled ways of inferring the number of groups as well as predicting missing links…

  20. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ɛ^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  1. Wall Correction Model for Wind Tunnels with Open Test Section

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Shen, Wen Zhong; Mikkelsen, Robert Flemming

    2004-01-01

    In this paper we present a correction model for wall interference on rotors of wind turbines or propellers in wind tunnels. The model, which is based on a one-dimensional momentum approach, is validated against results from CFD computations using a generalized actuator disc principle. Generally, the corrections from the model are in very good agreement with the CFD computations, demonstrating that one-dimensional momentum theory is a reliable way of predicting corrections for wall interference in wind tunnels with closed as well as open cross sections. Keywords: Wind tunnel correction, momentum theory…

  2. Correcting Memory Improves Accuracy of Predicted Task Duration

    Science.gov (United States)

    Roy, Michael M.; Mitten, Scott T.; Christenfeld, Nicholas J. S.

    2008-01-01

    People are often inaccurate in predicting task duration. The memory bias explanation holds that this error is due to people having incorrect memories of how long previous tasks have taken, and these biased memories cause biased predictions. Therefore, the authors examined the effect on increasing predictive accuracy of correcting memory through…

  3. Using Data Mining Techniques to Investigate Relationships between Observed Aircraft Temperature and Relative Humidity Data for Numerical Weather Prediction Model Bias Correction

    Science.gov (United States)

    MacCracken, R. F.; Collard, A.

    2016-12-01

    There has been a known issue for the past 40+ years that certain types of commercial aircraft temperature sensors will report lower temperatures due to evaporative cooling effects if the sensor gets too wet from flying through cumulus clouds. Over the years, aircraft instrumentation manuals and several studies have investigated this issue and have found that the temperature sensors report temperatures that are closer to the wet bulb temperature instead of the actual temperature, a 1-3°C temperature difference. Most studies are experimental field campaigns, since there are very few moisture observations that are collocated with temperature observations to test and correct for this issue. This issue has begun to be investigated for purposes of bias correction for the Gridpoint Statistical Interpolation (GSI) system/Global Forecast System (GFS) models. NOAA/Environmental Modeling Center (EMC) has received several new aircraft datasets, such as the Water Vapor Sensing System (WVSS) and Tropospheric Airborne Meteorological Data Reporting (TAMDAR) data, which provide the opportunity to investigate collocated temperature and moisture observations. Initial standard correlation analysis shows moderate correlations between high values of specific humidity and temperature. To investigate further relationships, a data mining algorithm will be utilized. A decision tree utilizing the C4.5 algorithm was generated to illustrate the relationships between temperature, moisture, height and winds. Results from this data mining study will be presented.
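
    An illustrative stand-in for the data mining step described above: a CART decision tree from scikit-learn (C4.5 itself is not available there) relating a wet-sensor flag to collocated temperature departures and specific humidity. The feature names, the threshold rule, and the synthetic data are assumptions for demonstration only.

```python
# CART decision tree (stand-in for C4.5) on synthetic collocated aircraft observations.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
n = 2000
q = rng.uniform(0.0, 20.0, n)                     # specific humidity [g/kg]
temp_departure = rng.normal(0.0, 0.7, n)          # obs minus background temperature [K]
# hypothetical rule: very moist air plus a cold departure suggests a wet sensor
wet_sensor = ((q > 12.0) & (temp_departure < -1.0)).astype(int)

X = np.column_stack([q, temp_departure])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, wet_sensor)
print(export_text(tree, feature_names=["specific_humidity", "temp_departure"]))
```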

  4. Temperature-Corrected Model of Turbulence in Hot Jet Flows

    Science.gov (United States)

    Abdol-Hamid, Khaled S.; Pao, S. Paul; Massey, Steven J.; Elmiligui, Alaa

    2007-01-01

    An improved correction has been developed to increase the accuracy with which certain formulations of computational fluid dynamics predict mixing in shear layers of hot jet flows. The CFD formulations in question are those derived from the Reynolds-averaged Navier-Stokes equations closed by means of a two-equation model of turbulence, known as the k-epsilon model, wherein effects of turbulence are summarized by means of an eddy viscosity. The need for a correction arises because it is well known among specialists in CFD that two-equation turbulence models, which were developed and calibrated for room-temperature, low Mach-number, plane-mixing-layer flows, underpredict mixing in shear layers of hot jet flows. The present correction represents an attempt to account for increased mixing that takes place in jet flows characterized by high gradients of total temperature. This correction also incorporates a commonly accepted, previously developed correction for the effect of compressibility on mixing.
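
    For reference, the baseline k-epsilon eddy viscosity is mu_t = rho * C_mu * k^2 / epsilon with C_mu = 0.09; the sketch below multiplies it by a placeholder factor that grows with the local total-temperature gradient. The functional form and constant of that factor are stand-ins, not the published correction.

```python
# Baseline k-epsilon eddy viscosity with an illustrative temperature-gradient factor.
import numpy as np

C_MU = 0.09

def eddy_viscosity(rho, k, eps, dTt_dx=0.0, c_temp=0.0, t_ref=300.0):
    """Eddy viscosity with an optional, purely illustrative total-temperature term."""
    correction = 1.0 + c_temp * abs(dTt_dx) / t_ref
    return rho * C_MU * k**2 / eps * correction

print(eddy_viscosity(rho=0.4, k=900.0, eps=2.0e5))                          # uncorrected
print(eddy_viscosity(rho=0.4, k=900.0, eps=2.0e5, dTt_dx=800.0, c_temp=1.5))
```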

  5. Wall correction model for wind tunnels with open test section

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Shen, Wen Zhong; Mikkelsen, Robert Flemming

    2006-01-01

    In the paper we present a correction model for wall interference on rotors of wind turbines or propellers in wind tunnels. The model, which is based on a one-dimensional momentum approach, is validated against results from CFD computations using a generalized actuator disc principle. In the model the exchange of axial momentum between the tunnel and the ambient room is represented by a simple formula, derived from actuator disc computations. The correction model is validated against Navier-Stokes computations of the flow about a wind turbine rotor. Generally, the corrections from the model are in very good agreement with the CFD computations, demonstrating that one-dimensional momentum theory is a reliable way of predicting corrections for wall interference in wind tunnels with closed as well as open cross sections.

  6. Hypothesis, Prediction, and Conclusion: Using Nature of Science Terminology Correctly

    Science.gov (United States)

    Eastwell, Peter

    2012-01-01

    This paper defines the terms "hypothesis," "prediction," and "conclusion" and shows how to use the terms correctly in scientific investigations in both the school and science education research contexts. The scientific method, or hypothetico-deductive (HD) approach, is described and it is argued that an understanding of the scientific method,…

  7. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimate model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can realize the combination of statistics and dynamics to a certain extent.
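
    A hedged sketch of the synthetic-truth setup described above: the standard Lorenz (1963) system acts as the prediction model, while "reality" adds a periodic term to the x-equation. The forcing amplitude and period are assumptions, and the evolutionary-modelling search that would recover the error term from the data is omitted.

```python
# Lorenz-63 prediction model vs. "reality" with a periodic model-error term.
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz_rhs(state, t, forcing=0.0):
    x, y, z = state
    return np.array([SIGMA * (y - x) + forcing * np.sin(2 * np.pi * t / 5.0),
                     x * (RHO - z) - y,
                     x * y - BETA * z])

def integrate(state0, dt, n_steps, forcing=0.0):
    traj = np.empty((n_steps + 1, 3))
    traj[0] = state0
    for i in range(n_steps):
        t = i * dt
        k1 = lorenz_rhs(traj[i], t, forcing)
        k2 = lorenz_rhs(traj[i] + dt * k1, t + dt, forcing)
        traj[i + 1] = traj[i] + 0.5 * dt * (k1 + k2)   # Heun (predictor-corrector) step
    return traj

state0, dt, n = np.array([1.0, 1.0, 1.0]), 0.01, 500
observations = integrate(state0, dt, n, forcing=2.0)   # "reality" with model error
prediction = integrate(state0, dt, n, forcing=0.0)     # imperfect prediction model
print("RMS error in x after 5 time units:",
      np.sqrt(np.mean((observations[:, 0] - prediction[:, 0]) ** 2)))
```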

  8. A theoretical Markov chain model for evaluating correctional ...

    African Journals Online (AJOL)

    In this paper a stochastic method is applied to the study of the long-term effect of confinement in a correctional institution on the behaviour of a person with criminal tendencies. The approach used is a Markov chain, which uses past history to predict the state of a system in the future. A model is developed for comparing the ...
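
    A minimal Markov chain sketch of the kind of calculation involved: a distribution over behavioural states is propagated forward by repeated multiplication with a transition matrix. The states and probabilities below are invented for illustration, not taken from the paper.

```python
# Propagating a state distribution through a hypothetical behavioural Markov chain.
import numpy as np

states = ["offending", "in custody", "rehabilitated"]
P = np.array([[0.6, 0.3, 0.1],     # row i -> probabilities of moving from state i
              [0.2, 0.5, 0.3],
              [0.1, 0.1, 0.8]])

pi = np.array([1.0, 0.0, 0.0])     # everyone starts in the "offending" state
for year in range(1, 11):
    pi = pi @ P
    print(year, dict(zip(states, pi.round(3))))
```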

  9. Health beliefs affect the correct replacement of daily disposable contact lenses: Predicting compliance with the Health Belief Model and the Theory of Planned Behaviour.

    Science.gov (United States)

    Livi, Stefano; Zeri, Fabrizio; Baroni, Rossella

    2017-02-01

    To assess the compliance of Daily Disposable Contact Lenses (DDCLs) wearers with replacing lenses at the manufacturer-recommended replacement frequency, and to evaluate the ability of two different Health Behavioural Theories (HBT), the Health Belief Model (HBM) and the Theory of Planned Behaviour (TPB), in predicting compliance. A multi-centre survey was conducted using a questionnaire completed anonymously by contact lens wearers during the purchase of DDCLs. Three hundred and fifty-four questionnaires were returned. The survey comprised 58.5% females and 41.5% males (mean age 34±12 years). Twenty-three percent of respondents were non-compliant with the manufacturer-recommended replacement frequency (re-using DDCLs at least once). The main reason for re-using DDCLs was "to save money" (35%). Prediction of compliance behaviour (past behaviour or future intentions) on the basis of the two HBT was investigated through logistic regression analysis: both TPB factors (subjective norms and perceived behavioural control) were significant predictors. Non-compliance with the recommended replacement schedule is widespread, affecting 1 out of 4 Italian wearers. Results from the TPB model show that the involvement of persons socially close to the wearers (subjective norms) and the improvement of the procedure of behavioural control of daily replacement (behavioural control) are of paramount importance in improving compliance. With reference to the HBM, it is important to warn DDCLs wearers of the severity of a contact-lens-related eye infection, and to underline the possibility of its prevention. Copyright © 2016 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  10. An Assessment of Drift Correction Alternatives for CMIP5 Decadal Predictions

    Science.gov (United States)

    Choudhury, Dipayan; Sen Gupta, Alexander; Sharma, Ashish; Mehrotra, Rajeshwar; Sivakumar, Bellie

    2017-10-01

    Drift correction is an important step before using the outputs of decadal prediction experiments and has seen considerable research. However, most drift correction studies consider a relatively small sample of variables and models. Here, we present a systematic application of the existing drift correction strategies for decadal predictions of various sea surface temperature-based metrics from a suite of five state-of-the-art climate models (CanCM4i1, GFDL-CM2.1, HadCM3i2&i3, MIROC5, and MPI-ESM-LR). The best method of drift correction for each metric and model is reported. Preliminary analysis suggests that there is no single method of drift correction that consistently performs best. Initial condition-based drift correction provides the lowest errors in most regions for MIROC5 and the two HadCM3 models, whereas the trend-based drift correction produces lowest errors for CanCM4i1, GFDL-CM2.1, and MPI-ESM-LR over the largest share of the area. There is no merit in using a k-nearest neighbor approach for these drift correction methods. Further, in almost all cases, the multimodel ensemble outperforms the individual models, and thus, the study conclusively suggests using forecasts based on multimodel averages. We also show some additional benefit to be gained by drift correcting each model/metric using their best correction method prior to model averaging and suggest that the results presented here would help potential users expend time and resources judiciously while dealing with outputs from these experiments.
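
    For orientation, the simplest drift correction of this kind removes the lead-time-dependent mean hindcast-minus-observation difference (averaged over start years) from a new forecast. The array shapes and numbers below are illustrative; the paper compares several more elaborate schemes (initial-condition-based, trend-based, multi-model averaging).

```python
# Lead-time-dependent mean drift correction from a synthetic hindcast set.
import numpy as np

rng = np.random.default_rng(4)
n_starts, n_leads = 20, 10                       # hindcast start years x lead years
obs = 0.02 * np.arange(n_leads) + rng.normal(0, 0.1, (n_starts, n_leads))
drift = 0.3 * np.exp(-np.arange(n_leads) / 3.0)  # model drifts towards its own climate
hindcasts = obs + drift + rng.normal(0, 0.1, (n_starts, n_leads))

mean_drift = (hindcasts - obs).mean(axis=0)      # per-lead-time mean bias
new_forecast = obs[-1] + drift                   # a raw forecast carrying the same drift
corrected = new_forecast - mean_drift
print(np.round(mean_drift, 2))
print(np.round(corrected - obs[-1], 2))          # residual error after correction
```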

  11. Causal MRI reconstruction via Kalman prediction and compressed sensing correction.

    Science.gov (United States)

    Majumdar, Angshul

    2017-06-01

    This technical note addresses the problem of causal online reconstruction of dynamic MRI, i.e. given the reconstructed frames till the previous time instant, we reconstruct the frame at the current instant. Our work follows a prediction-correction framework. Given the previous frames, the current frame is predicted based on a Kalman estimate. The difference between the estimate and the current frame is then corrected based on the k-space samples of the current frame; this reconstruction assumes that the difference is sparse. The method is compared against prior Kalman filtering based techniques and Compressed Sensing based techniques. Experimental results show that the proposed method is more accurate than these and considerably faster. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Model correction factor method for system analysis

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Johannesen, Johannes M.

    2000-01-01

    The Model Correction Factor Method is an intelligent response surface method based on simplified modeling. MCFM is aimed at reliability analysis in case of a limit state defined by an elaborate model. Herein it is demonstrated that the method is applicable for elaborate limit state surfaces on which several locally most central points exist without there being a simple geometric definition of the corresponding failure modes, such as is the case for collapse mechanisms in rigid plastic hinge models for frame structures. Taking as simplified idealized model a model of similarity with the elaborate model … surface than existing in the idealized model…

  13. A Blast Wave Model With Viscous Corrections

    Science.gov (United States)

    Yang, Z.; Fries, R. J.

    2017-04-01

    Hadronic observables in the final stage of heavy ion collision can be described well by fluid dynamics or blast wave parameterizations. We improve existing blast wave models by adding shear viscous corrections to the particle distributions in the Navier-Stokes approximation. The specific shear viscosity η/s of a hadron gas at the freeze-out temperature is a new parameter in this model. We extract the blast wave parameters with viscous corrections from experimental data which leads to constraints on the specific shear viscosity at kinetic freeze-out. Preliminary results show η/s is rather small.

  14. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    …refining formal, inductive predictive models is the quality of the archaeological and environmental data. To build models efficiently, relevant … geomorphology, and historic information. Lessons Learned: The original model was focused on the identification of prehistoric resources. This … system but uses predictive modeling informally. For example, there is no probability for buried archaeological deposits on the Burton Mesa, but there is…

  15. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    …only present in melanoma patients and thus were strongly associated with melanoma. The percentage of correctly classified subjects in the LR model was 74.9%, sensitivity 71%, specificity 78.7% and AUC 0.805. For the ADT, the percentage of correctly classified instances was 71.9%, sensitivity 71.9%, specificity 79.4% and AUC 0.808. Conclusion. Application of different models for risk assessment and prediction of melanoma should provide an efficient and standardized tool in the hands of clinicians. The presented models offer effective discrimination of individuals at high risk, transparent decision making and real-time implementation suitable for clinical practice. Continuous melanoma database growth would allow for further adjustments and enhancements in model accuracy, as well as offering a possibility for successful application of more advanced data mining algorithms.

  16. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  17. Bias-correction in vector autoregressive models

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    2014-01-01

    We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study...
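
    A hedged scalar sketch of what analytical bias correction does in the AR(1) case, the one-variable analogue of the VAR setting above: the OLS estimate of the autoregressive coefficient is biased downwards in small samples, approximately by (1 + 3*rho)/T (the classical first-order result), so the corrected estimate adds that term back. The simulation settings are illustrative.

```python
# Analytical bias correction for a scalar AR(1) coefficient (Monte Carlo check).
import numpy as np

rng = np.random.default_rng(5)
rho_true, T, n_rep = 0.9, 60, 5000
raw, corrected = [], []
for _ in range(n_rep):
    e = rng.standard_normal(T)
    y = np.empty(T)
    y[0] = e[0]
    for t in range(1, T):
        y[t] = rho_true * y[t - 1] + e[t]
    rho_hat = np.polyfit(y[:-1], y[1:], 1)[0]    # OLS slope (with intercept)
    raw.append(rho_hat)
    corrected.append(rho_hat + (1 + 3 * rho_hat) / T)

print("mean raw estimate      :", round(np.mean(raw), 3))
print("mean corrected estimate:", round(np.mean(corrected), 3))
```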

  18. Evaluation of multiple protein docking structures using correctly predicted pairwise subunits

    Directory of Open Access Journals (Sweden)

    Esquivel-Rodríguez Juan

    2012-03-01

    Background: Many functionally important proteins in a cell form complexes with multiple chains. Therefore, computational prediction of multiple protein complexes is an important task in bioinformatics. In the development of multiple protein docking methods, it is important to establish a metric for evaluating prediction results in a reasonable and practical fashion. However, since there are only a few works done in developing methods for multiple protein docking, there is no study that investigates how accurate structural models of multiple protein complexes should be to allow scientists to gain biological insights. Methods: We generated a series of predicted models (decoys) of various accuracies by our multiple protein docking pipeline, Multi-LZerD, for three multi-chain complexes with 3, 4, and 6 chains. We analyzed the decoys in terms of the number of correctly predicted pair conformations in the decoys. Results and conclusion: We found that pairs of chains with the correct mutual orientation exist even in the decoys with a large overall root mean square deviation (RMSD) to the native. Therefore, in addition to a global structure similarity measure, such as the global RMSD, the quality of models for multiple chain complexes can be better evaluated by using the local measurement, the number of chain pairs with correct mutual orientation. We termed the fraction of correctly predicted pairs (RMSD at the interface of less than 4.0 Å) as fpair and propose to use it for evaluation of the accuracy of multiple protein docking.
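
    Computing the proposed fpair measure is straightforward: it is the fraction of chain pairs in a predicted multi-chain model whose interface RMSD to the native complex is below 4.0 Å. The pairwise RMSD values in the sketch below are made up for illustration.

```python
# f_pair: fraction of chain pairs with correct mutual orientation (interface RMSD < cutoff).
def f_pair(interface_rmsds, cutoff=4.0):
    return sum(r < cutoff for r in interface_rmsds) / len(interface_rmsds)

# e.g. a 4-chain decoy has 6 chain pairs; three of them are near-native
rmsds = [1.8, 3.2, 3.9, 7.5, 12.0, 25.4]
print(f_pair(rmsds))   # -> 0.5
```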

  19. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  20. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  1. Multi-step-ahead Method for Wind Speed Prediction Correction Based on Numerical Weather Prediction and Historical Measurement Data

    Science.gov (United States)

    Wang, Han; Yan, Jie; Liu, Yongqian; Han, Shuang; Li, Li; Zhao, Jing

    2017-11-01

    Increasing the accuracy of wind speed prediction lays a solid foundation for reliable wind power forecasting. Most traditional correction methods for wind speed prediction establish the mapping relationship between the wind speed of the numerical weather prediction (NWP) and the historical measurement data (HMD) at the corresponding time slot, which ignores the time-dependent behaviour of the wind speed time series. In this paper, a multi-step-ahead wind speed prediction correction method is proposed that takes into account the carry-over effects of wind speed at the previous time slot. To this end, the proposed method employs both NWP and HMD as model inputs and training labels. First, a probabilistic analysis of the NWP deviation for different wind speed bins is presented to illustrate the inadequacy of the traditional time-independent mapping strategy. Then, a support vector machine (SVM) is used as an example to implement the proposed mapping strategy and to establish the correction model for all the wind speed bins. A wind farm in northern China is taken as an example to validate the proposed method. Three benchmark methods of wind speed prediction are used to compare the performance. The results show that the proposed model has the best performance under different time horizons.
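
    A hedged sketch of the proposed mapping strategy, with scikit-learn's SVR standing in for the paper's SVM: the corrected wind speed at time t is regressed on both the NWP wind speed at t and the measured wind speed at t-1, rather than on the NWP value alone. All data and settings below are synthetic assumptions.

```python
# Correcting NWP wind speed with the previous measurement as an extra regressor.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)
n = 1000
obs = 8 + 3 * np.sin(2 * np.pi * np.arange(n) / 144) + rng.normal(0, 1.0, n)
nwp = obs + 1.5 + rng.normal(0, 1.5, n)            # biased, noisier NWP wind speed

X = np.column_stack([nwp[1:], obs[:-1]])           # [NWP(t), measured(t-1)]
y = obs[1:]
split = 700
model = SVR(kernel="rbf", C=10.0).fit(X[:split], y[:split])

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print("RMSE raw NWP      :", round(rmse(nwp[1:][split:], y[split:]), 2))
print("RMSE corrected SVR:", round(rmse(model.predict(X[split:]), y[split:]), 2))
```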

  2. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Danish utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models.

  3. Corrections to the free-nucleon values of the single-particle matrix elements of the M1 and Gamow-Teller operators, from a comparison of shell-model predictions with sd-shell data

    International Nuclear Information System (INIS)

    Brown, B.A.; Wildenthal, B.H.

    1983-01-01

    The magnetic dipole moments of states in mirror pairs of the sd-shell nuclei and the strengths of the Gamow-Teller beta decays which connect them are compared with predictions based on mixed-configuration shell-model wave functions. From this analysis we extract the average effective values of the single-particle matrix elements of the l, s, and [Y^(2) x s]^(1) components of the M1 and Gamow-Teller operators acting on nucleons in the 0d5/2, 1s1/2, and 0d3/2 orbits. These results are compared with the recent calculations by Towner and Khanna of the corrections to the free-nucleon values of these matrix elements which arise from the effects of isobar currents, mesonic-exchange currents, and mixing with configurations outside the sd shell.

  4. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. Team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, but the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  5. Accurate prediction of adsorption energies on graphene, using a dispersion-corrected semiempirical method including solvation.

    Science.gov (United States)

    Vincent, Mark A; Hillier, Ian H

    2014-08-25

    The accurate prediction of the adsorption energies of unsaturated molecules on graphene in the presence of water is essential for the design of molecules that can modify its properties and that can aid its processability. We here show that a semiempirical MO method corrected for dispersive interactions (PM6-DH2) can predict the adsorption energies of unsaturated hydrocarbons and the effect of substitution on these values to an accuracy comparable to DFT values and in good agreement with the experiment. The adsorption energies of TCNE, TCNQ, and a number of sulfonated pyrenes are also predicted, along with the effect of hydration using the COSMO model.

  6. "When does making detailed predictions make predictions worse?": Correction to Kelly and Simmons (2016).

    Science.gov (United States)

    2016-10-01

    Reports an error in "When Does Making Detailed Predictions Make Predictions Worse" by Theresa F. Kelly and Joseph P. Simmons ( Journal of Experimental Psychology: General , Advanced Online Publication, Aug 8, 2016, np). In the article, the symbols in Figure 2 were inadvertently altered in production. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-37952-001.) In this article, we investigate whether making detailed predictions about an event worsens other predictions of the event. Across 19 experiments, 10,896 participants, and 407,045 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes useless or redundant information more accessible and thus more likely to be incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of events will and will not be susceptible to the negative effect of making detailed predictions. PsycINFO Database Record (c) 2016 APA, all rights reserved

  7. Multi-scheme corrected dynamic-analogue prediction of summer precipitation in northeastern China based on BCC_CSM

    Science.gov (United States)

    Fang, Yihe; Chen, Haishan; Gong, Zhiqiang; Xu, Fangshu; Zhao, Chunyu

    2017-12-01

    Based on summer precipitation hindcasts for 1991-2013 produced by the Beijing Climate Center Climate System Model (BCC_CSM), the relationship between precipitation prediction error in northeastern China (NEC) and global sea surface temperature is analyzed, and a dynamic-analogue prediction is carried out to improve the summer precipitation prediction skill of BCC_CSM by taking account of the model's historical analogue prediction errors in the real-time output. Seven correction schemes, including systematic bias correction, pure statistical correction and dynamic-analogue correction, are designed and compared. Independent hindcast results show that the 5-yr average anomaly correlation coefficient (ACC) of summer precipitation is improved from -0.13/0.15 to 0.16/0.24 for 2009-13/1991-95 when using the equally weighted dynamic-analogue correction in the BCC_CSM prediction, which takes the arithmetical mean of the correction based on the regional average error and that based on grid point error. In addition, probabilistic prediction using the results from the multiple correction schemes is also performed and leads to further improved 5-yr average prediction accuracy.

  8. The similarity principle - on using models correctly

    DEFF Research Database (Denmark)

    Landberg, L.; Mortensen, N.G.; Rathmann, O.

    2003-01-01

    This paper will present some guiding principles on the most accurate use of the WAsP program in particular, but the principle can be applied to the use of any linear model which predicts some quantity at one location based on another. We have felt a need to lay these principles out explicitly, due to the many, many users and uses (and misuses) of the WAsP program. Put simply, the similarity principle states that one should choose a predictor site which – in as many ways as possible – is similar to the predicted site.

  9. A study on predicting network corrections in PPP-RTK processing

    Science.gov (United States)

    Wang, Kan; Khodabandeh, Amir; Teunissen, Peter

    2017-10-01

    In PPP-RTK processing, the network corrections, including the satellite clocks, the satellite phase biases and the ionospheric delays, are provided to the users to enable fast single-receiver integer ambiguity resolution. To solve the rank deficiencies in the undifferenced observation equations, estimable parameters are formed to generate a full-rank design matrix. In this contribution, we first discuss the interpretation of the estimable parameters without and with a dynamic satellite clock model incorporated in a Kalman filter during the network processing. The functionality of the dynamic satellite clock model is tested in the PPP-RTK processing. Due to the latency generated by the network processing and data transfer, the network corrections are delayed for the real-time user processing. To bridge the latencies, we discuss and compare two prediction approaches making use of the network corrections without and with the dynamic satellite clock model, respectively. The first prediction approach is based on polynomial fitting of the estimated network parameters, while the second approach directly follows the dynamic model in the Kalman filter of the network processing and utilises the satellite clock drifts estimated in the network processing. Using 1 Hz data from two networks in Australia, the influences of the two prediction approaches on the user positioning results are analysed and compared for latencies ranging from 3 to 10 s. The accuracy of the positioning results decreases with increasing latency of the network products. For a latency of 3 s, the RMS of the horizontal and the vertical coordinates (with respect to the ground truth) does not show large differences between the two prediction approaches. For a latency of 10 s, the prediction approach making use of the satellite clock model generates slightly better positioning results, with differences in the RMS at the mm level. Further advantages and disadvantages of both prediction approaches are
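
    A hedged sketch contrasting the two ways of bridging network latency discussed above: (1) extrapolating a low-order polynomial fitted to the recent clock-correction estimates, and (2) a linear prediction from the last estimate plus the clock drift estimated in the network filter. The correction series, drift value and latencies below are synthetic.

```python
# Two simple latency-bridging predictions for a satellite clock correction stream.
import numpy as np

rng = np.random.default_rng(7)
t_hist = np.arange(0.0, 30.0, 1.0)                       # last 30 s of 1 Hz estimates
drift_true = 2.0e-3                                      # assumed clock drift [m/s]
clock_corr = 0.05 + drift_true * t_hist + 2e-4 * rng.standard_normal(t_hist.size)

latency = np.arange(1.0, 11.0)                           # predict 1-10 s ahead
truth = 0.05 + drift_true * (30.0 + latency)

# Approach 1: fit a 2nd-order polynomial to the history and extrapolate.
poly = np.polyfit(t_hist, clock_corr, deg=2)
pred_poly = np.polyval(poly, 30.0 + latency)

# Approach 2: last estimate plus estimated drift times the latency.
drift_est = 1.9e-3                                       # drift from the network filter
pred_drift = clock_corr[-1] + drift_est * latency

print(np.round(np.abs(pred_poly - truth), 4))
print(np.round(np.abs(pred_drift - truth), 4))
```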

  10. Heat transfer corrected isothermal model for devolatilization of thermally-thick biomass particles

    DEFF Research Database (Denmark)

    Luo, Hao; Wu, Hao; Lin, Weigang

    The isothermal model used in current computational fluid dynamics (CFD) models neglects internal heat transfer during biomass devolatilization. This assumption is not reasonable for thermally-thick particles. To solve this issue, a heat-transfer-corrected isothermal model is introduced. In this model, two correction coefficients, HT-correction of heat transfer and HR-correction of reaction, are defined to cover the effects of internal heat transfer. A series of single-particle biomass devolatilization cases have been modeled to validate this model. The results show that the devolatilization behaviors of both thermally-thick and thermally-thin particles are predicted reasonably by the heat-transfer-corrected model, while the isothermal model overestimates the devolatilization rate and heating rate for thermally-thick particles. This model probably has better performance than the isothermal model when it is coupled...

  11. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  12. "The next Big Five Inventory (BFI-2): Developing and assessing a hierarchical model with 15 facets to enhance bandwidth, fidelity, and predictive power": Correction to Soto and John (2016).

    Science.gov (United States)

    2017-07-01

    Reports an error in "The Next Big Five Inventory (BFI-2): Developing and Assessing a Hierarchical Model With 15 Facets to Enhance Bandwidth, Fidelity, and Predictive Power" by Christopher J. Soto and Oliver P. John ( Journal of Personality and Social Psychology , Advanced Online Publication, Apr 7, 2016, np). In the article, all citations to McCrae and Costa (2008), except for the instance in which it appears in the first paragraph of the introduction, should instead appear as McCrae and Costa (2010). The complete citation should read as follows: McCrae, R. R., & Costa, P. T. (2010). NEO Inventories professional manual. Lutz, FL: Psychological Assessment Resources. The attribution to the BFI-2 items that appears in the Table 6 note should read as follows: BFI-2 items adapted from "Conceptualization, Development, and Initial Validation of the Big Five Inventory-2," by C. J. Soto and O. P. John, 2015, Paper presented at the biennial meeting of the Association for Research in Personality. Copyright 2015 by Oliver P. John and Christopher J. Soto. The complete citation in the References list should appear as follows: Soto, C. J., & John, O. P. (2015, June). Conceptualization, development, and initial validation of the Big Five Inventory-2. Paper presented at the biennial meeting of the Association for Research in Personality, St. Louis, MO. Available from http://www.colby.edu/psych/personality-lab/ All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-17156-001.) Three studies were conducted to develop and validate the Big Five Inventory-2 (BFI-2), a major revision of the Big Five Inventory (BFI). Study 1 specified a hierarchical model of personality structure with 15 facet traits nested within the Big Five domains, and developed a preliminary item pool to measure this structure. Study 2 used conceptual and empirical criteria to construct the BFI-2 domain and facet scales from the preliminary item pool

  13. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  14. Predicting the sparticle spectrum from GUTs via SUSY threshold corrections with SusyTC

    Energy Technology Data Exchange (ETDEWEB)

    Antusch, Stefan [Department of Physics, University of Basel,Klingelbergstr. 82, CH-4056 Basel (Switzerland); Max-Planck-Institut für Physik (Werner-Heisenberg-Institut),Föhringer Ring 6, D-80805 München (Germany); Sluka, Constantin [Department of Physics, University of Basel,Klingelbergstr. 82, CH-4056 Basel (Switzerland)

    2016-07-21

    Grand Unified Theories (GUTs) can feature predictions for the ratios of quark and lepton Yukawa couplings at high energy, which can be tested with the increasingly precise results for the fermion masses, given at low energies. To perform such tests, the renormalization group (RG) running has to be performed with sufficient accuracy. In supersymmetric (SUSY) theories, the one-loop threshold corrections (TC) are of particular importance and, since they affect the quark-lepton mass relations, link a given GUT flavour model to the sparticle spectrum. To accurately study such predictions, we extend and generalize various formulas in the literature which are needed for a precision analysis of SUSY flavour GUT models. We introduce the new software tool SusyTC, a major extension to the Mathematica package REAP http://dx.doi.org/10.1088/1126-6708/2005/03/024, where these formulas are implemented. SusyTC extends the functionality of REAP by a full inclusion of the (complex) MSSM SUSY sector and a careful calculation of the one-loop SUSY threshold corrections for the full down-type quark, up-type quark and charged lepton Yukawa coupling matrices in the electroweak-unbroken phase. Among other useful features, SusyTC calculates the one-loop corrected pole mass of the charged (or the CP-odd) Higgs boson as well as provides output in SLHA conventions, i.e. the necessary input for external software, e.g. for performing a two-loop Higgs mass calculation. We apply SusyTC to study the predictions for the parameters of the CMSSM (mSUGRA) SUSY scenario from the set of GUT scale Yukawa relations y_e/y_d = −1/2, y_μ/y_s = 6, and y_τ/y_b = −3/2, which has been proposed recently in the context of SUSY GUT flavour models.

  15. Improved Blood Pressure Prediction Using Systolic Flow Correction of Pulse Wave Velocity.

    Science.gov (United States)

    Lillie, Jeffrey S; Liberson, Alexander S; Borkholder, David A

    2016-12-01

    Hypertension is a significant worldwide health issue. Continuous blood pressure monitoring is important for early detection of hypertension, and for improving treatment efficacy and compliance. Pulse wave velocity (PWV) has the potential to allow for a continuous blood pressure monitoring device; however published studies demonstrate significant variability in this correlation. In a recently presented physics-based mathematical model of PWV, flow velocity is additive to the classic pressure wave as estimated by arterial material properties, suggesting flow velocity correction may be important for cuff-less non-invasive blood pressure measures. The present study examined the impact of systolic flow correction of a measured PWV on blood pressure prediction accuracy using data from two published in vivo studies. Both studies examined the relationship between PWV and blood pressure under pharmacological manipulation, one in mongrel dogs and the other in healthy adult males. Systolic flow correction of the measured PWV improves the R^2 correlation to blood pressure from 0.51 to 0.75 for the mongrel dog study, and 0.05 to 0.70 for the human subjects study. The results support the hypothesis that systolic flow correction is an essential element of non-invasive, cuff-less blood pressure estimation based on PWV measures.
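    A minimal sketch of the idea, not the paper's actual model: the systolic flow velocity is subtracted from the measured PWV before regressing against blood pressure, and the R^2 values are compared. All numbers below are synthetic and hypothetical.

```python
import numpy as np

# Hypothetical paired measurements: measured PWV (m/s), systolic flow
# velocity (m/s), and reference systolic blood pressure (mmHg).
pwv  = np.array([6.1, 6.8, 7.4, 8.0, 8.9, 9.5])
flow = np.array([0.9, 1.2, 1.0, 1.4, 1.1, 1.5])
sbp  = np.array([112., 121., 128., 136., 148., 158.])

def r_squared(x, y):
    """Coefficient of determination of a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

# Classic correlation vs. the flow-corrected wave speed (measured PWV minus
# the systolic flow contribution), following the additive model.
print("R^2 uncorrected:", r_squared(pwv, sbp))
print("R^2 corrected:  ", r_squared(pwv - flow, sbp))
```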

  16. [Study on temperature correctional models of quantitative analysis with near infrared spectroscopy].

    Science.gov (United States)

    Zhang, Jun; Chen, Hua-cai; Chen, Xing-dan

    2005-06-01

    The effect of environment temperature on near infrared spectroscopic quantitative analysis was studied. The temperature correction model was calibrated with 45 wheat samples at different environment temperatures, with the temperature included as an external variable. The constant temperature model was calibrated with 45 wheat samples at the same temperature. The predictions of the two models for the protein content of wheat samples at different temperatures were compared. The results showed that the mean standard error of prediction (SEP) of the temperature correction model was 0.333, whereas the SEP of the constant temperature (22 degrees C) model increased as the temperature difference grew, reaching 0.602 when the model was used at 4 degrees C. It was suggested that the temperature correction model improves the analysis precision.
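    The abstract does not name the multivariate calibration method, so the sketch below assumes a PLS calibration with the temperature appended as an external variable, compared against a spectra-only model. The spectra, temperatures and protein values are synthetic, and the SEP is computed on the calibration data for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 45, 100
protein = rng.uniform(9.0, 16.0, n_samples)          # reference protein (%)
temp = rng.choice([4.0, 22.0, 35.0], n_samples)      # measurement temperature

# Synthetic NIR spectra: protein signal plus a temperature-dependent baseline
base = rng.normal(0.0, 0.01, (n_samples, n_wavelengths))
spectra = base + protein[:, None] * 0.02 + (temp[:, None] - 22.0) * 0.005

# Constant-temperature model: spectra only
pls_const = PLSRegression(n_components=5).fit(spectra, protein)

# Temperature-corrected model: temperature appended as an external variable
X_temp = np.hstack([spectra, temp[:, None]])
pls_temp = PLSRegression(n_components=5).fit(X_temp, protein)

def sep(model, X, y):
    """Root-mean-square prediction error (here on the calibration set)."""
    return float(np.sqrt(np.mean((model.predict(X).ravel() - y) ** 2)))

print("SEP constant-T model :", sep(pls_const, spectra, protein))
print("SEP temperature model:", sep(pls_temp, X_temp, protein))
```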

  17. IMPACT OF DIFFERENT TOPOGRAPHIC CORRECTIONS ON PREDICTION ACCURACY OF FOLIAGE PROJECTIVE COVER (FPC) IN A TOPOGRAPHICALLY COMPLEX TERRAIN

    Directory of Open Access Journals (Sweden)

    S. Ediriweera

    2012-07-01

    Full Text Available Quantitative retrieval of land surface biological parameters (e.g. foliage projective cover [FPC] and Leaf Area Index) is crucial for forest management, ecosystem modelling, and global change monitoring applications. Currently, remote sensing is a widely adopted method for rapid estimation of surface biological parameters at a landscape scale. Topographic correction is a necessary pre-processing step in remote sensing applications for topographically complex terrain. Selection of a suitable topographic correction method for remotely sensed spectral information is still an unresolved problem. The purpose of this study is to assess the impact of topographic corrections on the prediction of FPC in hilly terrain using an established regression model. Five established topographic corrections [C, Minnaert, SCS, SCS+C and the processing scheme for standardised surface reflectance (PSSSR)] were evaluated on Landsat TM5 data acquired under low and high sun angles in closed-canopied subtropical rainforest and eucalyptus-dominated open-canopied forest, north-eastern Australia. The effectiveness of the methods at normalizing topographic influence, preserving biophysical spectral information, and internal data variability was assessed by statistical analysis and by comparison with field-collected FPC data. The results of the statistical analyses show that SCS+C and PSSSR perform significantly better than the other corrections, with less overcorrection of faintly illuminated slopes. However, the best relationship between FPC and Landsat spectral responses was obtained with PSSSR, which produced the least residual error. The SCS correction method performed poorly at correcting the topographic effect when predicting FPC in topographically complex terrain.
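    Two of the corrections named above, C and SCS+C, have compact per-pixel formulas that are standard in the literature; the study's exact implementation may differ. The sketch below uses a hypothetical 2x2 reflectance patch with assumed slope, aspect and sun geometry, purely to show how the empirical c term and the corrections are computed.

```python
import numpy as np

def illumination(slope, aspect, sun_zenith, sun_azimuth):
    """Per-pixel illumination cos(i) from terrain and sun geometry (radians)."""
    return (np.cos(sun_zenith) * np.cos(slope) +
            np.sin(sun_zenith) * np.sin(slope) * np.cos(sun_azimuth - aspect))

def c_correction(band, cos_i, sun_zenith):
    """C correction: c is the intercept/slope ratio of the band vs. cos(i) fit."""
    m, b = np.polyfit(cos_i.ravel(), band.ravel(), 1)
    c = b / m
    return band * (np.cos(sun_zenith) + c) / (cos_i + c)

def scs_c_correction(band, cos_i, slope, sun_zenith):
    """SCS+C correction: sun-canopy-sensor geometry with the same c term."""
    m, b = np.polyfit(cos_i.ravel(), band.ravel(), 1)
    c = b / m
    return band * (np.cos(sun_zenith) * np.cos(slope) + c) / (cos_i + c)

# Hypothetical 2x2 reflectance patch with its slope/aspect grids (radians)
band = np.array([[0.18, 0.22], [0.15, 0.25]])
slope = np.radians([[12.0, 25.0], [30.0, 5.0]])
aspect = np.radians([[90.0, 180.0], [270.0, 45.0]])
cos_i = illumination(slope, aspect, np.radians(55.0), np.radians(130.0))
print(c_correction(band, cos_i, np.radians(55.0)))
print(scs_c_correction(band, cos_i, slope, np.radians(55.0)))
```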

  18. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  19. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  20. Holographic p-wave superconductor models with Weyl corrections

    Directory of Open Access Journals (Sweden)

    Lu Zhang

    2015-04-01

    Full Text Available We study the effect of the Weyl corrections on the holographic p-wave dual models in the backgrounds of AdS soliton and AdS black hole via a Maxwell complex vector field model by using the numerical and analytical methods. We find that, in the soliton background, the Weyl corrections do not influence the properties of the holographic p-wave insulator/superconductor phase transition, which is different from that of the Yang–Mills theory. However, in the black hole background, we observe that similarly to the Weyl correction effects in the Yang–Mills theory, the higher Weyl corrections make it easier for the p-wave metal/superconductor phase transition to be triggered, which shows that these two p-wave models with Weyl corrections share some similar features for the condensation of the vector operator.

  1. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation......, then rival strategies can still be compared based on repeated bootstraps of the same data. Often, however, the overall performance of rival strategies is similar and it is thus difficult to decide for one model. Here, we investigate the variability of the prediction models that results when the same...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  2. Mass corrections to Green functions in instanton vacuum model

    International Nuclear Information System (INIS)

    Esaibegyan, S.V.; Tamaryan, S.N.

    1987-01-01

    The first nonvanishing mass corrections to the effective Green functions are calculated in the model of instanton-based vacuum consisting of a superposition of instanton-antiinstanton fluctuations. The meson current correlators are calculated with account of these corrections; the mass spectrum of pseudoscalar octet as well as the value of the kaon axial constant are found. 7 refs

  3. A two-dimensional matrix correction for off-axis portal dose prediction errors

    International Nuclear Information System (INIS)

    Bailey, Daniel W.; Kumaraswamy, Lalith; Bakhtiari, Mohammad; Podgorsak, Matthew B.

    2013-01-01

    Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. [“An effective correction algorithm for off-axis portal dosimetry errors,” Med. Phys. 36, 4089–4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone. As
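    The core of the 2D matrix correction lends itself to a short sketch: a per-pixel correction matrix is derived from the element-wise ratio of measured to predicted calibration images spanning the detector, and then applied to subsequent images. Whether the matrix rescales the measured or the predicted images is a convention choice; here it rescales the predicted image toward the measurement, and the images and the 15% off-axis error profile are synthetic.

```python
import numpy as np

def build_correction_matrix(measured_cal, predicted_cal):
    """2D correction matrix: element-wise ratio of measured to predicted
    portal dose over calibration fields spanning the detecting surface."""
    return measured_cal / predicted_cal

def apply_correction(predicted_image, correction_matrix):
    """Apply the per-pixel correction to a predicted portal dose image."""
    return predicted_image * correction_matrix

# Hypothetical calibration pair with a smooth off-axis prediction error
y, x = np.mgrid[0:256, 0:256]
off_axis = np.hypot(x - 128, y - 128) / 128.0
measured_cal = np.ones((256, 256))
predicted_cal = 1.0 + 0.15 * off_axis        # up to 15% off-axis disagreement

corr = build_correction_matrix(measured_cal, predicted_cal)
test_prediction = predicted_cal * 0.98       # another field with the same bias
corrected = apply_correction(test_prediction, corr)
print(float(np.abs(corrected - 0.98).max()))  # residual error after correction
```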

  4. BDDCS Predictions, Self-Correcting Aspects of BDDCS Assignments, BDDCS Assignment Corrections, and Classification for more than 175 Additional Drugs.

    Science.gov (United States)

    Hosey, Chelsea M; Chan, Rosa; Benet, Leslie Z

    2016-01-01

    The biopharmaceutics drug disposition classification system was developed in 2005 by Wu and Benet as a tool to predict metabolizing enzyme and drug transporter effects on drug disposition. The system was modified from the biopharmaceutics classification system and classifies drugs according to their extent of metabolism and their water solubility. By 2010, Benet et al. had classified over 900 drugs. In this paper, we incorporate more than 175 drugs into the system and amend the classification of 13 drugs. We discuss current and additional applications of BDDCS, which include predicting drug-drug and endogenous substrate interactions, pharmacogenomic effects, food effects, elimination routes, central nervous system exposure, toxicity, and environmental impacts of drugs. When predictions and classes are not aligned, the system detects an error and is able to self-correct, generally indicating a problem with initial class assignment and/or measurements determining such assignments.

  5. Perturbative corrections for approximate inference in gaussian latent variable models

    DEFF Research Database (Denmark)

    Opper, Manfred; Paquet, Ulrich; Winther, Ole

    2013-01-01

    orders, corrections of increasing polynomial complexity can be applied to the approximation. The second order provides a correction in quadratic time, which we apply to an array of Gaussian process and Ising models. The corrections generalize to arbitrarily complex approximating families, which we...... illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther....

  6. Correction

    CERN Multimedia

    2002-01-01

    Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.) The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.

  7. Inflation via logarithmic entropy-corrected holographic dark energy model

    Energy Technology Data Exchange (ETDEWEB)

    Darabi, F.; Felegary, F. [Azarbaijan Shahid Madani University, Department of Physics, Tabriz (Iran, Islamic Republic of); Setare, M.R. [University of Kurdistan, Department of Science, Bijar (Iran, Islamic Republic of)

    2016-12-15

    We study the inflation in terms of the logarithmic entropy-corrected holographic dark energy (LECHDE) model with future event horizon, particle horizon, and Hubble horizon cut-offs, and we compare the results with those obtained in the study of inflation by the holographic dark energy HDE model. In comparison, the spectrum of primordial scalar power spectrum in the LECHDE model becomes redder than the spectrum in the HDE model. Moreover, the consistency with the observational data in the LECHDE model of inflation constrains the reheating temperature and Hubble parameter by one parameter of holographic dark energy and two new parameters of logarithmic corrections. (orig.)

  8. Inflation via logarithmic entropy-corrected holographic dark energy model

    International Nuclear Information System (INIS)

    Darabi, F.; Felegary, F.; Setare, M.R.

    2016-01-01

    We study the inflation in terms of the logarithmic entropy-corrected holographic dark energy (LECHDE) model with future event horizon, particle horizon, and Hubble horizon cut-offs, and we compare the results with those obtained in the study of inflation by the holographic dark energy HDE model. In comparison, the spectrum of primordial scalar power spectrum in the LECHDE model becomes redder than the spectrum in the HDE model. Moreover, the consistency with the observational data in the LECHDE model of inflation constrains the reheating temperature and Hubble parameter by one parameter of holographic dark energy and two new parameters of logarithmic corrections. (orig.)

  9. North Atlantic climate model bias influence on multiyear predictability

    Science.gov (United States)

    Wu, Y.; Park, T.; Park, W.; Latif, M.

    2018-01-01

    The influences of North Atlantic biases on multiyear predictability of unforced surface air temperature (SAT) variability are examined in the Kiel Climate Model (KCM). By employing a freshwater flux correction over the North Atlantic to the model, which strongly alleviates both North Atlantic sea surface salinity (SSS) and sea surface temperature (SST) biases, the freshwater flux-corrected integration depicts significantly enhanced multiyear SAT predictability in the North Atlantic sector in comparison to the uncorrected one. The enhanced SAT predictability in the corrected integration is due to a stronger and more variable Atlantic Meridional Overturning Circulation (AMOC) and its enhanced influence on North Atlantic SST. Results obtained from preindustrial control integrations of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) support the findings obtained from the KCM: models with large North Atlantic biases tend to have a weak AMOC influence on SAT and exhibit a smaller SAT predictability over the North Atlantic sector.

  10. Correction

    Directory of Open Access Journals (Sweden)

    2012-01-01

    Full Text Available Regarding Gorelik, G., & Shackelford, T.K. (2011). Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006) [Burch, R. L., & Gallup, G. G., Jr. (2006). The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds.), Female infidelity and paternal uncertainty (pp. 141–172). New York: Cambridge University Press.]. Burch and Gallup (2006) reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006) should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.

  11. Correction

    CERN Multimedia

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  12. Correction

    Directory of Open Access Journals (Sweden)

    2014-01-01

    Full Text Available Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph), and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large and thus in the evolutionary theory supportive direction, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].

  13. A First-order Prediction-Correction Algorithm for Time-varying (Constrained) Optimization: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Simonetto, Andrea [Universite catholique de Louvain

    2017-07-25

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
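    To make the prediction-correction idea concrete, here is a toy sketch on a time-varying quadratic with box constraints. It is deliberately not the authors' algorithm (which builds the prediction from the Hessian); instead the prediction step extrapolates the gradient forward in time, followed by a projected-gradient correction once the new cost is revealed. The cost, step size and horizon are all hypothetical.

```python
import numpy as np

def project(x, lo, hi):
    """Projection onto simple box constraints."""
    return np.clip(x, lo, hi)

def grad(x, t):
    """Gradient of a hypothetical time-varying cost: track a moving target."""
    return x - np.array([np.cos(t), np.sin(t)])

alpha, dt, lo, hi = 0.5, 0.1, -0.8, 0.8
x = np.zeros(2)
g_prev = grad(x, 0.0)

for k in range(1, 201):
    t_prev, t_now = (k - 1) * dt, k * dt
    g_now = grad(x, t_prev)
    # Prediction: first-order extrapolation of the gradient in time, then a
    # projected gradient step taken before the new cost is sampled.
    x = project(x - alpha * (2.0 * g_now - g_prev), lo, hi)
    # Correction: once the cost at t_now is revealed, a projected gradient step.
    x = project(x - alpha * grad(x, t_now), lo, hi)
    g_prev = g_now

print("tracked point:", x)
print("constrained optimum:", np.clip([np.cos(20.0), np.sin(20.0)], lo, hi))
```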

  14. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
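    A minimal illustration of bootstrap (bagging) prediction as described above: a plug-in prediction from a simple least-squares fit is averaged over bootstrap resamples of the training data. The data-generating process is deliberately misspecified (heavy-tailed noise versus a Gaussian fit); all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=40)
y = 2.0 * x + rng.standard_t(df=3, size=40)   # heavy tails: model misspecified

def plug_in_prediction(x_train, y_train, x_new):
    """Plug-in prediction from a simple least-squares fit."""
    slope, intercept = np.polyfit(x_train, y_train, 1)
    return slope * x_new + intercept

# Bootstrap (bagging) prediction: average the plug-in prediction over
# bootstrap resamples of the training data.
x_new, n_boot = 1.5, 500
preds = []
for _ in range(n_boot):
    idx = rng.integers(0, len(x), len(x))
    preds.append(plug_in_prediction(x[idx], y[idx], x_new))

print("plug-in prediction:  ", plug_in_prediction(x, y, x_new))
print("bootstrap prediction:", np.mean(preds))
```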

  15. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...... the performance of HIRLAM, in particular with respect to wind predictions. To estimate the performance of the model, two spatial resolutions (0.5 Deg. and 0.2 Deg.) and different sets of HIRLAM variables were used to predict wind speed and energy production. The predictions of energy production for the wind farms...... are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production...

  16. Multisite bias correction of precipitation data from regional climate models

    Czech Academy of Sciences Publication Activity Database

    Hnilica, Jan; Hanel, M.; Puš, V.

    2017-01-01

    Roč. 37, č. 6 (2017), s. 2934-2946 ISSN 0899-8418 R&D Projects: GA ČR GA16-05665S Grant - others:Grantová agentura ČR - GA ČR(CZ) 16-16549S Institutional support: RVO:67985874 Keywords : bias correction * regional climate model * correlation * covariance * multivariate data * multisite correction * principal components * precipitation Subject RIV: DA - Hydrology ; Limnology OBOR OECD: Climatic research Impact factor: 3.760, year: 2016

  17. Multisite bias correction of precipitation data from regional climate models

    Czech Academy of Sciences Publication Activity Database

    Hnilica, Jan; Hanel, M.; Puš, V.

    2017-01-01

    Roč. 37, č. 6 (2017), s. 2934-2946 ISSN 0899-8418 R&D Projects: GA ČR GA16-05665S Grant - others:Grantová agentura ČR - GA ČR(CZ) 16-16549S Institutional support: RVO:67985874 Keywords : bias correction * regional climate model * correlation * covariance * multivariate data * multisite correction * principal components * precipitation Subject RIV: DA - Hydrology ; Limnology Impact factor: 3.760, year: 2016

  18. Precise predictions of H2O line shapes over a wide pressure range using simulations corrected by a single measurement

    Science.gov (United States)

    Ngo, N. H.; Nguyen, H. T.; Tran, H.

    2018-03-01

    In this work, we show that precise predictions of the shapes of H2O rovibrational lines broadened by N2, over a wide pressure range, can be made using simulations corrected by a single measurement. For that, we use the partially-correlated speed-dependent Keilson-Storer (pcsdKS) model whose parameters are deduced from molecular dynamics simulations and semi-classical calculations. This model takes into account the collision-induced velocity-changes effects, the speed dependences of the collisional line width and shift as well as the correlation between velocity and internal-state changes. For each considered transition, the model is corrected by using a parameter deduced from its broadening coefficient measured for a single pressure. The corrected-pcsdKS model is then used to simulate spectra for a wide pressure range. Direct comparisons of the corrected-pcsdKS calculated and measured spectra of 5 rovibrational lines of H2O for various pressures, from 0.1 to 1.2 atm, show very good agreements. Their maximum differences are in most cases well below 1%, much smaller than residuals obtained when fitting the measurements with the Voigt line shape. This shows that the present procedure can be used to predict H2O line shapes for various pressure conditions and thus the simulated spectra can be used to deduce the refined line-shape parameters to complete spectroscopic databases, in the absence of relevant experimental values.

  19. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Linear MPC: (1) uses a linear model, ẋ = Ax + Bu; (2) quadratic cost function, F = x^T Q x + u^T R u; (3) linear constraints, Hx + Gu < 0; (4) solved as a quadratic program. Nonlinear MPC: (1) nonlinear model, ẋ = f(x, u); (2) cost function can be nonquadratic, F(x, u); (3) nonlinear constraints, h(x, u) < 0; (4) solved as a nonlinear program.
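    A minimal receding-horizon sketch of the linear MPC setup above: a discretized double-integrator model, quadratic stage cost, and input bounds, with the finite-horizon problem solved numerically by a generic bounded optimizer rather than a dedicated QP solver. The matrices, horizon and bounds are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Discrete-time double integrator (x+ = A x + B u), quadratic weights
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])
N = 15                                      # prediction horizon

def mpc_cost(u_seq, x0):
    """Sum of x'Qx + u'Ru over the horizon for a candidate input sequence."""
    x, cost = x0.copy(), 0.0
    for u in u_seq:
        cost += x @ Q @ x + u * R[0, 0] * u
        x = A @ x + B[:, 0] * u
    return cost + x @ Q @ x                 # terminal state penalty

def mpc_step(x0, u_max=1.0):
    """Solve the finite-horizon problem and apply only the first input."""
    res = minimize(mpc_cost, np.zeros(N), args=(x0,),
                   bounds=[(-u_max, u_max)] * N)
    return res.x[0]

x = np.array([2.0, 0.0])
for _ in range(30):                         # receding-horizon simulation
    u = mpc_step(x)
    x = A @ x + B[:, 0] * u
print(x)                                    # state regulated toward the origin
```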

  20. NLO electroweak corrections in general scalar singlet models

    Science.gov (United States)

    Costa, Raul; Sampaio, Marco O. P.; Santos, Rui

    2017-07-01

    If no new physics signals are found, in the coming years, at the Large Hadron Collider Run-2, an increase in precision of the Higgs couplings measurements will shift the discussion to the effects of higher order corrections. In Beyond the Standard Model (BSM) theories this may become the only tool to probe new physics. Extensions of the Standard Model (SM) with several scalar singlets may address several of its problems, namely to explain dark matter, the matter-antimatter asymmetry, or to improve the stability of the SM up to the Planck scale. In this work we propose a general framework to calculate one loop-corrections to the propagators and to the scalar field vacuum expectation values of BSM models with an arbitrary number of scalar singlets. We then apply our method to a real and to a complex scalar singlet models. We assess the importance of the one-loop radiative corrections first by computing them for a tree level mixing sum constraint, and then for the main Higgs production process gg → H. We conclude that, for the currently allowed parameter space of these models, the corrections can be at most a few percent. Notably, a non-zero correction can survive when dark matter is present, in the SM-like limit of the Higgs couplings to other SM particles.

  1. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research from academics and practitioners has been devoted to models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend toward machine learning models (support vector machines, bagging, boosting, and random forest) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability in specific conditions. Furthermore, these models will be remodelled according to new trends by calculating the influence of eliminating selected variables on their overall prediction ability.

  2. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
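    A short sketch of how a degree-day model is typically applied: daily degree-days above a lower developmental threshold (with an upper cutoff) are accumulated until a stage-transition requirement is reached. The averaging variant, the 10/30 degree thresholds, the 400 degree-day requirement, and the temperatures below are all assumed for illustration and are not cranberry fruitworm values.

```python
def degree_days(t_min, t_max, base=10.0, upper=30.0):
    """One day's degree-days by a simple averaging method with lower and
    upper developmental thresholds (one of several standard variants)."""
    t_mean = (min(t_max, upper) + max(t_min, base)) / 2.0
    return max(0.0, t_mean - base)

# Hypothetical daily minima/maxima (deg C) and an assumed 400 DD requirement
daily = [(8, 19), (10, 22), (12, 26), (14, 28), (13, 31), (11, 24)]
total, requirement = 0.0, 400.0
for day, (tmin, tmax) in enumerate(daily, start=1):
    total += degree_days(tmin, tmax)
    if total >= requirement:
        print("predicted life-stage transition on day", day)
        break
print("accumulated degree-days:", total)
```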

  4. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  5. Prediction of Deformity Correction by Pedicle Screw Instrumentation in Thoracolumbar Scoliosis Surgery

    Science.gov (United States)

    Kiriyama, Yoshimori; Yamazaki, Nobutoshi; Nagura, Takeo; Matsumoto, Morio; Chiba, Kazuhiro; Toyama, Yoshiaki

    In segmental pedicle screw instrumentation, the relationship between the combinations of pedicle screw placements and the degree of deformity correction was investigated with a three-dimensional rigid body and spring model. The virtual thoracolumbar scoliosis (Cobb’s angle of 47 deg.) was corrected using six different combinations of pedicle-screw placements. As a result, better correction in the axial rotation was obtained with the pedicle screws placed at or close to the apical vertebra than with the screws placed close to the end vertebrae, while the correction in the frontal plane was better with the screws close to the end vertebrae than with those close to the apical vertebra. Additionally, two screws placed in the convex side above and below the apical vertebra provided better correction than two screws placed in the concave side. Effective deformity corrections of scoliosis were obtained with the proper combinations of pedicle screw placements.

  6. Predicting the Feasibility of Correcting Mechanical Axis in Large Varus Deformities With Unicompartmental Knee Arthroplasty.

    Science.gov (United States)

    Kleeblad, Laura J; van der List, Jelle P; Pearle, Andrew D; Fragomen, Austin T; Rozbruch, S Robert

    2018-02-01

    Due to disappointing historical outcomes of unicompartmental knee arthroplasty (UKA), Kozinn and Scott proposed strict selection criteria, including preoperative varus alignment of ≤15°, to improve the outcomes of UKA. No studies to date, however, have assessed the feasibility of correcting large preoperative varus deformities with UKA surgery. The study goals were therefore to (1) assess to what extent patients with large varus deformities could be corrected and (2) determine radiographic parameters to predict sufficient correction. In 200 consecutive robotic-arm assisted medial UKA patients with large preoperative varus deformities (≥7°), the mechanical axis angle (MAA) and joint line convergence angle (JLCA) were measured on hip-knee-ankle radiographs. The number of patients corrected to optimal (≤4°) and acceptable (5°-7°) alignment was assessed, along with whether the feasibility of this correction could be predicted using an estimated MAA (eMAA, preoperative MAA minus JLCA) in regression analyses. Mean preoperative MAA was 10° of varus (range, 7°-18°), JLCA was 5° (1°-12°), postoperative MAA was 4° of varus (-3° to 8°), and correction was 6° (1°-14°). Postoperative optimal alignment was achieved in 62% and acceptable alignment in 36%. The eMAA was a significant predictor of optimal postoperative alignment when corrected for age and gender. Patients with large preoperative varus deformities (7°-18°) could be considered candidates for medial UKA, as 98% were corrected to optimal or acceptable alignment, although a cautious approach is needed in deformities >15°. Furthermore, it was noted that the feasibility of achieving optimal alignment could be predicted using the preoperative MAA, JLCA, and age. Published by Elsevier Inc.

  7. Corrected Four-Sphere Head Model for EEG Signals

    Directory of Open Access Journals (Sweden)

    Solveig Næss

    2017-10-01

    Full Text Available The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals as well as for inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model, which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.

  8. Simulation Model for Correction and Modeling of Probe Head Errors in Five-Axis Coordinate Systems

    Directory of Open Access Journals (Sweden)

    Adam Gąska

    2016-05-01

    Full Text Available Simulative methods are nowadays frequently used in metrology for the simulation of measurement uncertainty and the prediction of errors that may occur during measurements. In coordinate metrology, such methods are primarily used with the typical three-axis Coordinate Measuring Machines (CMMs), and lately, also with mobile measuring systems. However, no similar simulative models have been developed for five-axis systems in spite of their growing popularity in recent years. This paper presents a numerical model of probe head errors for probe heads that are used in five-axis coordinate systems. The model is based on measurements of material standards (a standard ring) and the use of the Monte Carlo method combined with select interpolation methods. The developed model may be used in conjunction with one of the known models of CMM kinematic errors to form a virtual model of a five-axis coordinate system. In addition, the developed methodology allows for the correction of identified probe head errors, thus improving measurement accuracy. Subsequent verification tests prove the correct functioning of the presented model.

  9. Predictions models with neural nets

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2008-01-01

    Full Text Available The contribution addresses the prediction of basic trends in economic indicators using neural networks. The problems include the choice of a suitable model and, consequently, the configuration of the neural nets, the choice of the neurons' computational functions, and the prediction learning procedure. The contribution contains two basic models that use multilayer neural net structures and a way of determining their configuration. A simple rule for the training period of the neural net is postulated in order to obtain the most credible prediction. Experiments are carried out with real data on the evolution of the Kč/Euro exchange rate. The main reason for choosing this time series is its availability over a sufficiently long period. In the experiments, both basic kinds of prediction models with the most frequently used neuron functions are verified. The prediction results obtained are presented in both numerical and graphical form.
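    The contribution does not specify the network architecture, so the sketch below assumes a small multilayer perceptron trained on lagged values of a synthetic exchange-rate-like series for one-step-ahead prediction; the series, lag order and layer sizes are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# Hypothetical daily CZK/EUR-like rate: slow oscillation plus noise
rate = 25.0 + 0.5 * np.sin(np.linspace(0, 12, 400)) + rng.normal(0, 0.05, 400)

lags = 5                                           # model order: last 5 values
X = np.column_stack([rate[i:len(rate) - lags + i] for i in range(lags)])
y = rate[lags:]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X[:-50], y[:-50])                        # train on all but last 50 days
pred = model.predict(X[-50:])                      # one-step-ahead predictions
print("RMSE on held-out days:", float(np.sqrt(np.mean((pred - y[-50:]) ** 2))))
```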

  10. A Combination of Terrain Prediction and Correction for Search and Rescue Robot Autonomous Navigation

    Directory of Open Access Journals (Sweden)

    Yan Guo

    2009-09-01

    Full Text Available This paper presents a novel two-step autonomous navigation method for a search and rescue robot. A vision-based algorithm is proposed for terrain identification to predict the safest path, using a support vector regression machine (SVRM) trained off-line with texture and color features. A correction algorithm for the prediction, based on vibration information, is applied while the robot travels, using the judgment function given in the paper. Regions with faulty predictions are corrected with the real traversability value and used to update the SVRM. The experiment demonstrates that this method can help the robot find the optimal path and protect it from traps caused by the discrepancy between the prediction and the real environment.

  11. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing for linearity is of particular interest, as parameters of non-linear components vanish under the null. To solve the latter type of testing, we use the so-called sup tests, which here require development of new (uniform) weak convergence results. These results are potentially useful in general for analysis...... Tests of symmetric non-linear error correction are also considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory, with good size and power properties for reasonable sample sizes.

  12. Imputation of Housing Rents for Owners Using Models With Heckman Correction

    Directory of Open Access Journals (Sweden)

    Beat Hulliger

    2012-07-01

    Full Text Available The direct income of owners and tenants of dwellings is not comparable since the owners have a hidden income from the investment in their dwelling. This hidden income is considered a part of the disposable income of owners. It may be predicted with the help of a linear model of the rent. Since such a model must be developed and estimated for tenants with observed market rents a selection bias may occur. The selection bias can be minimised through a Heckman correction. The paper applies the Heckman correction to data from the Swiss Statistics on Income and Living Conditions. The Heckman method is adapted to the survey context, the modeling process including the choice of covariates is explained and the effect of the prediction using the model is discussed.
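    A simplified two-step Heckman correction can illustrate the imputation idea: a probit selection equation for being a tenant, an inverse Mills ratio added to the rent regression estimated on tenants, and prediction of a hidden rent for owners. The data, covariates and coefficients below are synthetic, and keeping the Mills-ratio term when predicting for owners is a simplification relative to the survey-adapted procedure in the paper.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 2000
rooms = rng.integers(1, 7, n).astype(float)
area = rng.normal(80, 25, n)
urban = rng.integers(0, 2, n).astype(float)

# Synthetic data: tenancy (selection) and market rent share a common error,
# so rents are only observed for a non-random subset of dwellings.
u = rng.normal(size=n)
is_tenant = (0.8 - 0.1 * rooms + 0.5 * urban + u > 0).astype(float)
rent = 200 + 120 * rooms + 3.0 * area + 30.0 * u + rng.normal(0, 50, n)

# Step 1: probit selection equation for being a tenant (rent observed)
Z = sm.add_constant(np.column_stack([rooms, urban]))
probit = sm.Probit(is_tenant, Z).fit(disp=0)
xb = Z @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)          # inverse Mills ratio

# Step 2: rent equation on tenants only, augmented with the Mills ratio
sel = is_tenant == 1
X = sm.add_constant(np.column_stack([rooms, area, mills]))
ols = sm.OLS(rent[sel], X[sel]).fit()

# Impute a hidden (hypothetical) rent for owners from the corrected model
print(ols.predict(X[~sel])[:5])
```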

  13. A Model for Assessing the Liability of Seemingly Correct Software

    Science.gov (United States)

    Voas, Jeffrey M.; Voas, Larry K.; Miller, Keith W.

    1991-01-01

    Current research on software reliability does not lend itself to quantitatively assessing the risk posed by a piece of life-critical software. Black-box software reliability models are too general and make too many assumptions to be applied confidently to assessing the risk of life-critical software. We present a model for assessing the risk caused by a piece of software; this model combines software testing results and Hamlet's probable correctness model. We show how this model can assess software risk for those who insure against a loss that can occur if life-critical software fails.

  14. EMPIRICAL MODEL FOR HYDROCYCLONES CORRECTED CUT SIZE CALCULATION

    Directory of Open Access Journals (Sweden)

    André Carlos Silva

    2012-12-01

    Full Text Available Hydrocyclones are devices used worldwide in mineral processing for desliming, classification, selective classification, thickening and pre-concentration. A hydrocyclone is composed of one cylindrical and one conical section joined together, without any moving parts, and it is capable of performing granular material separation in pulp. The mineral particle separation mechanism acting in a hydrocyclone is complex and its mathematical modelling is usually empirical. The most used model for the hydrocyclone corrected cut size was proposed by Plitt. Over the years many revisions and corrections to Plitt's model were proposed. The present paper presents a modification of the constant in Plitt's model, obtained by exponential regression of simulated data for three different hydrocyclone geometries: Rietema, Bradley and Krebs. To validate the proposed model, literature data obtained from phosphate ore using fifteen different hydrocyclone geometries are used. The proposed model shows a correlation equal to 88.2% between experimental and calculated corrected cut size, while the correlation obtained using Plitt's model is 11.5%.

  15. Threshold corrections and gauge symmetry in twisted superstring models

    International Nuclear Information System (INIS)

    Pierce, D.M.

    1994-01-01

    Threshold corrections to the running of gauge couplings are calculated for superstring models with free complex world sheet fermions. For two N=1 SU(2)xU(1) 5 models, the threshold corrections lead to a small increase in the unification scale. Examples are given to illustrate how a given particle spectrum can be described by models with different boundary conditions on the internal fermions. We also discuss how complex twisted fermions can enhance the symmetry group of an N=4, SU(3)xU(1)xU(1) model to the gauge group SU(3)xSU(2)xU(1). It is then shown how a mixing angle analogous to the Weinberg angle depends on the boundary conditions of the internal fermions

  16. Tracer kinetic modelling of receptor data with mathematical metabolite correction

    International Nuclear Information System (INIS)

    Burger, C.; Buck, A.

    1996-01-01

    Quantitation of metabolic processes with dynamic positron emission tomography (PET) and tracer kinetic modelling relies on the time course of authentic ligand in plasma, i.e. the input curve. The determination of the latter often requires the measurement of labelled metabolites, a laborious procedure. In this study we examined the possibility of mathematical metabolite correction, which might obviate the need for actual metabolite measurements. Mathematical metabolite correction was implemented by estimating the input curve together with kinetic tissue parameters. The general feasibility of the approach was evaluated in a Monte Carlo simulation using a two tissue compartment model. The method was then applied to a series of five human carbon-11 iomazenil PET studies. The measured cerebral tissue time-activity curves were fitted with a single tissue compartment model. For mathematical metabolite correction the input curve following the peak was approximated by a sum of three decaying exponentials, the amplitudes and characteristic half-times of which were then estimated by the fitting routine. In the simulation study the parameters used to generate synthetic tissue time-activity curves (K1-k4) were refitted with reasonable identifiability when using mathematical metabolite correction. Absolute quantitation of distribution volumes was found to be possible provided that the metabolite and the kinetic models are adequate. If the kinetic model is oversimplified, the linearity of the correlation between true and estimated distribution volumes is still maintained, although the linear regression becomes dependent on the input curve. These simulation results were confirmed when applying mathematical metabolite correction to the 11C iomazenil study. Estimates of the distribution volume calculated with a measured input curve were linearly related to the estimates calculated using mathematical metabolite correction with correlation coefficients >0.990. (orig./MG)
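    The joint-estimation idea can be sketched for a single tissue compartment: the post-peak input curve is parameterised as a sum of three decaying exponentials and fitted together with K1 and k2 against a tissue time-activity curve. Everything below is synthetic; in particular, anchoring the first amplitude is an assumption made here only to keep the toy fit well-posed and is not part of the published method.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 60.0, 121)        # minutes, 0.5-min frames
A1 = 40.0                              # assumed anchor for the input-curve scale

def input_curve(t, a2, a3, h1, h2, h3):
    """Post-peak input function: sum of three decaying exponentials with
    amplitudes (A1 fixed, a2, a3) and characteristic half-times h1-h3."""
    lam = np.log(2.0) / np.array([h1, h2, h3])
    return (A1 * np.exp(-lam[0] * t) + a2 * np.exp(-lam[1] * t)
            + a3 * np.exp(-lam[2] * t))

def tissue_model(t, K1, k2, a2, a3, h1, h2, h3):
    """Single tissue compartment: tissue curve = K1 * (input conv exp(-k2 t))."""
    cp = input_curve(t, a2, a3, h1, h2, h3)
    dt = t[1] - t[0]
    return K1 * np.convolve(cp, np.exp(-k2 * t))[: t.size] * dt

# Hypothetical "measured" tissue curve generated from known parameters
true = (0.3, 0.1, 20.0, 5.0, 1.0, 8.0, 60.0)
measured = tissue_model(t, *true) + np.random.default_rng(4).normal(0.0, 0.2, t.size)

# Mathematical metabolite correction: the input-curve parameters are
# estimated together with the kinetic parameters, no metabolite assay needed.
p0 = (0.2, 0.05, 15.0, 3.0, 0.5, 5.0, 40.0)
popt, _ = curve_fit(tissue_model, t, measured, p0=p0, maxfev=20000)
print("estimated K1, k2, distribution volume:", popt[0], popt[1], popt[0] / popt[1])
```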

  17. Minimal and non-minimal standard models: Universality of radiative corrections

    International Nuclear Information System (INIS)

    Passarino, G.

    1991-01-01

    The possibility of describing electroweak processes by means of models with a non-minimal Higgs sector is analyzed. The renormalization procedure which leads to a set of fitting equations for the bare parameters of the lagrangian is first reviewed for the minimal standard model. A solution of the fitting equations is obtained, which correctly includes large higher-order corrections. Predictions for physical observables, notably the W boson mass and the Z0 partial widths, are discussed in detail. Finally the extension to non-minimal models is described under the assumption that new physics will appear only inside the vector boson self-energies and the concept of universality of radiative corrections is introduced, showing that to a large extent they are insensitive to the details of the enlarged Higgs sector. Consequences for the bounds on the top quark mass are also discussed. (orig.)

  18. Topological quantum error correction in the Kitaev honeycomb model

    Science.gov (United States)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  19. Oblique corrections in the Dine-Fischler-Srednicki axion model

    Directory of Open Access Journals (Sweden)

    Katanaeva Alisa

    2016-01-01

    We discuss the Dine-Fischler-Srednicki (DFS) model, which extends the two-Higgs doublet model with an additional Peccei-Quinn symmetry and leads to a physically acceptable axion. The non-linear parametrization of the DFS model is exploited in the generic case where all scalars except the lightest Higgs and the axion have masses at or beyond the TeV scale. We compute the oblique corrections and use their values from the electroweak experimental fits to put constraints on the mass spectrum of the DFS model.

  20. Parton distribution functions with QED corrections in the valon model

    Science.gov (United States)

    Mottaghizadeh, Marzieh; Taghavi Shahri, Fatemeh; Eslami, Parvin

    2017-10-01

    The parton distribution functions (PDFs) with QED corrections are obtained by solving the QCD ⊗ QED DGLAP evolution equations in the framework of the "valon" model at the next-to-leading-order QCD and the leading-order QED approximations. Our results for the PDFs with QED corrections in this phenomenological model are in good agreement with the newly released CT14QED global fits code [Phys. Rev. D 93, 114015 (2016), 10.1103/PhysRevD.93.114015] and the APFEL (NNPDF2.3QED) program [Comput. Phys. Commun. 185, 1647 (2014), 10.1016/j.cpc.2014.03.007] over a wide range of x = [10^-5, 1] and Q^2 = [0.283, 10^8] GeV^2. The model calculations agree rather well with those codes. In the latter, we proposed a new method for studying the symmetry breaking of the sea quark distribution functions inside the proton.

  1. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    2013-01-01

    We analyze estimators and tests for a general class of vector error correction models that allows for asymmetric and nonlinear error correction. For a given number of cointegration relationships, general hypothesis testing is considered, where testing for linearity is of particular interest. Under the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires development of new (uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full...... versions that are simple to compute. A simulation study shows that the finite-sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes.

  2. What do saliency models predict?

    Science.gov (United States)

    Koehler, Kathryn; Guo, Fei; Zhang, Sheng; Eckstein, Miguel P.

    2014-01-01

    Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). Use of a single image set to systematically compare free viewing to other tasks has never been performed. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might also be limited by inherent variation of what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. PMID:24618107

  3. Using modeling to develop and evaluate a corrective action system

    International Nuclear Information System (INIS)

    Rodgers, L.

    1995-01-01

    At a former trucking facility in EPA Region 4, a corrective action system was installed to remediate groundwater and soil contaminated with gasoline and fuel oil products released from several underground storage tanks (USTs). Groundwater modeling was used to develop the corrective action plan and later used with soil vapor modeling to evaluate the system's effectiveness. Groundwater modeling was used to determine the effects of a groundwater recovery system on the water table at the site. Information gathered during the assessment phase was used to develop a three-dimensional depiction of the subsurface at the site. Different groundwater recovery schemes were then modeled to determine the most effective method for recovering contaminated groundwater. Based on the modeling and calculations, a corrective action system combining soil vapor extraction (SVE) and groundwater recovery was designed. The system included seven recovery wells, to extract both soil vapor and groundwater, and a groundwater treatment system. Operation and maintenance of the system included monthly system sampling and inspections and quarterly groundwater sampling. After one year of operation, the effectiveness of the system was evaluated. A subsurface soil gas model was used to evaluate the effects of the SVE system on the site contamination as well as its effects on the water table and groundwater recovery operations. Groundwater modeling was used in evaluating the effectiveness of the groundwater recovery system. Plume migration and capture were modeled to ensure that the groundwater recovery system at the site was effectively capturing the contaminant plume. The two models were then combined to determine the effects of the two systems, acting together, on the remediation process.

  4. Color Fringe Correction by the Color Difference Prediction Using the Logistic Function.

    Science.gov (United States)

    Jang, Dong-Won; Park, Rae-Hong

    2017-05-01

    This paper proposes a new color fringe correction method that preserves the object color well by the color difference prediction using the logistic function. We observe two characteristics between normal edge (NE) and degraded edge (DE) due to color fringe: 1) the DE has relatively smaller R-G and B-G correlations than the NE and 2) the color difference in the NE can be fitted by the logistic function. The proposed method adjusts the color difference of the DE to the logistic function by maximizing the R-G and B-G correlations in the corrected color fringe image. The generalized logistic function with four parameters requires a high computational load to select the optimal parameters. In experiments, a one-parameter optimization can correct color fringe gracefully with a reduced computational load. Experimental results show that the proposed method restores well the original object color in the DE, whereas existing methods give monochromatic or distorted color.
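
    As a rough illustration of the idea of fitting a logistic function to a color difference across an edge, the following sketch fits a four-parameter logistic to a synthetic R-G profile with scipy; the function form, parameter names and data are assumptions, not the authors' implementation (which optimizes correlations on real images).

```python
# Illustrative sketch (not the authors' code): fit a generalized logistic
# to a color-difference profile across an edge, as a stand-in for the
# color-difference prediction step described above.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, lower, upper, k, x0):
    """Generalized 4-parameter logistic function."""
    return lower + (upper - lower) / (1.0 + np.exp(-k * (x - x0)))

x = np.arange(-10, 11, dtype=float)                    # pixel index across the edge
true = logistic(x, -5.0, 20.0, 0.9, 1.0)               # ideal (fringe-free) R-G difference
observed = true + np.random.default_rng(1).normal(scale=1.0, size=x.size)

params, _ = curve_fit(logistic, x, observed,
                      p0=[observed.min(), observed.max(), 1.0, 0.0])
predicted = logistic(x, *params)                       # predicted color difference
print("fitted parameters:", np.round(params, 3))
```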

  5. Three-Phase Text Error Correction Model for Korean SMS Messages

    Science.gov (United States)

    Byun, Jeunghyun; Park, So-Young; Lee, Seung-Wook; Rim, Hae-Chang

    In this paper, we propose a three-phase text error correction model consisting of a word spacing error correction phase, a syllable-based spelling error correction phase, and a word-based spelling error correction phase. In order to reduce the text error correction complexity, the proposed model corrects text errors step by step. With the aim of correcting word spacing errors, spelling errors, and mixed errors in SMS messages, the proposed model tries to separately manage the word spacing error correction phase and the spelling error correction phase. For the purpose of utilizing both the syllable-based approach covering various errors and the word-based approach correcting some specific errors accurately, the proposed model subdivides the spelling error correction phase into the syllable-based phase and the word-based phase. Experimental results show that the proposed model can improve the performance by solving the text error correction problem based on the divide-and-conquer strategy.

  6. Utilization of multiple read heads for TMR prediction and correction in bit-patterned media recording

    Directory of Open Access Journals (Sweden)

    W. Busyatras

    2017-05-01

    Full Text Available This paper proposes a utilization of multiple read heads to predict and correct track mis-registration (TMR) in bit-patterned media recording (BPMR) based on the readback signals. We propose to use the signal energy ratio between the upper and lower tracks from multiple read heads to estimate the TMR level. Then, a pair of two-dimensional (2D) target and its corresponding 2D equalizer associated with the estimated TMR will be chosen to correct the TMR in the data detection process. Numerical results show that the proposed system can achieve a very high accuracy of TMR prediction, thus performing better than the conventional system, especially when TMR is severe.

  7. Region of validity of the Thomas–Fermi model with quantum, exchange and shell corrections

    International Nuclear Information System (INIS)

    Dyachkov, S A; Levashov, P R; Minakov, D V

    2016-01-01

    A novel approach to calculate thermodynamically consistent shell corrections over a wide range of parameters is used to predict the region of validity of the Thomas-Fermi approach. Calculated thermodynamic functions of electrons at high density are consistent with the more precise density functional theory. This makes it possible to work out a semi-classical model applicable at both low and high density. (paper)

  8. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties ... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study ...

  9. Perturbative corrections for approximate inference in gaussian latent variable models

    DEFF Research Database (Denmark)

    Opper, Manfred; Paquet, Ulrich; Winther, Ole

    2013-01-01

    Expectation Propagation (EP) provides a framework for approximate inference. When the model under consideration is over a latent Gaussian field, with the approximation being Gaussian, we show how these approximations can systematically be corrected. A perturbative expansion is made of the exact but intractable correction, which we illustrate on tree-structured Ising model approximations. Furthermore, the corrections provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther.

  10. The Imaginary Starobinsky Model and Higher Curvature Corrections

    CERN Document Server

    Ferrara, Sergio; Riotto, Antonio

    2015-01-01

    We elaborate on the predictions of the imaginary Starobinsky model of inflation coupled to matter, where the inflaton is identified with the imaginary part of the inflaton multiplet suggested by the Supergravity embedding of a pure R + R^2 gravity. In particular, we study the impact of higher-order curvature terms and show that, depending on the parameter range, one may find either a quadratic model of chaotic inflation or monomial models of chaotic inflation with fractional powers between 1 and 2.

  11. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    2007-01-01

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogeneous or stratified soil, with focus on horizontal sliding and rocking. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines. The quality of the lumped-parameter models is assessed with respect to the prediction of the maximum response during excitation and the geometrical damping related to free vibrations of a footing.

  12. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we conclude by suggesting directions for further studies.

  13. Discharge simulations performed with a hydrological model using bias corrected regional climate model input

    Directory of Open Access Journals (Sweden)

    S. C. van Pelt

    2009-12-01

    Full Text Available Studies have demonstrated that precipitation on Northern Hemisphere mid-latitudes has increased in the last decades and that it is likely that this trend will continue. This will have an influence on discharge of the river Meuse. The use of bias correction methods is important when the effect of precipitation change on river discharge is studied. The objective of this paper is to investigate the effect of using two different bias correction methods on output from a Regional Climate Model (RCM) simulation. In this study a Regional Atmospheric Climate Model (RACMO2) run is used, forced by ECHAM5/MPIOM under the condition of the SRES-A1B emission scenario, with a 25 km horizontal resolution. The RACMO2 runs contain a systematic precipitation bias on which two bias correction methods are applied. The first method corrects for the wet-day fraction and wet-day average (WD bias correction), and the second method corrects for the mean and coefficient of variation (MV bias correction). The WD bias correction initially corrects well for the average, but it appears that too many successive precipitation days were removed with this correction. The second method performed less well on average bias correction, but the temporal precipitation pattern was better. Subsequently, the discharge was calculated by using RACMO2 output as forcing to the HBV-96 hydrological model. A large difference was found between the simulated discharge of the uncorrected RACMO2 run, the WD bias corrected run and the MV bias corrected run. These results show the importance of an appropriate bias correction.
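
    The two bias-correction ideas mentioned above can be sketched as follows on synthetic daily precipitation; this is a hedged illustration only (the threshold choice, distributions and the linear MV rescaling are assumptions, not the paper's exact procedures).

```python
# Sketch of two precipitation bias-correction ideas on synthetic daily data.
# WD: match the observed wet-day fraction and wet-day mean.
# MV: linear rescaling that matches the observed mean and standard deviation
#     (and hence the coefficient of variation, up to clipping at zero).
import numpy as np

rng = np.random.default_rng(2)
obs = rng.gamma(shape=0.7, scale=4.0, size=3650)   # "observed" daily precip (mm)
rcm = rng.gamma(shape=0.9, scale=5.0, size=3650)   # biased model precip (mm)

def wd_correction(model, observed, wet_threshold=0.1):
    """Choose a model threshold so the wet-day fraction matches observations,
    then scale wet-day amounts so the wet-day mean matches as well."""
    obs_wet_frac = np.mean(observed > wet_threshold)
    thr = np.quantile(model, 1.0 - obs_wet_frac)
    corrected = np.where(model > thr, model, 0.0)
    scale = observed[observed > wet_threshold].mean() / corrected[corrected > 0].mean()
    return corrected * scale

def mv_correction(model, observed):
    """Linear transform matching observed mean and standard deviation."""
    corrected = observed.mean() + (model - model.mean()) * (observed.std() / model.std())
    return np.clip(corrected, 0.0, None)

print("obs mean:", round(obs.mean(), 2),
      "WD-corrected mean:", round(wd_correction(rcm, obs).mean(), 2),
      "MV-corrected mean:", round(mv_correction(rcm, obs).mean(), 2))
```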

  14. Emergence of spacetime dynamics in entropy corrected and braneworld models

    International Nuclear Information System (INIS)

    Sheykhi, A.; Dehghani, M.H.; Hosseini, S.E.

    2013-01-01

    A very interesting new proposal on the origin of the cosmic expansion was recently suggested by Padmanabhan [arXiv:1206.4916]. He argued that the difference between the surface degrees of freedom and the bulk degrees of freedom in a region of space drives the accelerated expansion of the universe, as well as the standard Friedmann equation, through the relation ΔV = Δt(N_sur − N_bulk). In this paper, we first present the general expression for the number of degrees of freedom on the holographic surface, N_sur, using the general entropy-corrected formula S = A/(4L_p^2) + s(A). Then, as two examples, by applying Padmanabhan's idea we extract the corresponding Friedmann equations in the presence of power-law and logarithmic correction terms in the entropy. We also extend the study to RS II and DGP braneworld models and successfully derive the correct form of the Friedmann equations in these theories. Our study further supports the viability of Padmanabhan's proposal.
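
    For readability, the two key relations quoted in the abstract can be restated in display form (notation as above):

```latex
\Delta V \;=\; \Delta t \,\bigl( N_{\mathrm{sur}} - N_{\mathrm{bulk}} \bigr),
\qquad
S \;=\; \frac{A}{4 L_p^{2}} \;+\; s(A).
```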

  15. HESS Opinions "Should we apply bias correction to global and regional climate model data?"

    Directory of Open Access Journals (Sweden)

    J. Liebert

    2012-09-01

    Full Text Available Despite considerable progress in recent years, output of both global and regional circulation models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem, bias correction (BC; i.e. the correction of model output towards observations in a post-processing step) has now become a standard procedure in climate change impact studies. In this paper we argue that BC is currently often used in an invalid way: it is added to the GCM/RCM model chain without sufficient proof that the consistency of the latter (i.e. the agreement between model dynamics/model output and our judgement) as well as the generality of its applicability increases. BC methods often impair the advantages of circulation models by altering spatiotemporal field consistency, relations among variables and by violating conservation principles. Currently used BC methods largely neglect feedback mechanisms, and it is unclear whether they are time-invariant under climate change conditions. Applying BC increases agreement of climate model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this hides rather than reduces uncertainty, which may lead to avoidable forejudging of end users and decision makers. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction and propose ways to cope with biased output of circulation models in the short term and how to reduce the bias in the long term. The most promising strategy for improved future global and regional circulation model simulations is the increase in model resolution to the convection-permitting scale in combination with ...

  16. Genome Editing of Structural Variations: Modeling and Gene Correction.

    Science.gov (United States)

    Park, Chul-Yong; Sung, Jin Jea; Kim, Dong-Wook

    2016-07-01

    The analysis of chromosomal structural variations (SVs), such as inversions and translocations, was made possible by the completion of the human genome project and the development of genome-wide sequencing technologies. SVs contribute to genetic diversity and evolution, although some SVs can cause diseases such as hemophilia A in humans. Genome engineering technology using programmable nucleases (e.g., ZFNs, TALENs, and CRISPR/Cas9) has been rapidly developed, enabling precise and efficient genome editing for SV research. Here, we review advances in modeling and gene correction of SVs, focusing on inversion, translocation, and nucleotide repeat expansion. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. PENDEKATAN ERROR CORRECTION MODEL SEBAGAI PENENTU HARGA SAHAM

    Directory of Open Access Journals (Sweden)

    David Kaluge

    2017-03-01

    Full Text Available This research aimed to find the effect of profitability, the interest rate, GDP, and the foreign exchange rate on stock prices. The approach used was an error correction model. Profitability was indicated by the variables EPS and ROI, while the SBI (1-month) rate was used to represent the interest rate. This research found that all variables simultaneously affected the stock prices significantly. Partially, EPS, PER, and the foreign exchange rate significantly affected the prices both in the short run and the long run. Interestingly, SBI and GDP did not affect the prices at all. The variable ROI had only a long-run impact on the prices.
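
    For readers unfamiliar with the approach, the sketch below shows a generic two-step (Engle-Granger style) error correction model on simulated data: a long-run regression in levels, followed by a short-run regression in differences that includes the lagged error-correction term. Variable names and data are hypothetical and do not reproduce the study's series.

```python
# Hedged sketch of a two-step error correction model (ECM) on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
exchange_rate = np.cumsum(rng.normal(size=n))                     # an I(1) driver
stock_price = 2.0 + 0.5 * exchange_rate + rng.normal(scale=0.5, size=n)  # cointegrated with it

# Step 1: long-run (cointegrating) regression in levels
long_run = sm.OLS(stock_price, sm.add_constant(exchange_rate)).fit()
ect = long_run.resid                                               # error-correction term

# Step 2: short-run dynamics in differences, with the lagged ECT
dy = np.diff(stock_price)
dx = np.diff(exchange_rate)
X = sm.add_constant(np.column_stack([dx, ect[:-1]]))
short_run = sm.OLS(dy, X).fit()
print(short_run.params)   # last coefficient: speed of adjustment (expected negative)
```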

  18. Tectonic predictions with mantle convection models

    Science.gov (United States)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough

  19. Aero-acoustic noise of wind turbines. Noise prediction models

    Energy Technology Data Exchange (ETDEWEB)

    Maribo Pedersen, B. [ed.]

    1997-12-31

    Semi-empirical and CAA (Computational AeroAcoustics) noise prediction techniques are the subject of this expert meeting. The meeting presents and discusses models and methods. The meeting may provide answers to the following questions: What noise sources are the most important? How are the sources best modeled? What needs to be done to do better predictions? Does it boil down to correct prediction of the unsteady aerodynamics around the rotor? Or is the difficult part to convert the aerodynamics into acoustics? (LN)

  20. The imaginary Starobinsky model and higher curvature corrections

    International Nuclear Information System (INIS)

    Ferrara, Sergio; Kehagias, Alex; Riotto, Antonio

    2015-01-01

    We elaborate on the predictions of the imaginary Starobinsky model of inflation coupled to matter, where the inflaton is identified with the imaginary part of the inflaton multiplet suggested by the Supergravity embedding of a pure R + R 2 gravity. In particular, we study the impact of higher-order curvature terms and show that, depending on the parameter range, one may find either a quadratic model of chaotic inflation or monomial models of chaotic inflation with fractional powers between 1 and 2. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  1. Predictive models reduce talent development costs in female gymnastics.

    Science.gov (United States)

    Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle

    2017-04-01

    This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures, and at the same time reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years past talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron even classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based upon the different statistical analyses results in a 33.3% decrease in cost because the pool of selected athletes can be reduced to 92 instead of 138 gymnasts (as selected by the coaches). Reduction of the costs allows the limited resources to be fully invested in the high-potential athletes.
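
    A minimal sketch of the kind of comparison described above (a linear discriminant model versus a multilayer perceptron, scored by cross-validated classification accuracy) is given below; the features and labels are simulated stand-ins for the talent-test battery and dropout outcome.

```python
# Illustrative comparison of a linear and a non-linear classifier on
# hypothetical talent-identification data (not the study's dataset).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 243 "gymnasts", a handful of test-battery scores, binary dropout label
X, y = make_classification(n_samples=243, n_features=8, n_informative=5, random_state=0)

lda = LinearDiscriminantAnalysis()
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))

for name, model in [("discriminant analysis", lda), ("multilayer perceptron", mlp)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.1%} correctly classified (5-fold CV)")
```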

  2. One-loop radiative correction to the triple Higgs coupling in the Higgs singlet model

    Directory of Open Access Journals (Sweden)

    Shi-Ping He

    2017-01-01

    Full Text Available Though the 125 GeV Higgs boson is consistent with the standard model (SM) prediction until now, the triple Higgs coupling can deviate from the SM value in physics beyond the SM (BSM). In this paper, the radiative correction to the triple Higgs coupling is calculated in the minimal extension of the SM by adding a real gauge singlet scalar. In this model there are two scalars, h and H, and both of them are mixing states of the doublet and singlet. Provided that the mixing angle is set to be zero, namely the SM limit, h is the pure left-over of the doublet and its behavior is the same as that of the SM at the tree level. However, the loop corrections can alter h-related couplings. In this SM limit case, the effect of the singlet H may show up in the h-related couplings, especially the triple h coupling. Our numerical results show that the deviation is sizable. For λ_ΦS = 1 (see text for the parameter definition), the deviation δ_hhh^(1) can be 40%. For λ_ΦS = 1.5, δ_hhh^(1) can reach 140%. The sizable radiative correction is mainly caused by three reasons: the magnitude of the coupling λ_ΦS, the light mass of the additional scalar, and the threshold enhancement. The radiative corrections for the hVV and hff couplings come from the counter-terms, which are the universal correction in this model and always at O(1%). The hZZ coupling, which can be precisely measured, may be a complementarity to the triple h coupling to search for the BSM. In the optimal case, the triple h coupling is very sensitive to the BSM physics, and this model can be tested at future high luminosity hadron colliders and electron–positron colliders.

  3. Local sharpening and subspace wavefront correction with predictive dynamic digital holography

    Science.gov (United States)

    Sulaiman, Sennan; Gibson, Steve

    2017-09-01

    Digital holography holds several advantages over conventional imaging and wavefront sensing, chief among these being significantly fewer and simpler optical components and the retrieval of the complex field. Consequently, many imaging and sensing applications including microscopy and optical tweezing have turned to using digital holography. A significant obstacle for digital holography in real-time applications, such as wavefront sensing for high energy laser systems and high speed imaging for target tracking, is the fact that digital holography is computationally intensive; it requires iterative virtual wavefront propagation and hill-climbing to optimize some sharpness criteria. It has been shown recently that minimum-variance wavefront prediction can be integrated with digital holography and image sharpening to significantly reduce the large number of costly sharpening iterations required to achieve near-optimal wavefront correction. This paper demonstrates further gains in computational efficiency with localized sharpening in conjunction with predictive dynamic digital holography for real-time applications. The method optimizes the sharpness of local regions in a detector plane by parallel independent wavefront correction on reduced-dimension subspaces of the complex field in a spectral plane.

  4. Characterization, prediction, and correction of geometric distortion in 3 T MR images

    International Nuclear Information System (INIS)

    Baldwin, Lesley N.; Wachowicz, Keith; Thomas, Steven D.; Rivest, Ryan; Gino Fallone, B.

    2007-01-01

    The work presented herein describes our methods and results for predicting, measuring and correcting geometric distortions in a 3 T clinical magnetic resonance (MR) scanner for the purpose of image guidance in radiation treatment planning. Geometric inaccuracies due to both inhomogeneities in the background field and nonlinearities in the applied gradients were easily visualized on the MR images of a regularly structured three-dimensional (3D) grid phantom. From a computed tomography scan, the locations of just under 10 000 control points within the phantom were accurately determined in three dimensions using a MATLAB-based computer program. MR distortion was then determined by measuring the corresponding locations of the control points when the phantom was imaged using the MR scanner. Using a reversed gradient method, distortions due to gradient nonlinearities were separated from distortions due to inhomogeneities in the background B0 field. Because the various sources of machine-related distortions can be individually characterized, distortions present in other imaging sequences (for which 3D distortion cannot accurately be measured using phantom methods) can be predicted, negating the need for individual distortion calculation for a variety of other imaging sequences. Distortions were found to be primarily caused by gradient nonlinearities, and maximum image distortions were reported to be less than those previously found by other researchers at 1.5 T. Finally, the image slices were corrected for distortion in order to provide geometrically accurate phantom images.

  5. The Innsbruck/ESO sky models and telluric correction tools*

    Directory of Open Access Journals (Sweden)

    Kimeswenger S.

    2015-01-01

    While the ground based astronomical observatories just have to correct for the line-of-sight integral of these effects, the Čerenkov telescopes use the atmosphere as the primary detector. The measured radiation originates at lower altitudes and does not pass through the entire atmosphere. Thus, a decent knowledge of the profile of the atmosphere at any time is required. The latter cannot be achieved by photometric measurements of stellar sources. We show here the capabilities of our sky background model and data reduction tools for ground-based optical/infrared telescopes. Furthermore, we discuss the feasibility of monitoring the atmosphere above any observing site, and thus, the possible application of the method for Čerenkov telescopes.

  6. Modeling Approach/Strategy for Corrective Action Unit 97, Yucca Flat and Climax Mine, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Janet Willie

    2003-08-01

    The objectives of the UGTA corrective action strategy are to predict the location of the contaminant boundary for each CAU, develop and implement a corrective action, and close each CAU. The process for achieving this strategy includes modeling to define the maximum extent of contaminant transport within a specified time frame. Modeling is a method of forecasting how the hydrogeologic system, including the underground test cavities, will behave over time with the goal of assessing the migration of radionuclides away from the cavities and chimneys. Use of flow and transport models to achieve the objectives of the corrective action strategy is specified in the FFACO. In the Yucca Flat/Climax Mine system, radionuclide migration will be governed by releases from the cavities and chimneys, and transport in alluvial aquifers, fractured and partially fractured volcanic rock aquifers and aquitards, the carbonate aquifers, and in intrusive units. Additional complexity is associated with multiple faults in Yucca Flat and the need to consider reactive transport mechanisms that both reduce and enhance the mobility of radionuclides. A summary of the data and information that form the technical basis for the model is provided in this document.

  7. Physical correction model for automatic correction of intensity non-uniformity in magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Stefan Leger

    2017-10-01

    Conclusion: The proposed PCM algorithm led to a significantly improved image quality compared to the originally acquired images, suggesting that it is applicable to the correction of MRI data. Thus it may help to reduce intensity non-uniformity which is an important step for advanced image analysis.

  8. Structure Corrections in Modeling VLBI Delays for RDV Data

    Science.gov (United States)

    Sovers, Ojars J.; Charlot, Patrick; Fey, Alan L.; Gordon, David

    2002-01-01

    Since 1997, bimonthly S- and X-band observing sessions have been carried out employing the VLBA (Very Long Baseline Array) and as many as ten additional antennas. Maps of the extended structures have been generated for the 160 sources observed in ten of these experiments (approximately 200,000 observations) taking place during 1997 and 1998. This paper reports the results of the first massive application of such structure maps to correct the modeled VLBI (Very Long Baseline Interferometry) delay in astrometric data analysis. For high-accuracy celestial reference frame work, proper choice of a reference point within each extended source is crucial. Here the reference point is taken at the point of maximum emitted flux. Overall, the weighted delay residuals (approximately equal to 30 ps) are reduced by 8 ps in quadrature upon introducing source maps to model the structure delays of the sources. Residuals of some sources with extended or fast-varying structures improve by as much as 40 ps. Scatter of 'arc positions' about a time-linear model decreases substantially for most sources. Based on our results, it is also concluded that source structure is presently not the dominant error source in astrometric/geodetic VLBI.

  9. Real time prediction and correction of ADCS problems in LEO satellites using fuzzy logic

    Directory of Open Access Journals (Sweden)

    Yassin Mounir Yassin

    2017-06-01

    Full Text Available This approach is concerned with adapting the operations of the attitude determination and control subsystem (ADCS) of low Earth orbit (LEO) satellites by analyzing the telemetry readings received by the mission control center, and then responding to ADCS off-nominal situations. This can be achieved by sending corrective operational tele-commands in real time. Our approach relates the fuzzy membership of off-nominal telemetry readings to corrective actions through a set of fuzzy rules based on understanding the ADCS modes resulting from the satellite telemetry readings. Responding in real time gives us a chance to avoid risky situations. The approach is tested on the EgyptSat-1 engineering model, which is used to simulate the results.

  10. Impacts of Earth rotation parameters on GNSS ultra-rapid orbit prediction: Derivation and real-time correction

    Science.gov (United States)

    Wang, Qianxin; Hu, Chao; Xu, Tianhe; Chang, Guobin; Hernández Moraleda, Alberto

    2017-12-01

    Analysis centers (ACs) for global navigation satellite systems (GNSSs) cannot accurately obtain real-time Earth rotation parameters (ERPs). Thus, the prediction of ultra-rapid orbits in the international terrestrial reference system (ITRS) has to utilize the predicted ERPs issued by the International Earth Rotation and Reference Systems Service (IERS) or the International GNSS Service (IGS). In this study, the accuracy of ERPs predicted by IERS and IGS is analyzed. The error of the ERPs predicted for one day can reach 0.15 mas and 0.053 ms in the polar motion and UT1-UTC directions, respectively. Then, the impact of ERP errors on ultra-rapid orbit prediction by GNSS is studied. The methods for orbit integration and frame transformation in orbit prediction with introduced ERP errors dominate the accuracy of the predicted orbit. Experimental results show that the transformation from the geocentric celestial reference system (GCRS) to ITRS exerts the strongest effect on the accuracy of the predicted ultra-rapid orbit. To obtain the most accurate predicted ultra-rapid orbit, a corresponding real-time orbit correction method is developed. First, orbits without ERP-related errors are predicted on the basis of the ITRS-observed part of the ultra-rapid orbit for use as reference. Then, the corresponding predicted orbit is transformed from GCRS to ITRS to adjust for the predicted ERPs. Finally, the corrected ERPs with error slopes are re-introduced to correct the predicted orbit in ITRS. To validate the proposed method, three experimental schemes are designed: function extrapolation, simulation experiments, and experiments with predicted ultra-rapid orbits and international GNSS Monitoring and Assessment System (iGMAS) products. Experimental results show that using the proposed correction method with IERS products considerably improved the accuracy of ultra-rapid orbit prediction (except the geosynchronous BeiDou orbits). The accuracy of orbit prediction is enhanced by at least 50%.

  11. CERN Large Hadron Collider optics model, measurements, and corrections

    Directory of Open Access Journals (Sweden)

    R. Tomás

    2010-12-01

    Full Text Available Optics stability during all phases of operation is crucial for the LHC. Tools and procedures have been developed for rapid checks of beta beating, dispersion, and linear coupling, as well as for prompt optics corrections. Important optics errors during the different phases of the beam commissioning were observed and locally corrected using the segment-by-segment technique. The most relevant corrections at injection have been corroborated with dedicated magnetic measurements.

  12. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance over full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
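
    The comparison described above can be sketched on simulated data as follows: a logistic model with treatment plus all covariates versus a model with treatment plus an estimated propensity score, both scored by the c-statistic (AUC). This is an illustration under assumed data-generating choices, not the study's analysis.

```python
# Hedged sketch: full covariate adjustment vs. propensity-score adjustment,
# compared on predictive discrimination (c-statistic / AUC). Simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 2000
X = rng.normal(size=(n, 6))                                    # covariates
p_treat = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))         # treatment depends on covariates
treat = rng.binomial(1, p_treat)
logit_y = 0.8 * treat + X @ np.array([0.5, -0.4, 0.3, 0.0, 0.2, -0.1])
y = rng.binomial(1, 1 / (1 + np.exp(-logit_y)))

X_tr, X_te, t_tr, t_te, y_tr, y_te = train_test_split(X, treat, y, random_state=0)

# Full model: treatment + all covariates
full = LogisticRegression(max_iter=1000).fit(np.column_stack([t_tr, X_tr]), y_tr)
auc_full = roc_auc_score(y_te, full.predict_proba(np.column_stack([t_te, X_te]))[:, 1])

# Propensity-score model: treatment + estimated propensity score only
ps_model = LogisticRegression(max_iter=1000).fit(X_tr, t_tr)
ps_tr = ps_model.predict_proba(X_tr)[:, 1]
ps_te = ps_model.predict_proba(X_te)[:, 1]
ps_adj = LogisticRegression(max_iter=1000).fit(np.column_stack([t_tr, ps_tr]), y_tr)
auc_ps = roc_auc_score(y_te, ps_adj.predict_proba(np.column_stack([t_te, ps_te]))[:, 1])

print(f"c-statistic, full covariate model: {auc_full:.3f}")
print(f"c-statistic, propensity-adjusted model: {auc_ps:.3f}")
```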

  13. Use of Paired Simple and Complex Models to Reduce Predictive Bias and Quantify Uncertainty

    DEFF Research Database (Denmark)

    Doherty, John; Christensen, Steen

    2011-01-01

    The paper provides insight into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology ...

  14. Loop Corrections in Very Special Relativity Standard Model

    Science.gov (United States)

    Alfaro, Jorge

    2018-01-01

    In this talk we study one-loop corrections in the VSRSM. In particular, we use the new Sim(2)-invariant dimensional regularization to compute one-loop corrections to the effective action in the subsector of the VSRSM that describes the interaction of photons with charged leptons. New stringent bounds for the masses of ν_e and ν_μ are obtained.

  15. Coaching, Not Correcting: An Alternative Model for Minority Students

    Science.gov (United States)

    Dresser, Rocío; Asato, Jolynn

    2014-01-01

    The debate on the role of oral corrective feedback or "repair" in English instruction settings has been going on for over 30 years. Some educators believe that oral grammar correction is effective because they have noticed that students who learned a set of grammar rules were more likely to use them in real life communication (Krashen,…

  16. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  17. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed.

  18. Correction to scaling in the response function of the two-dimensional kinetic Ising model

    Science.gov (United States)

    Corberi, Federico; Lippiello, Eugenio; Zannetti, Marco

    2005-11-01

    The aging part R_ag(t,s) of the impulsive response function of the two-dimensional ferromagnetic Ising model, quenched below the critical point, is studied numerically employing an algorithm without the imposition of the external field. We find that the simple scaling form R_ag(t,s) = s^{-(1+a)} f(t/s), which is usually believed to hold in the aging regime, is not obeyed. We analyze the data assuming the existence of a correction to scaling. We find a = 0.273 ± 0.006, in agreement with previous numerical results obtained from the zero-field-cooled magnetization. We investigate in detail also the scaling function f(t/s) and we compare the results with the predictions of analytical theories. We make an ansatz for the correction to scaling, deriving an analytical expression for R_ag(t,s). This gives a satisfactory qualitative agreement with the numerical data for R_ag(t,s) and for the integrated response functions. With the analytical model we explore the overall behavior, extrapolating beyond the time regime accessible with the simulations. We explain why the data for the zero-field-cooled susceptibility are not too sensitive to the existence of the correction to scaling in R_ag(t,s), making this quantity the most convenient for the study of the asymptotic scaling properties.
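
    In display form, the simple aging-scaling form quoted above, together with a generic leading correction-to-scaling ansatz (the paper's specific ansatz may differ), reads:

```latex
R_{\mathrm{ag}}(t,s) \;=\; s^{-(1+a)}\, f(t/s),
\qquad
R_{\mathrm{ag}}(t,s) \;=\; s^{-(1+a)} \Bigl[\, f(t/s) \;+\; s^{-\omega}\, g(t/s) \Bigr],
\quad \omega > 0 .
```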

  19. A sun-crown-sensor model and adapted C-correction logic for topographic correction of high resolution forest imagery

    Science.gov (United States)

    Fan, Yuanchao; Koukal, Tatjana; Weisberg, Peter J.

    2014-10-01

    Canopy shadowing mediated by topography is an important source of radiometric distortion on remote sensing images of rugged terrain. Topographic correction based on the sun-canopy-sensor (SCS) model significantly improved over those based on the sun-terrain-sensor (STS) model for surfaces with high forest canopy cover, because the SCS model considers and preserves the geotropic nature of trees. The SCS model accounts for sub-pixel canopy shadowing effects and normalizes the sunlit canopy area within a pixel. However, it does not account for mutual shadowing between neighboring pixels. Pixel-to-pixel shadowing is especially apparent for fine resolution satellite images in which individual tree crowns are resolved. This paper proposes a new topographic correction model: the sun-crown-sensor (SCnS) model, based on high-resolution satellite imagery (IKONOS) and a high-precision LiDAR digital elevation model. An improvement on the C-correction logic with a radiance partitioning method to address the effects of diffuse irradiance is also introduced (SCnS + C). In addition, we incorporate a weighting variable, based on pixel shadow fraction, on the direct and diffuse radiance portions to enhance the retrieval of at-sensor radiance and reflectance of highly shadowed tree pixels and form another variety of SCnS model (SCnS + W). Model evaluation with IKONOS test data showed that the new SCnS model outperformed the STS and SCS models in quantifying the correlation between terrain-regulated illumination factor and at-sensor radiance. Our adapted C-correction logic based on the sun-crown-sensor geometry and radiance partitioning better represented the general additive effects of diffuse radiation than C parameters derived from the STS or SCS models. The weighting factor Wt also significantly enhanced correction results by reducing within-class standard deviation and balancing the mean pixel radiance between sunlit and shaded slopes. We analyzed these improvements with model ...
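
    As background to the adapted C-correction logic discussed above, the sketch below implements the classical C-correction (not the paper's SCnS variant): the C parameter comes from an empirical regression of radiance on the cosine of the local solar incidence angle, and shaded pixels are normalized toward flat-terrain values. Data are synthetic.

```python
# Minimal classical C-correction sketch for topographic normalization.
import numpy as np

def c_correction(radiance, cos_i, solar_zenith):
    """radiance: observed band values; cos_i: cosine of the local solar
    incidence angle per pixel; solar_zenith: scalar solar zenith angle (rad)."""
    b, a = np.polyfit(cos_i, radiance, 1)            # slope b, intercept a
    c = a / b                                        # the "C" parameter
    cos_sz = np.cos(solar_zenith)
    return radiance * (cos_sz + c) / (cos_i + c)     # normalized radiance

# Toy example: shaded pixels (small cos_i) are brightened toward flat-terrain values
rng = np.random.default_rng(5)
cos_i = rng.uniform(0.1, 1.0, size=1000)
radiance = 20.0 + 80.0 * cos_i + rng.normal(scale=3.0, size=1000)
corrected = c_correction(radiance, cos_i, solar_zenith=np.deg2rad(35.0))
print("correlation with cos_i before:", np.corrcoef(cos_i, radiance)[0, 1].round(2),
      "after:", np.corrcoef(cos_i, corrected)[0, 1].round(2))
```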

  20. Evaluation of wave runup predictions from numerical and parametric models

    Science.gov (United States)

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.

  1. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging procedure.

  2. Seasonal predictions of equatorial Atlantic SST in a low-resolution CGCM with surface heat flux correction

    Science.gov (United States)

    Dippe, Tina; Greatbatch, Richard; Ding, Hui

    2016-04-01

    The dominant mode of interannual variability in tropical Atlantic sea surface temperatures (SSTs) is the Atlantic Niño or Zonal Mode. Akin to the El Niño-Southern Oscillation in the Pacific sector, it is able to impact the climate of both the adjacent equatorial African continent and remote regions. Due to heavy biases in the mean state climate of the equatorial-to-subtropical Atlantic, however, most state-of-the-art coupled global climate models (CGCMs) are unable to realistically simulate equatorial Atlantic variability. In this study, the Kiel Climate Model (KCM) is used to investigate the impact of a simple bias alleviation technique on the predictability of equatorial Atlantic SSTs. Two sets of seasonal forecasting experiments are performed: an experiment using the standard KCM (STD), and an experiment with additional surface heat flux correction (FLX) that efficiently removes the SST bias from simulations. Initial conditions for both experiments are generated by the KCM run in partially coupled mode, a simple assimilation technique that forces the KCM with observed wind stress anomalies and preserves SST as a fully prognostic variable. Seasonal predictions for both sets of experiments are run four times yearly for 1981-2012. Results: Heat flux correction substantially improves the simulated variability in the initialization runs for boreal summer and fall (June-October). In boreal spring (March-May), however, neither the initialization runs of the STD nor the FLX experiments are able to capture the observed variability. FLX-predictions show no consistent enhancement of skill relative to the predictions of the STD experiment over the course of the year. The skill of persistence forecasts is hardly beaten by either of the two experiments in any season, limiting the usefulness of the few forecasts that show significant skill. However, FLX-forecasts initialized in May recover skill in July and August, the peak season of the Atlantic Niño (anomaly correlation ...).

  3. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
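
    A scalar sketch of the single-step prediction-error idea is given below: a Kalman filter produces one-step predictions, and the resulting innovations and their variances enter a Gaussian likelihood criterion. The model structure, parameter values and data are illustrative assumptions, not the paper's setup.

```python
# One-step prediction errors via a scalar Kalman filter, and the Gaussian
# negative log-likelihood (an ML prediction-error criterion) built from them.
import numpy as np

def one_step_prediction_errors(y, a, c, q, r):
    """Kalman one-step prediction errors for x[k+1]=a x[k]+w, y[k]=c x[k]+v."""
    x_pred, p_pred = 0.0, 1.0
    errors, variances = [], []
    for yk in y:
        e = yk - c * x_pred                 # innovation (prediction error)
        s = c * p_pred * c + r              # innovation variance
        gain = p_pred * c / s               # Kalman gain
        x_filt = x_pred + gain * e
        p_filt = (1 - gain * c) * p_pred
        x_pred = a * x_filt                 # time update -> next one-step prediction
        p_pred = a * p_filt * a + q
        errors.append(e)
        variances.append(s)
    return np.array(errors), np.array(variances)

def neg_log_lik(y, a, c, q, r):
    e, s = one_step_prediction_errors(y, a, c, q, r)
    return 0.5 * np.sum(np.log(2 * np.pi * s) + e**2 / s)

rng = np.random.default_rng(6)
x, y = 0.0, []
for _ in range(200):
    x = 0.9 * x + rng.normal(scale=0.5)
    y.append(x + rng.normal(scale=0.3))
print("NLL at true parameters:", neg_log_lik(np.array(y), 0.9, 1.0, 0.25, 0.09))
```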

  4. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Giebel, G.; Landberg, L. [Risoe National Lab., Roskilde (Denmark)]; Madsen, H.; Nielsen, H.A. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)]

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better possibility to schedule fossil-fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data is available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: extended Kalman filtering, recursive least squares, and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
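
    One simple way to realize the adaptive MOS correction described above is recursive least squares with a forgetting factor, regressing observed wind on the NWP prediction plus a bias term. The sketch below is a generic illustration with synthetic numbers, not the authors' estimator.

```python
# Recursive least squares (with forgetting) as an adaptive MOS correction of
# NWP wind-speed forecasts against local observations. Data are synthetic.
import numpy as np

class RecursiveLeastSquares:
    def __init__(self, n_params, forgetting=0.995):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * 1e3
        self.lam = forgetting

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        err = y - x @ self.theta
        k = self.P @ x / (self.lam + x @ self.P @ x)
        self.theta += k * err
        self.P = (self.P - np.outer(k, x @ self.P)) / self.lam
        return err

rng = np.random.default_rng(7)
rls = RecursiveLeastSquares(n_params=2)
for t in range(1000):
    nwp_wind = rng.uniform(2, 15)                           # NWP-predicted wind speed
    observed = 0.8 * nwp_wind + 1.2 + rng.normal(0, 0.5)    # locally observed wind
    rls.update([nwp_wind, 1.0], observed)                   # regress obs on prediction + bias
print("estimated MOS coefficients (gain, bias):", np.round(rls.theta, 2))
```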

  5. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    Science.gov (United States)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques are becoming very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
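
    As an illustration of the family of models compared above, the sketch below trains a small feed-forward neural network to predict log-permeability from porosity, grain density and two Thomeer-like parameters; the synthetic data, feature definitions and network size are assumptions, not the Arab-D dataset or the authors' configurations.

```python
# Feed-forward neural network regression of log-permeability on porosity,
# grain density and Thomeer-like parameters. Synthetic data for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
n = 400
porosity = rng.uniform(0.05, 0.30, n)
grain_density = rng.normal(2.71, 0.03, n)
pd_entry = rng.lognormal(2.0, 0.8, n)                 # Thomeer-like entry pressure
g_factor = rng.uniform(0.1, 1.0, n)                   # Thomeer-like pore-geometry factor
log_k = 2 + 8 * porosity - 0.8 * np.log(pd_entry) - 0.5 * g_factor + rng.normal(0, 0.2, n)

X = np.column_stack([porosity, grain_density, pd_entry, g_factor])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_k, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
mae = np.abs(model.predict(X_te) - y_te).mean()
print(f"mean absolute error on log-permeability: {mae:.3f}")
```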

  6. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  7. Predictive Model Assessment for Count Data

    National Research Council Canada - National Science Library

    Czado, Claudia; Gneiting, Tilmann; Held, Leonhard

    2007-01-01

    ... In case studies, we critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. Key words: Calibration...

  8. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    ... that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and is hence assumed deterministic and able to predict the future perfectly. A more realistic approach would be to allow for randomness in the model, due to e.g. the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs) for modeling and forecasting. It is argued that this gives models and predictions which better reflect reality. The SDE approach also offers a more adequate framework for modeling and a number of efficient tools for model building. A software package (CTSM-R) for SDE-based modeling is briefly described.
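
    The contrast drawn above between deterministic ODE predictions and SDE-based ones can be illustrated with a minimal Euler-Maruyama simulation of an Ornstein-Uhlenbeck process, where the diffusion term stands in for model/input uncertainty; parameters are arbitrary and this is not the CTSM-R workflow.

```python
# Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma dW. The ensemble
# spread quantifies prediction uncertainty; a deterministic ODE (sigma = 0)
# would instead claim to predict the future perfectly.
import numpy as np

def euler_maruyama(x0, theta, mu, sigma, dt, n_steps, rng):
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
    return x

rng = np.random.default_rng(9)
paths = np.array([euler_maruyama(0.0, 0.5, 2.0, 0.3, 0.01, 1000, rng) for _ in range(50)])
print("mean and std of X at t=10:",
      paths[:, -1].mean().round(2), paths[:, -1].std().round(2))
```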

  9. Predictive models for arteriovenous fistula maturation.

    Science.gov (United States)

    Al Shakarchi, Julien; McGrogan, Damian; Van der Veer, Sabine; Sperrin, Matthew; Inston, Nicholas

    2016-05-07

    Haemodialysis (HD) is a lifeline therapy for patients with end-stage renal disease (ESRD). A critical factor in the survival of renal dialysis patients is the surgical creation of vascular access, and international guidelines recommend arteriovenous fistulas (AVF) as the gold standard of vascular access for haemodialysis. Despite this, AVFs have been associated with high failure rates. Although risk factors for AVF failure have been identified, their utility for predicting AVF failure through predictive models remains unclear. The objectives of this review are to systematically and critically assess the methodology and reporting of studies developing prognostic predictive models for AVF outcomes and assess them for suitability in clinical practice. Electronic databases were searched for studies reporting prognostic predictive models for AVF outcomes. Dual review was conducted to identify studies that reported on the development or validation of a model constructed to predict AVF outcome following creation. Data were extracted on study characteristics, risk predictors, statistical methodology, model type, as well as validation process. We included four different studies reporting five different predictive models. Parameters identified that were common to all scoring system were age and cardiovascular disease. This review has found a small number of predictive models in vascular access. The disparity between each study limits the development of a unified predictive model.

  10. Model Predictive Control Fundamentals | Orukpe | Nigerian Journal ...

    African Journals Online (AJOL)

    Model Predictive Control (MPC) has developed considerably over the last two decades, both within the research control community and in industries. MPC strategy involves the optimization of a performance index with respect to some future control sequence, using predictions of the output signal based on a process model, ...

  11. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optim...

  12. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  13. Hybrid approaches to physiologic modeling and prediction

    Science.gov (United States)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
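
    A minimal sketch of the general hybrid idea described above, assuming the data-driven part is an autoregressive model fitted to the residuals of a first-principles prediction; the function names, AR order, and data are illustrative, not the authors' implementation.

        import numpy as np

        def fit_ar_residual(residuals, order=3):
            """Least-squares AR(order) fit to the residuals of the first-principles model."""
            X = np.column_stack([residuals[i:len(residuals) - order + i] for i in range(order)])
            y = residuals[order:]
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coef

        def hybrid_predict(physics_pred, recent_residuals, coef):
            """First-principles prediction plus a data-driven correction of its residual."""
            return physics_pred + recent_residuals[-len(coef):] @ coef

        rng = np.random.default_rng(5)
        resid = rng.normal(size=200)              # residual history (synthetic placeholder)
        coef = fit_ar_residual(resid, order=3)
        print(hybrid_predict(37.1, resid, coef))  # e.g. corrected core-temperature forecast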

  14. Determining Model Correctness for Situations of Belief Fusion

    Science.gov (United States)

    2013-07-01

    cinema together it would seem strange to say that one specific movie is more true than another. However, in this case the term truth can be interpreted in...which means that no state is correct. An example is when two persons try to agree on seeing a movie at the cinema. If their preferences include some

  15. Modeling Dynamics of Wikipedia: An Empirical Analysis Using a Vector Error Correction Model

    Directory of Open Access Journals (Sweden)

    Liu Feng-Jun

    2017-01-01

    Full Text Available In this paper, we constructed a system dynamic model of Wikipedia based on the co-evolution theory, and investigated the interrelationships among topic popularity, group size, collaborative conflict, coordination mechanism, and information quality by using the vector error correction model (VECM. This study provides a useful framework for analyzing the dynamics of Wikipedia and presents a formal exposition of the VECM methodology in the information system research.
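
    As an illustrative sketch only, a VECM of the kind described can be estimated with statsmodels; the column names and the random-walk placeholder data below are hypothetical stand-ins for the Wikipedia measures.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

        # Random-walk placeholders for the five weekly Wikipedia measures (hypothetical).
        rng = np.random.default_rng(0)
        data = pd.DataFrame(rng.normal(size=(200, 5)).cumsum(axis=0),
                            columns=["popularity", "group_size", "conflict",
                                     "coordination", "quality"])

        rank = select_coint_rank(data, det_order=0, k_ar_diff=2).rank  # Johansen trace test
        model = VECM(data, k_ar_diff=2, coint_rank=min(max(rank, 1), 4), deterministic="ci")
        print(model.fit().summary())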

  16. Prediction of ultrasonic probe characteristics through modeling and simulation

    International Nuclear Information System (INIS)

    Amry Amin Abas; Mohamad Pauzi Ismail; Suhairy Sani

    2004-01-01

    One of the main components in an ultrasonic probe is the piezoelectric material. It converts electrical energy supplied to it into mechanical energy (i.e. sound waves) and vice versa. In industrial applications, the characteristics of ultrasonic probes are important as they affect the results obtained. The probes fabricated must possess characteristics suitable for the intended application. Through modeling and simulation, we can predict the characteristics of the probes. A Mason equivalent circuit is used to model and simulate the probes. In this model, the probes are treated and simplified as a one-dimensional electrical line. From simulation, the electrical properties such as impedance, operating frequency bandwidth and others can be predicted. From this model, the correct material to be used for actual probe construction can be obtained. The limitation of this method is that details such as the bond line between layers are not taken into consideration. (Author)

  17. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  18. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  19. Quantitative Structure-activity Relationship (QSAR) Models for Docking Score Correction.

    Science.gov (United States)

    Fukunishi, Yoshifumi; Yamasaki, Satoshi; Yasumatsu, Isao; Takeuchi, Koh; Kurosawa, Takashi; Nakamura, Haruki

    2017-01-01

    In order to improve docking score correction, we developed several structure-based quantitative structure-activity relationship (QSAR) models by protein-drug docking simulations and applied these models to public affinity data. The prediction models used descriptor-based regression, and the compound descriptor was a set of docking scores against multiple (∼600) proteins including nontargets. The binding free energy that corresponded to the docking score was approximated by a weighted average of docking scores for multiple proteins, and we tried linear, weighted linear and polynomial regression models considering the compound similarities. In addition, we tried a combination of these regression models for individual data sets such as IC50, Ki, and %inhibition values. The cross-validation results showed that the weighted linear model was more accurate than the simple linear regression model. Thus, the QSAR approaches based on the affinity data of public databases should improve docking scores. © 2016 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
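
    A hedged sketch of the descriptor-based regression idea: measured affinities are regressed on a wide matrix of docking scores against many proteins. Ridge regression is used here as a simple stand-in for the weighted linear models in the study, and all data are synthetic.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in: rows are compounds, columns are docking scores vs. ~600 proteins.
        rng = np.random.default_rng(0)
        docking_scores = rng.normal(size=(150, 600))
        affinity = docking_scores[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=150)

        model = Ridge(alpha=10.0)   # regularization for the wide descriptor matrix
        r2 = cross_val_score(model, docking_scores, affinity, cv=5, scoring="r2")
        print("cross-validated R^2:", r2.mean())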

  20. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    Science.gov (United States)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin
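
    MBCn generalizes quantile mapping to multiple variables; as a hedged illustration with hypothetical inputs, the sketch below shows only the underlying univariate empirical quantile mapping step, not the MBCn algorithm itself.

        import numpy as np

        def quantile_map(model_hist, obs_hist, model_future, n_quantiles=100):
            """Map model values onto the observed distribution via empirical quantiles."""
            q = np.linspace(0.0, 1.0, n_quantiles)
            model_q = np.quantile(model_hist, q)
            obs_q = np.quantile(obs_hist, q)
            # Each future model value is pushed through the model CDF onto observed quantiles.
            return np.interp(model_future, model_q, obs_q)

        rng = np.random.default_rng(1)
        corrected = quantile_map(rng.normal(2, 1, 500), rng.normal(0, 1, 500), rng.normal(2, 1, 50))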

  1. Predictive factors for perioperative blood transfusion in surgeries for correction of idiopathic, neuromuscular or congenital scoliosis

    Directory of Open Access Journals (Sweden)

    Alexandre Fogaça Cristante

    2014-12-01

    Full Text Available OBJECTIVE: To evaluate the association of clinical and demographic variables in patients requiring blood transfusion during elective surgery to treat scoliosis with the aim of identifying markers predictive of the need for blood transfusion. METHODS: Based on the review of medical charts at a public university hospital, this retrospective study evaluated whether the following variables were associated with the need for red blood cell transfusion (measured by the number of packs used during scoliosis surgery: scoliotic angle, extent of arthrodesis (number of fused levels, sex of the patient, surgery duration and type of scoliosis (neuromuscular, congenital or idiopathic. RESULTS: Of the 94 patients evaluated in a 55-month period, none required a massive blood transfusion (most patients needed less than two red blood cell packs. The number of packs was not significantly associated with sex or type of scoliosis. The extent of arthrodesis (r = 0.103, surgery duration (r = 0.144 and scoliotic angle (r = 0.004 were weakly correlated with the need for blood transfusion. Linear regression analysis showed an association between the number of spine levels submitted to arthrodesis and the volume of blood used in transfusions (p = 0.001. CONCLUSION: This study did not reveal any evidence of a significant association between the need for red blood cell transfusion and scoliotic angle, sex or surgery duration in scoliosis correction surgery. Submission of more spinal levels to arthrodesis was associated with the use of a greater number of blood packs.

  2. A Global Model for Bankruptcy Prediction.

    Science.gov (United States)

    Alaminos, David; Del Castillo, Agustín; Fernández, Manuel Ángel

    2016-01-01

    The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy.
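
    A minimal sketch of the kind of logistic-regression bankruptcy classifier described, using synthetic financial-ratio features; the feature meanings and the evaluation split are illustrative assumptions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        # Synthetic financial ratios (e.g. liquidity, leverage, profitability); 1 = bankrupt.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(1000, 6))
        y = (X[:, 0] - X[:, 1] + rng.normal(size=1000) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))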

  3. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.

  4. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  5. A 3D correction method for predicting the readings of a PinPoint chamber on the CyberKnife® M6™ machine

    Science.gov (United States)

    Zhang, Yongqian; Brandner, Edward; Ozhasoglu, Cihat; Lalonde, Ron; Heron, Dwight E.; Saiful Huq, M.

    2018-02-01

    The use of small fields in radiation therapy techniques has increased substantially in particular in stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT). However, as field size reduces further still, the response of the detector changes more rapidly with field size, and the effects of measurement uncertainties become increasingly significant due to the lack of lateral charged particle equilibrium, spectral changes as a function of field size, detector choice, and subsequent perturbations of the charged particle fluence. This work presents a novel 3D dose volume-to-point correction method to predict the readings of a 0.015 cc PinPoint chamber (PTW 31014) for both small static-fields and composite-field dosimetry formed by fixed cones on the CyberKnife® M6™ machine. A 3D correction matrix is introduced to link the 3D dose distribution to the response of the PinPoint chamber in water. The parameters of the correction matrix are determined by modeling its 3D dose response in circular fields created using the 12 fixed cones (5 mm-60 mm) on a CyberKnife® M6™ machine. A penalized least-square optimization problem is defined by fitting the calculated detector reading to the experimental measurement data to generate the optimal correction matrix; the simulated annealing algorithm is used to solve the inverse optimization problem. All the experimental measurements are acquired for every 2 mm chamber shift in the horizontal planes for each field size. The 3D dose distributions for the measurements are calculated using the Monte Carlo calculation with the MultiPlan® treatment planning system (Accuray Inc., Sunnyvale, CA, USA). The performance evaluation of the 3D conversion matrix is carried out by comparing the predictions of the output factors (OFs), off-axis ratios (OARs) and percentage depth dose (PDD) data to the experimental measurement data. The discrepancy of the measurement and the prediction data for composite fields is also

  6. Relative sensitivity analysis of the predictive properties of sloppy models.

    Science.gov (United States)

    Myasnikova, Ekaterina; Spirov, Alexander

    2018-01-25

    Common among the model parameters characterizing complex biological systems are those that do not significantly influence the quality of the fit to experimental data, so-called "sloppy" parameters. The sloppiness can be expressed mathematically through saturating response functions (Hill, sigmoid), thereby embodying the biological mechanisms responsible for the system's robustness to external perturbations. However, if a sloppy model is used to predict the system behavior at an altered input (e.g. knock-out mutations, natural expression variability), it may demonstrate poor predictive power due to ambiguity in the parameter estimates. We introduce Relative Sensitivity Analysis, a method for evaluating predictive power under parameter estimation uncertainty. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced by a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions and to select, among the existing solutions, those providing the correct system behavior at any reasonable input. In general, the method allows one to uncover the sources of incorrect predictions and suggests how to overcome the estimation uncertainties.

  7. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  8. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  9. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  10. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  11. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  13. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  14. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  15. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  16. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Wind Tunnel Corrections for High Angle of Attack Models,

    Science.gov (United States)

    1981-02-01

    presented in 1972 by VAYSSAIRE. Good agreement was obtained between the corrected results for a transport aircraft model with a sweep of...and for two incidences of 1.0 and 32 degrees, the latter in the presence of a pronounced flow separation; the curves presented are derived from the signatures...a check on the validity of the commonly used linearized corrections due to Glauert. This was motivated also by the results of earlier calculations

  20. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  1. Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction

    Science.gov (United States)

    Su, X.

    2017-12-01

    A satellite cloud image contains much weather information, such as precipitation information. Short-term cloud movement forecasting is important for precipitation forecasting and is the primary means of typhoon monitoring. Traditional methods mostly use cloud feature matching and linear extrapolation to predict cloud movement, which means that nonstationary processes such as inversion and deformation during cloud movement are essentially not considered. It remains a hard task to predict cloud movement promptly and correctly. As deep learning models perform well in learning spatiotemporal features, we can regard cloud image prediction as a spatiotemporal sequence forecasting problem and introduce a deep learning model to solve it. In this research, we use a variant of the Gated Recurrent Unit (GRU) that has convolutional structures to deal with spatiotemporal features, and build an end-to-end model to solve this forecasting problem. In this model, both the input and the output are spatiotemporal sequences. Compared to the Convolutional LSTM (ConvLSTM) model, this model has fewer parameters. We apply this model to GOES satellite data and the model performs well.

  2. Megavoltage photon beam attenuation by carbon fiber couch tops and its prediction using correction factors

    International Nuclear Information System (INIS)

    Hayashi, Naoki; Shibamoto, Yuta; Obata, Yasunori; Kimura, Takashi; Nakazawa, Hisato; Hagiwara, Masahiro; Hashizume, Chisa I.; Mori, Yoshimasa; Kobayashi, Tatsuya

    2010-01-01

    The purpose of this study was to evaluate the effect of megavoltage photon beam attenuation (PBA) by couch tops and to propose a method for correction of PBA. Four series of phantom measurements were carried out. First, PBA by the exact couch top (ECT, Varian) and Imaging Couch Top (ICT, BrainLAB) was evaluated using a water-equivalent phantom. Second, PBA by Type-S system (Med-Tec), ECT and ICT was compared with a spherical phantom. Third, percentage depth dose (PDD) after passing through ICT was measured to compare with control data of PDD. Fourth, the gantry angle dependency of PBA by ICT was evaluated. Then, an equation for PBA correction was elaborated and correction factors for PBA at isocenter were obtained. Finally, this method was applied to a patient with hepatoma. PBA of perpendicular beams by ICT was 4.7% on average. With the increase in field size, the measured values became higher. PBA by ICT was greater than that by Type-S system and ECT. PBA increased significantly as the angle of incidence increased, ranging from 4.3% at 180 deg to 11.2% at 120 deg. Calculated doses obtained by the equation and correction factors agreed quite well with the measured doses between 120 deg and 180 deg of angles of incidence. Also in the patient, PBA by ICT was corrected quite well by the equation and correction factors. In conclusion, PBA and its gantry angle dependency by ICT were observed. This simple method using the equation and correction factors appeared useful to correct the isocenter dose when the PBA effect cannot be corrected by a treatment planning system. (author)
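
    As a hedged illustration of applying gantry-angle-dependent correction factors, the sketch below interpolates a couch-top attenuation factor between a few measured angles and scales the planned isocenter dose; the attenuation values are loosely based on the range quoted in the abstract and are not the paper's correction equation.

        import numpy as np

        # Illustrative fractional attenuation at a few gantry angles (couch top in the beam path).
        angles_deg = np.array([120.0, 140.0, 160.0, 180.0])
        attenuation = np.array([0.112, 0.085, 0.060, 0.043])

        def corrected_dose(planned_dose, gantry_angle):
            """Scale the planned isocenter dose by an interpolated transmission factor."""
            loss = np.interp(gantry_angle, angles_deg, attenuation)
            return planned_dose * (1.0 - loss)

        print(corrected_dose(2.0, 150.0))  # Gy, hypothetical posterior-oblique beam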

  3. Predicting and Modeling RNA Architecture

    Science.gov (United States)

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  4. Multiple Steps Prediction with Nonlinear ARX Models

    OpenAIRE

    Zhang, Qinghua; Ljung, Lennart

    2007-01-01

    NLARX (NonLinear AutoRegressive with eXogenous inputs) models are frequently used in black-box nonlinear system identification. Though it is easy to make one-step-ahead predictions with such models, multiple-step prediction is far from trivial. The main difficulty is that in general there is no easy way to compute the mathematical expectation of an output conditioned on past measurements. An optimal solution would require intensive numerical computations related to nonlinear filtering. The pur...
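
    A common practical workaround, sketched below under the assumption that a one-step NLARX predictor is already available, is to iterate it recursively, feeding predictions back as regressors; as the abstract notes, this is not the optimal conditional expectation.

        import numpy as np

        def recursive_forecast(one_step_model, y_hist, u_future, n_lags):
            """Multi-step forecast by feeding one-step predictions back as regressors.

            one_step_model(y_lags, u) returns the next output given the last n_lags outputs.
            """
            y = list(y_hist[-n_lags:])
            forecasts = []
            for u in u_future:
                y_next = one_step_model(np.array(y[-n_lags:]), u)
                forecasts.append(y_next)
                y.append(y_next)
            return np.array(forecasts)

        # Toy nonlinear one-step predictor (illustrative only).
        toy = lambda y_lags, u: 0.6 * y_lags[-1] - 0.2 * y_lags[-2] ** 2 + 0.5 * u
        print(recursive_forecast(toy, y_hist=[0.1, 0.3], u_future=[1.0, 1.0, 0.0], n_lags=2))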

  5. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are more or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.

  6. Balancing of a rigid rotor using artificial neural network to predict the correction masses - DOI: 10.4025/actascitechnol.v31i2.3912

    Directory of Open Access Journals (Sweden)

    Fábio Lúcio Santos

    2009-06-01

    Full Text Available This paper deals with an analytical model of a rigid rotor supported by hydrodynamic journal bearings, where the plane separation technique together with an Artificial Neural Network (ANN) is used to predict the location and magnitude of the correction masses for balancing the rotor-bearing system. The rotating system is modeled by applying the rigid-shaft Stodola-Green model, in which the shaft gyroscopic moments and rotatory inertia are accounted for, in conjunction with the hydrodynamic cylindrical journal bearing model based on the classical Reynolds equation. A linearized perturbation procedure is employed to render the lubrication equations from the Reynolds equation, which allows predicting the eight linear force coefficients associated with the bearing direct and cross-coupled stiffness and damping coefficients. The results show that the methodology presented is efficient for balancing rotor systems. This paper goes a step further in the monitoring process, since an Artificial Neural Network is normally used to predict, not to correct, the mass unbalance. The procedure presented can be used in the turbomachinery industry to balance rotating machinery that requires continuous inspection. Some simulated results are used in order to clarify the methodology presented.

  7. Bayesian based Prognostic Model for Predictive Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud; Sørensen, John Dalsgaard

    2018-01-01

    The operation and maintenance costs of offshore wind farms can be significantly reduced if existing corrective actions are performed as efficiently as possible and if future corrective actions are avoided by performing sufficient preventive actions. In this paper a prognostic model for degradation monitoring, fault prediction and predictive maintenance of offshore wind components is defined. The diagnostic model defined in this paper is based on degradation, remaining useful lifetime and hybrid inspection threshold models. The defined degradation model is based on an exponential distribution...

  8. Model complexity control for hydrologic prediction

    Science.gov (United States)

    Schoups, G.; van de Giesen, N. C.; Savenije, H. H. G.

    2008-12-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed. We compare three model complexity control methods for hydrologic prediction, namely, cross validation (CV), Akaike's information criterion (AIC), and structural risk minimization (SRM). Results show that simulation of water flow using non-physically-based models (polynomials in this case) leads to increasingly better calibration fits as the model complexity (polynomial order) increases. However, prediction uncertainty worsens for complex non-physically-based models because of overfitting of noisy data. Incorporation of physically based constraints into the model (e.g., storage-discharge relationship) effectively bounds prediction uncertainty, even as the number of parameters increases. The conclusion is that overparameterization and equifinality do not lead to a continued increase in prediction uncertainty, as long as models are constrained by such physical principles. Complexity control of hydrologic models reduces parameter equifinality and identifies the simplest model that adequately explains the data, thereby providing a means of hydrologic generalization and classification. SRM is a promising technique for this purpose, as it (1) provides analytic upper bounds on prediction uncertainty, hence avoiding the computational burden of CV, and (2) extends the applicability of classic methods such as AIC to finite data. The main hurdle in applying SRM is the need for an a priori estimation of the complexity of the hydrologic model, as measured by its Vapnik-Chernovenkis (VC) dimension. Further research is needed in this area.
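
    A small sketch of one of the complexity-control criteria discussed (AIC) applied to the polynomial example: the order is chosen by penalized in-sample fit under a Gaussian-residual assumption; the data and candidate orders are illustrative.

        import numpy as np

        def aic_for_order(x, y, order):
            """AIC of a least-squares polynomial fit, assuming Gaussian residuals."""
            coeffs = np.polyfit(x, y, order)
            resid = y - np.polyval(coeffs, x)
            n, k = len(y), order + 1
            return n * np.log(np.mean(resid ** 2)) + 2 * k

        rng = np.random.default_rng(2)
        x = np.linspace(0.0, 1.0, 60)
        y = 1.0 + 2.0 * x - 1.5 * x ** 2 + rng.normal(scale=0.1, size=x.size)
        best = min(range(1, 10), key=lambda p: aic_for_order(x, y, p))
        print("selected polynomial order:", best)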

  9. On-line core monitoring system based on buckling corrected modified one group model

    International Nuclear Information System (INIS)

    Freire, Fernando S.

    2011-01-01

    Nuclear power reactors require core monitoring during plant operation. To provide safe, clean and reliable operation, core conditions must be evaluated continuously. Currently, the reactor core monitoring process is carried out by nuclear code systems that, together with data from plant instrumentation such as thermocouples, ex-core detectors and fixed or moveable in-core detectors, can readily predict and monitor a variety of plant conditions. Typically, standard nodal methods lie at the heart of such nuclear monitoring code systems. However, standard nodal methods require long computer running times compared with standard coarse-mesh finite-difference schemes. Unfortunately, classic finite-difference models require a fine-mesh reactor core representation. To overcome this limitation, the classic modified one-group model can be used to account for the main core neutronic behavior. In this model a coarse-mesh core representation can be easily evaluated with a crude treatment of thermal neutron leakage. In this work, an improvement to the classic modified one-group model based on a buckling thermal correction was used to obtain a fast, accurate and reliable core monitoring methodology for future applications, providing a powerful tool for the core monitoring process. (author)

  10. Online Prediction under Model Uncertainty Via Dynamic Model Averaging: Application to a Cold Rolling Mill

    National Research Council Canada - National Science Library

    Raftery, Adrian E; Karny, Miroslav; Andrysek, Josef; Ettler, Pavel

    2007-01-01

    ... is. We develop a method called Dynamic Model Averaging (DMA) in which a state space model for the parameters of each model is combined with a Markov chain model for the correct model. This allows the (correct...
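
    A heavily simplified sketch of the model-probability recursion behind such dynamic averaging, assuming one-step predictive densities for each candidate model are already available; the full DMA method also runs a state-space filter per model, which is omitted here.

        import numpy as np

        def dma_weights(pred_densities, forgetting=0.95):
            """Recursively update model probabilities from one-step predictive densities.

            pred_densities has shape (T, n_models): p(y_t | model m, data up to t-1).
            """
            T, M = pred_densities.shape
            w = np.full(M, 1.0 / M)
            history = np.empty((T, M))
            for t in range(T):
                prior = w ** forgetting
                prior /= prior.sum()          # forgetting spreads weight back across models
                post = prior * pred_densities[t]
                w = post / post.sum()
                history[t] = w
            return history

        probs = dma_weights(np.random.default_rng(3).uniform(0.1, 1.0, size=(50, 4)))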

  11. Quantifying predictive accuracy in survival models.

    Science.gov (United States)

    Lirette, Seth T; Aban, Inmaculada

    2017-12-01

    For time-to-event outcomes in medical research, survival models are the most appropriate to use. Unlike logistic regression models, quantifying the predictive accuracy of these models is not a trivial task. We present the classes of concordance (C) statistics and R² statistics often used to assess the predictive ability of these models. The discussion focuses on Harrell's C, Kent and O'Quigley's R², and Royston and Sauerbrei's R². We present similarities and differences between the statistics, discuss the software options from the most widely used statistical analysis packages, and give a practical example using the Worcester Heart Attack Study dataset.
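
    A minimal sketch of Harrell's C for right-censored data: among usable pairs (the earlier time is an observed event), it counts how often the higher risk score goes with the shorter survival; the toy data are illustrative.

        import numpy as np

        def harrell_c(time, event, risk):
            """Harrell's concordance: higher risk should pair with shorter observed survival."""
            concordant, comparable = 0.0, 0
            n = len(time)
            for i in range(n):
                for j in range(n):
                    # A pair is usable only if subject i had the event before subject j's time.
                    if event[i] and time[i] < time[j]:
                        comparable += 1
                        if risk[i] > risk[j]:
                            concordant += 1
                        elif risk[i] == risk[j]:
                            concordant += 0.5
            return concordant / comparable

        time = np.array([5.0, 8.0, 3.0, 10.0])
        event = np.array([1, 0, 1, 1])
        risk = np.array([0.9, 0.2, 1.5, 0.1])
        print(harrell_c(time, event, risk))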

  12. Predictive power of nuclear-mass models

    Directory of Open Access Journals (Sweden)

    Yu. A. Litvinov

    2013-12-01

    Full Text Available Ten different theoretical models are tested for their predictive power in the description of nuclear masses. Two sets of experimental masses are used for the test: the older set of 2003 and the newer one of 2011. The predictive power is studied in two regions of nuclei: the global region (Z, N ≥ 8) and the heavy-nuclei region (Z ≥ 82, N ≥ 126). No clear correlation is found between the predictive power of a model and the accuracy of its description of the masses.
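
    The basic comparison behind such a test can be summarized by the RMS deviation between predicted and measured masses in a chosen region; the sketch below uses hypothetical mass-excess values, not the evaluated data sets from 2003 or 2011.

        import numpy as np

        def rms_deviation(predicted, measured):
            """Root-mean-square deviation between predicted and measured masses."""
            diff = np.asarray(predicted) - np.asarray(measured)
            return np.sqrt(np.mean(diff ** 2))

        # Hypothetical mass excesses (keV) for a handful of nuclides.
        measured = np.array([8071.3, 7289.0, 2424.9, 101.4])
        model_a = measured + np.array([150.0, -300.0, 80.0, 500.0])
        print("model A rms deviation (keV):", rms_deviation(model_a, measured))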

  13. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor allocates a much lower share of wealth to stocks compared to a standard investor.

  14. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years, improvements to prediction methods have not been very significant, and traditional statistical prediction methods suffer from low precision and poor interpretability: they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industry that can produce a large number of cargoes, and further predicts the static logistics generation of Zhuanghe and its hinterlands. By integrating various factors that can affect regional logistics requirements, this study established a logistics requirements potential model based on spatial economic principles, and expanded logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  15. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas has greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models which consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9% respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones
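
    As a hedged illustration of the pairwise-comparison (AHP) weighting mentioned above, the sketch below derives parameter weights from the principal eigenvector of a small reciprocal comparison matrix; the factor choices and judgments are hypothetical.

        import numpy as np

        def ahp_weights(pairwise):
            """Weights from the principal eigenvector of a reciprocal comparison matrix."""
            vals, vecs = np.linalg.eig(pairwise)
            principal = np.real(vecs[:, np.argmax(np.real(vals))])
            weights = principal / principal.sum()
            ci = (np.real(vals).max() - len(pairwise)) / (len(pairwise) - 1)
            return weights, ci / 0.58   # consistency ratio; 0.58 is the 3x3 random index

        # Hypothetical judgments for three factors: slope vs. lithology vs. land use.
        A = np.array([[1.0, 3.0, 5.0],
                      [1 / 3.0, 1.0, 2.0],
                      [1 / 5.0, 1 / 2.0, 1.0]])
        print(ahp_weights(A))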

  16. Using a Teaching Model To Correct Known Misconceptions in Electrochemistry.

    Science.gov (United States)

    Huddle, Penelope Ann; White, Margaret Dawn; Rogers, Fiona

    2000-01-01

    Describes a concrete teaching model designed to eliminate students' misconceptions about current flow in electrochemistry. The model uses a semi-permeable membrane rather than a salt bridge to complete the circuit and demonstrate the maintenance of cell neutrality. Concludes that use of the model led to improvement in students' understanding at…

  17. Robust Inference of Population Structure for Ancestry Prediction and Correction of Stratification in the Presence of Relatedness

    Science.gov (United States)

    Conomos, Matthew P.; Miller, Mike; Thornton, Timothy

    2016-01-01

    Population structure inference with genetic data has been motivated by a variety of applications in population genetics and genetic association studies. Several approaches have been proposed for the identification of genetic ancestry differences in samples where study participants are assumed to be unrelated, including principal components analysis (PCA), multi-dimensional scaling (MDS), and model-based methods for proportional ancestry estimation. Many genetic studies, however, include individuals with some degree of relatedness, and existing methods for inferring genetic ancestry fail in related samples. We present a method, PC-AiR, for robust population structure inference in the presence of known or cryptic relatedness. PC-AiR utilizes genome-screen data and an efficient algorithm to identify a diverse subset of unrelated individuals that is representative of all ancestries in the sample. The PC-AiR method directly performs PCA on the identified ancestry representative subset and then predicts components of variation for all remaining individuals based on genetic similarities. In simulation studies and in applications to real data from Phase III of the HapMap Project, we demonstrate that PC-AiR provides a substantial improvement over existing approaches for population structure inference in related samples. We also demonstrate significant efficiency gains, where a single axis of variation from PC-AiR provides better prediction of ancestry in a variety of structure settings than using ten (or more) components of variation from widely used PCA and MDS approaches. Finally, we illustrate that PC-AiR can provide improved population stratification correction over existing methods in genetic association studies with population structure and relatedness. PMID:25810074
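
    A simplified sketch of the two-stage idea: principal components are computed from an ancestry-representative unrelated subset and the remaining (possibly related) samples are projected onto those axes. The subset here is passed in by index; the actual PC-AiR method selects it using kinship and ancestry-divergence estimates, and the data below are synthetic.

        import numpy as np
        from sklearn.decomposition import PCA

        def pc_air_like(genotypes, unrelated_idx, n_components=10):
            """Fit PCs on unrelated samples only, then project all samples onto those axes."""
            pca = PCA(n_components=n_components)
            pca.fit(genotypes[unrelated_idx])   # ancestry axes from the unrelated subset
            return pca.transform(genotypes)     # coordinates for everyone, relatives included

        # Hypothetical centered genotype matrix (samples x variants) and unrelated subset.
        rng = np.random.default_rng(3)
        G = rng.normal(size=(200, 5000))
        scores = pc_air_like(G, unrelated_idx=np.arange(0, 200, 2), n_components=4)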

  18. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999. Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type. Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting-static or dynamic. CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  19. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting-static or dynamic. Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  20. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogenous or stratified soil. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines and other models applied to fast evaluation of structural … response during excitation and the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling...

  1. Update of the corrective model for Jason-1 DORIS data in relation to the South Atlantic Anomaly and a corrective model for SPOT-5

    Science.gov (United States)

    Capdeville, Hugues; Štěpánek, Petr; Hecker, Louis; Lemoine, Jean-Michel

    2016-12-01

    After recalling the principle of the Jason-1 data corrective model in relation to the South Atlantic Anomaly (SAA) developed by Lemoine and Capdeville (2006), we present a model update which takes into account the orbit changes and the recent DORIS data. We also propose a method for the International DORIS Service (IDS) Analysis Centers (ACs) to add DORIS Jason-1 data into the solutions they contribute to ITRF2014. When the Jason-1 satellite is added to the multi-satellite solution (its orbit inclination of 66° complements the polar-orbiting satellites), the stability of the geocenter Z-translation is improved (standard deviation of 11.5 mm against 16.5 mm). In a second part we take advantage of a high-energy particle dosimeter (CARMEN) on board Jason-2 to improve the corrective model of Jason-1. We completed a correlation study showing that the CARMEN >87 MeV integrated proton flux map averaged over the period 2009-2011 is the CARMEN energy band most coherent with the map obtained from Jason-1 DORIS measurements. The model based on the Jason-1 map and the one based on the CARMEN map are then compared in terms of orbit determination and station position estimation. We also derive and validate a SPOT-5 data corrective model. We determine the SAA grid at the altitude of SPOT-5 from the time derivative of the on-board frequency offsets and estimate the model parameters. We demonstrate the impact of the SPOT-5 data corrective model on the Precise Orbit Determination and the station position estimation from the weekly solutions, based on two individual Analysis Center solutions, GOP (Geodetic Observatory Pecny) and GRG (Groupe de Recherche de Géodésie Spatiale). The SPOT-5 data corrective model significantly improves the Precise Orbit Determination (reduction of 1.4% in 2013 of the RMS of the fit, reduction of 25% of the arc-overlap RMS in the normal direction) and the overall statistics of the station position estimation

  2. Status of standard model predictions and uncertainties for electroweak observables

    International Nuclear Information System (INIS)

    Kniehl, B.A.

    1993-11-01

    Recent progress in theoretical predictions of electroweak parameters beyond one loop in the standard model is reviewed. The topics include universal corrections of O(G_F^2 M_H^2 M_W^2), O(G_F^2 m_t^4), O(α_s G_F M_W^2), and those due to virtual t anti-t threshold effects, as well as specific corrections to Γ(Z → b anti-b) of O(G_F^2 m_t^4), O(α_s G_F m_t^2), and O(α_s^2 m_b^2/M_Z^2). An update of the hadronic contributions to Δα is presented. Theoretical uncertainties, other than those due to the lack of knowledge of M_H and m_t, are estimated. (orig.)

  3. Analytical model for relativistic corrections to the nuclear magnetic shielding constant in atoms

    International Nuclear Information System (INIS)

    Romero, Rodolfo H.; Gomez, Sergio S.

    2006-01-01

    We present a simple analytical model for calculating and rationalizing the main relativistic corrections to the nuclear magnetic shielding constant in atoms. It provides good estimates for those corrections and their trends, in reasonable agreement with accurate four-component calculations and perturbation methods. The origin of the effects in deep core atomic orbitals is manifestly shown

  4. Analytical model for relativistic corrections to the nuclear magnetic shielding constant in atoms

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Rodolfo H. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)]. E-mail: rhromero@exa.unne.edu.ar; Gomez, Sergio S. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)

    2006-04-24

    We present a simple analytical model for calculating and rationalizing the main relativistic corrections to the nuclear magnetic shielding constant in atoms. It provides good estimates for those corrections and their trends, in reasonable agreement with accurate four-component calculations and perturbation methods. The origin of the effects in deep core atomic orbitals is manifestly shown.

  5. Radiative corrections for semileptonic decays of hyperons: the 'model independent' part

    International Nuclear Information System (INIS)

    Toth, K.; Szegoe, K.; Margaritis, T.

    1984-04-01

    The 'model independent' part of the order α radiative correction due to virtual photon exchanges and inner bremsstrahlung is studied for semileptonic decays of hyperons. Numerical results of high accuracy are given for the relative correction to the branching ratio, the electron energy spectrum and the $(E_e, E_f)$ Dalitz distribution in the case of four different decays. (author)

  6. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of methodology validation. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real-world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine must be calculated in real time.
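
    The abstract mentions automatic model selection via the Akaike Information Criterion; the snippet below is a minimal sketch of that selection step only (the candidate models, their log-likelihoods and parameter counts are invented placeholders, not the authors' HSMM implementation):

```python
def aic(log_likelihood: float, n_params: float) -> float:
    """Akaike Information Criterion: lower values indicate a better trade-off
    between goodness of fit and model complexity."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical candidates, e.g. HSMMs with different numbers of hidden states.
# Each entry: (description, training log-likelihood, number of free parameters).
candidates = [
    ("3-state HSMM", -1520.4, 27),
    ("4-state HSMM", -1498.9, 44),
    ("5-state HSMM", -1495.1, 65),
]

for name, ll, k in candidates:
    print(f"{name}: AIC = {aic(ll, k):.1f}")

best = min(candidates, key=lambda c: aic(c[1], c[2]))
print("selected:", best[0])
```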

  7. Posterior predictive checking of multiple imputation models.

    Science.gov (United States)

    Nguyen, Cattram D; Lee, Katherine J; Carlin, John B

    2015-07-01

    Multiple imputation is gaining popularity as a strategy for handling missing data, but there is a scarcity of tools for checking imputation models, a critical step in model fitting. Posterior predictive checking (PPC) has been recommended as an imputation diagnostic. PPC involves simulating "replicated" data from the posterior predictive distribution of the model under scrutiny. Model fit is assessed by examining whether the analysis from the observed data appears typical of results obtained from the replicates produced by the model. A proposed diagnostic measure is the posterior predictive "p-value", an extreme value of which (i.e., a value close to 0 or 1) suggests a misfit between the model and the data. The aim of this study was to evaluate the performance of the posterior predictive p-value as an imputation diagnostic. Using simulation methods, we deliberately misspecified imputation models to determine whether posterior predictive p-values were effective in identifying these problems. When estimating the regression parameter of interest, we found that more extreme p-values were associated with poorer imputation model performance, although the results highlighted that traditional thresholds for classical p-values do not apply in this context. A shortcoming of the PPC method was its reduced ability to detect misspecified models with increasing amounts of missing data. Despite the limitations of posterior predictive p-values, they appear to have a valuable place in the imputer's toolkit. In addition to automated checking using p-values, we recommend imputers perform graphical checks and examine other summaries of the test quantity distribution. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
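
    As a concrete illustration of the posterior predictive p-value described above, the sketch below simulates replicated datasets from a fitted model and compares a test quantity computed on the observed data with its distribution over the replicates (the normal model and the skewness statistic are illustrative assumptions, not the imputation models examined in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data and a fitted model (illustrative: a normal model with estimated mean/sd).
observed = rng.normal(loc=1.0, scale=2.0, size=200)
mu_hat, sigma_hat = observed.mean(), observed.std(ddof=1)

def test_quantity(x):
    """Discrepancy measure; here, sample skewness as a simple example."""
    z = (x - x.mean()) / x.std(ddof=1)
    return np.mean(z ** 3)

# Draw replicated datasets from the (approximate) posterior predictive distribution.
n_rep = 1000
rep_stats = np.empty(n_rep)
for i in range(n_rep):
    replicated = rng.normal(mu_hat, sigma_hat, size=observed.size)
    rep_stats[i] = test_quantity(replicated)

# Posterior predictive p-value: proportion of replicates at least as extreme as observed.
ppp = np.mean(rep_stats >= test_quantity(observed))
print(f"posterior predictive p-value ~ {ppp:.3f}")  # values near 0 or 1 suggest misfit
```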

  8. Bayesian Correction for Misclassification in Multilevel Count Data Models

    Directory of Open Access Journals (Sweden)

    Tyler Nelson

    2018-01-01

    Full Text Available Covariate misclassification is well known to yield biased estimates in single level regression models. The impact on hierarchical count models has been less studied. A fully Bayesian approach to modeling both the misclassified covariate and the hierarchical response is proposed. Models with a single diagnostic test and with multiple diagnostic tests are considered. Simulation studies show the ability of the proposed model to appropriately account for the misclassification by reducing bias and improving performance of interval estimators. A real data example further demonstrated the consequences of ignoring the misclassification. Ignoring misclassification yielded a model that indicated there was a significant, positive impact on the number of children of females who observed spousal abuse between their parents. When the misclassification was accounted for, the relationship switched to negative, but not significant. Ignoring misclassification in standard linear and generalized linear models is well known to lead to biased results. We provide an approach to extend misclassification modeling to the important area of hierarchical generalized linear models.

  9. Prediction of e± elastic scattering cross-section ratio based on phenomenological two-photon exchange corrections

    Science.gov (United States)

    Qattan, I. A.

    2017-06-01

    I present a prediction of the $e^{\pm}$ elastic scattering cross-section ratio, $R_{e^+e^-}$, as determined using a new parametrization of the two-photon exchange (TPE) corrections to the electron-proton elastic scattering cross section $\sigma_R$. The extracted ratio is compared to several previous phenomenological extractions, TPE hadronic calculations, and direct measurements from the comparison of electron and positron scattering. The TPE corrections and the ratio $R_{e^+e^-}$ show a clear change of sign at low $Q^2$, which is necessary to explain the high-$Q^2$ form factor discrepancy while being consistent with the known $Q^2 \to 0$ limit. While my predictions are generally in good agreement with previous extractions, TPE hadronic calculations, and existing world data, including the recent two measurements from the CLAS and VEPP-3 Novosibirsk experiments, they are larger than the new OLYMPUS measurements at larger $Q^2$ values.
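
    For orientation, under a common sign convention the leading-order link between the TPE correction $\delta_{2\gamma}$ (defined for electron scattering) and the positron-to-electron ratio is the standard relation (not quoted from this paper):

    $$R_{e^+e^-} = \frac{\sigma_{e^+p}}{\sigma_{e^-p}} \approx \frac{1 - \delta_{2\gamma}}{1 + \delta_{2\gamma}} \approx 1 - 2\,\delta_{2\gamma},$$

    so a sign change of $\delta_{2\gamma}$ at low $Q^2$ appears directly as $R_{e^+e^-}$ crossing unity.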

  10. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...... in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance....
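
    A minimal sketch of the kind of first-order Markov classification described above, assuming made-up training sequences and three structural states (this illustrates the general idea only, not the authors' implementation, which also incorporates biological background knowledge):

```python
import numpy as np

STATES = ["H", "E", "C"]              # helix, sheet, coil
AA = "ACDEFGHIKLMNPQRSTVWY"           # the 20 standard amino acids

def train_markov(sequences, labels, pseudo=1.0):
    """First-order Markov transition log-probabilities P(aa_i | aa_{i-1}) per state."""
    counts = {s: {a: {b: pseudo for b in AA} for a in AA} for s in STATES}
    for seq, lab in zip(sequences, labels):
        for i in range(1, len(seq)):
            if lab[i] == lab[i - 1]:                  # transition within one structural element
                counts[lab[i]][seq[i - 1]][seq[i]] += 1
    logp = {}
    for s in STATES:
        logp[s] = {}
        for a in AA:
            total = sum(counts[s][a].values())
            logp[s][a] = {b: np.log(counts[s][a][b] / total) for b in AA}
    return logp

def classify_window(window, logp):
    """Label a residue window with the state whose Markov chain explains it best."""
    scores = {s: sum(logp[s][window[i - 1]][window[i]] for i in range(1, len(window)))
              for s in STATES}
    return max(scores, key=scores.get)

# Toy usage with made-up sequences and labels.
seqs = ["MKVLAAGIV", "GTSDEQRKL"]
labs = ["HHHHHEEEC", "CCEEEHHHC"]
model = train_markov(seqs, labs)
print(classify_window("AAGI", model))
```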

  11. Charge-burping theory correctly predicts optimal ratios of phase duration for biphasic defibrillation waveforms.

    Science.gov (United States)

    Swerdlow, C D; Fan, W; Brewer, J E

    1996-11-01

    For biphasic waveforms, it is accepted that the ratio of the duration of phase 2 to the duration of phase 1 (phase-duration ratio) should be ≤ 1. Charge-burping theory postulates that the beneficial effects of phase 2 are maximal when it completely removes the charge delivered by phase 1. It predicts that the phase-duration ratio should be < 1 when the time constant of the defibrillation system (tau s) exceeds the time constant of the cell membrane (tau m) but > 1 when tau s < tau m (tau s is the product of defibrillator capacitance and pathway resistance). In a canine model of transvenous defibrillation (n = 8), we determined stored-energy defibrillation thresholds (DFTs) for biphasic waveforms from conventional capacitors (140 microF, tau s = 7.1 +/- 0.8 ms) and very small capacitors (40 microF, tau s = 2.0 +/- 0.2 ms). Each capacitance was tested with phase-duration ratios of 0.5, 1, 2, and 3. The duration of phase 1 approximated the optimal monophasic waveform, 6.3 +/- 0.7 ms for 140-microF waveforms and 2.8 +/- 0.2 ms for 40-microF waveforms. For 140-microF waveforms, the DFT was lower for phase-duration ratios ≤ 1 than for ratios > 1 (P = .0003). The reverse was true for 40-microF capacitors (P = .0008). There was a significant interaction between the effects of capacitance and phase-duration ratio on DFT (P = .0002). The lowest DFT for 40-microF waveforms was less than the lowest DFT for 140-microF waveforms (4.9 +/- 2.5 versus 6.4 +/- 2.4 J), and the optimal phase-duration ratio was > 1 for the small capacitors. This supports the predictions of the charge-burping theory.
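
    For orientation, the system time constant quoted above is the product mentioned in the abstract, $\tau_s = RC$. Assuming a transvenous pathway resistance of roughly 50 Ω (an illustrative value, not reported in the abstract), a 140 µF capacitor gives $\tau_s \approx 50\ \Omega \times 140\ \mu\mathrm{F} = 7\ \mathrm{ms}$, consistent with the 7.1 ± 0.8 ms quoted, while a 40 µF capacitor gives about 2 ms.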

  12. Improving salt marsh digital elevation model accuracy with full-waveform lidar and nonparametric predictive modeling

    Science.gov (United States)

    Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.

    2018-03-01

    Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, which are computed from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline and other lidar digital elevation model based variables, using nonparametric regression will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image based remote sensing data such as multi/hyperspectral imagery.
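
    A minimal sketch of the point-by-point, nonparametric correction strategy described above, using scikit-learn's gradient boosting in place of the TreeNet implementation used in the study; the predictors and data below are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000

# Hypothetical per-point predictors: lidar DEM elevation, waveform-derived pulse width,
# return intensity, and distance from shoreline.
X = np.column_stack([
    rng.normal(1.0, 0.3, n),    # lidar elevation (m, tidal datum)
    rng.gamma(2.0, 1.5, n),     # waveform pulse width (ns)
    rng.uniform(0, 255, n),     # return intensity
    rng.uniform(0, 500, n),     # distance from shoreline (m)
])
# Ground-truth marsh surface elevation (synthetic stand-in for RTK-GPS survey points).
y = X[:, 0] - 0.1 - 0.02 * X[:, 1] + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_tr, y_tr)

resid = model.predict(X_te) - y_te
print(f"bias = {resid.mean():.3f} m, RMSE = {np.sqrt((resid**2).mean()):.3f} m")
```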

  13. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models.

  14. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If companies are aware of the potential for bankruptcy, they can take preventive action to anticipate it. In order to detect the potential for bankruptcy, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. According to the comparative study of the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.

  15. Standard Model-like corrections to Dilatonic Dynamics

    DEFF Research Database (Denmark)

    Antipin, Oleg; Krog, Jens; Mølgaard, Esben

    2013-01-01

    We examine the effects of standard model-like interactions on the near-conformal dynamics of a theory featuring a dilatonic state identified with the standard model-like Higgs. As template for near-conformal dynamics we use a gauge theory with fermionic matter and elementary mesons possessing...... conformal dynamics could accommodate the observed Higgs-like properties....

  16. Observation-based correction of dynamical models using thermostats

    NARCIS (Netherlands)

    Myerscough, Keith W.; Frank, Jason; Leimkuhler, Benedict

    2017-01-01

    Models used in simulation may give accurate short-term trajectories but distort long-term (statistical) properties. In this work, we augment a given approximate model with a control law (a 'thermostat') that gently perturbs the dynamical system to target a thermodynamic state consistent with a set of

  17. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies on prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
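
    As an illustration of the simple averaging baseline mentioned above, the sketch below predicts the next day's 15-minute consumption profile as the interval-wise mean of the preceding days (the data and window length are invented assumptions):

```python
import numpy as np

# Hypothetical consumption history: rows = days, columns = 96 fifteen-minute intervals.
rng = np.random.default_rng(2)
base_profile = 50 + 10 * np.sin(np.linspace(0, 2 * np.pi, 96))
history = base_profile + rng.normal(0, 2, size=(10, 96))

def averaging_forecast(history: np.ndarray, window_days: int = 5) -> np.ndarray:
    """Predict tomorrow's profile as the mean of the last `window_days` days,
    interval by interval (a common baseline for very-short-term load forecasting)."""
    return history[-window_days:].mean(axis=0)

forecast = averaging_forecast(history)
actual = base_profile + rng.normal(0, 2, 96)
mape = np.mean(np.abs((forecast - actual) / actual)) * 100
print(f"next-day forecast MAPE ~ {mape:.1f}%")
```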

  18. Forecasting inter-urban transport demand for a logistics company: A combined grey–periodic extension model with remnant correction

    Directory of Open Access Journals (Sweden)

    Donghui Wang

    2015-12-01

    Full Text Available Accurately predicting short-term transport demand for an individual logistics company involved in a competitive market is critical to make short-term operation decisions. This article proposes a combined grey–periodic extension model with remnant correction to forecast the short-term inter-urban transport demand of a logistics company involved in a nationwide competitive market, showing changes in trend and seasonal fluctuations with irregular periods different to the macroeconomic cycle. A basic grey–periodic extension model of an additive pattern, namely, the main combination model, is first constructed to fit the changing trends and the featured seasonal fluctuation periods. In order to improve prediction accuracy and model adaptability, the grey model is repeatedly modelled to fit the remnant tail time series of the main combination model until prediction accuracy is satisfied. The modelling approach is applied to a logistics company engaged in a nationwide less-than-truckload road transportation business in China. The results demonstrate that the proposed modelling approach produces good forecasting results and goodness of fit, also showing good model adaptability to the analysed object in a changing macro environment. This fact makes this modelling approach an option to analyse the short-term transportation demand of an individual logistics company.
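
    The grey component of such combination models is typically a GM(1,1) forecaster; the sketch below shows that building block on synthetic data (a generic GM(1,1), not the authors' full grey-periodic extension model with remnant correction):

```python
import numpy as np

def gm11_forecast(x: np.ndarray, steps: int = 4) -> np.ndarray:
    """Classic GM(1,1) grey forecast of the next `steps` values of a short series."""
    n = len(x)
    x1 = np.cumsum(x)                               # accumulated generating operation (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])                    # background values
    B = np.column_stack([-z, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0] # develop coefficient a, grey input b
    k = np.arange(n + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    x_hat = np.diff(x1_hat, prepend=x1_hat[0])      # inverse AGO
    x_hat[0] = x[0]
    return x_hat[n:]

demand = np.array([820., 853., 901., 957., 1010., 1068.])   # illustrative monthly volumes
print(gm11_forecast(demand, steps=3))
```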

  19. Are animal models predictive for humans?

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2009-01-01

    Full Text Available Abstract It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics.

  20. Automation of electroweak NLO corrections in general models

    Energy Technology Data Exchange (ETDEWEB)

    Lang, Jean-Nicolas [Universitaet Wuerzburg (Germany)

    2016-07-01

    I discuss the automation of generation of scattering amplitudes in general quantum field theories at next-to-leading order in perturbation theory. The work is based on Recola, a highly efficient one-loop amplitude generator for the Standard Model, which I have extended so that it can deal with general quantum field theories. Internally, Recola computes off-shell currents and for new models new rules for off-shell currents emerge which are derived from the Feynman rules. My work relies on the UFO format which can be obtained by a suited model builder, e.g. FeynRules. I have developed tools to derive the necessary counterterm structures and to perform the renormalization within Recola in an automated way. I describe the procedure using the example of the two-Higgs-doublet model.

  1. Nonlinear mixed-effects modeling: individualization and prediction.

    Science.gov (United States)

    Olofsen, Erik; Dinges, David F; Van Dongen, Hans P A

    2004-03-01

    The development of biomathematical models for the prediction of fatigue and performance relies on statistical techniques to analyze experimental data and model simulations. Statistical models of empirical data have adjustable parameters with a priori unknown values. Interindividual variability in estimates of those values requires a form of smoothing. This traditionally consists of averaging observations across subjects, or fitting a model to the data of individual subjects first and subsequently averaging the parameter estimates. However, the standard errors of the parameter estimates are assessed inaccurately by such averaging methods. The reason is that intra- and inter-individual variabilities are intertwined. They can be separated by mixed-effects modeling in which model predictions are not only determined by fixed effects (usually constant parameters or functions of time) but also by random effects, describing the sampling of subject-specific parameter values from probability distributions. By estimating the parameters of the distributions of the random effects, mixed-effects models can describe experimental observations involving multiple subjects properly (i.e., yielding correct estimates of the standard errors) and parsimoniously (i.e., estimating no more parameters than necessary). Using a Bayesian approach, mixed-effects models can be "individualized" as observations are acquired that capture the unique characteristics of the individual at hand. Mixed-effects models, therefore, have unique advantages in research on human neurobehavioral functions, which frequently show large inter-individual differences. To illustrate this we analyzed laboratory neurobehavioral performance data acquired during sleep deprivation, using a nonlinear mixed-effects model. The results serve to demonstrate the usefulness of mixed-effects modeling for data-driven development of individualized predictive models of fatigue and performance.

  2. Model predictive controller design of hydrocracker reactors

    OpenAIRE

    GÖKÇE, Dila

    2014-01-01

    This study summarizes the design of a Model Predictive Controller (MPC) for the Hydrocracker Unit reactors at the Tüpraş İzmit Refinery. The hydrocracking process, in which heavy vacuum gas oil is converted into lighter, more valuable products at high temperature and pressure, is described briefly. The controller design, identification and modeling studies are examined and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulated...

  3. Effect of misreported family history on Mendelian mutation prediction models.

    Science.gov (United States)

    Katki, Hormuzd A

    2006-06-01

    People with familial history of disease often consult with genetic counselors about their chance of carrying mutations that increase disease risk. To aid them, genetic counselors use Mendelian models that predict whether the person carries deleterious mutations based on their reported family history. Such models rely on accurate reporting of each member's diagnosis and age of diagnosis, but this information may be inaccurate. Commonly encountered errors in family history can significantly distort predictions, and thus can alter the clinical management of people undergoing counseling, screening, or genetic testing. We derive general results about the distortion in the carrier probability estimate caused by misreported diagnoses in relatives. We show that the Bayes factor that channels all family history information has a convenient and intuitive interpretation. We focus on the ratio of the carrier odds given correct diagnosis versus given misreported diagnosis to measure the impact of errors. We derive the general form of this ratio and approximate it in realistic cases. Misreported age of diagnosis usually causes less distortion than misreported diagnosis. This is the first systematic quantitative assessment of the effect of misreported family history on mutation prediction. We apply the results to the BRCAPRO model, which predicts the risk of carrying a mutation in the breast and ovarian cancer genes BRCA1 and BRCA2.
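
    To make the role of the Bayes factor explicit, the carrier-probability computation described above follows standard Bayes-factor algebra (generic notation, not copied from the paper):

    $$\underbrace{\frac{P(\text{carrier}\mid H)}{P(\text{non-carrier}\mid H)}}_{\text{posterior odds}} \;=\; \underbrace{\frac{P(H\mid \text{carrier})}{P(H\mid \text{non-carrier})}}_{\text{Bayes factor from family history } H} \;\times\; \underbrace{\frac{P(\text{carrier})}{P(\text{non-carrier})}}_{\text{prior odds}},$$

    and the distortion measure studied in the paper is the ratio of the left-hand side computed with the correctly reported history to that computed with the misreported history.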

  4. Predictive Modelling of Contagious Deforestation in the Brazilian Amazon

    Science.gov (United States)

    Rosa, Isabel M. D.; Purves, Drew; Souza, Carlos; Ewers, Robert M.

    2013-01-01

    Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the magnitude and geographical distribution of future tropical deforestation are uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges “bottom up”, as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated (pre- and post-PPCDAM, “Plano de Ação para Proteção e Controle do Desmatamento na Amazônia”), the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future, and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM conditions (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve by 2050 compared with 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is

  5. Predictive modelling of contagious deforestation in the Brazilian Amazon.

    Science.gov (United States)

    Rosa, Isabel M D; Purves, Drew; Souza, Carlos; Ewers, Robert M

    2013-01-01

    Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the magnitude and geographical distribution of future tropical deforestation are uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated (pre- and post-PPCDAM, "Plano de Ação para Proteção e Controle do Desmatamento na Amazônia"), the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future, and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM conditions (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve by 2050 compared with 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is currently
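
    A toy illustration of the "contagious" local deforestation mechanism described in the two records above: each forested cell's annual clearing probability increases with the number of already-deforested neighbours (the grid, probabilities and seeding are purely illustrative, not the calibrated model):

```python
import numpy as np

rng = np.random.default_rng(3)
size, years = 100, 20
forest = np.ones((size, size), dtype=bool)          # True = forested
forest[size // 2, size // 2] = False                # initial clearing, e.g. near a road

base_p, contagion = 0.001, 0.03                     # illustrative annual probabilities

for _ in range(years):
    cleared = ~forest
    # Count deforested neighbours (von Neumann neighbourhood) for every cell.
    neighbours = (np.roll(cleared, 1, 0).astype(int) + np.roll(cleared, -1, 0)
                  + np.roll(cleared, 1, 1) + np.roll(cleared, -1, 1))
    p_clear = base_p + contagion * neighbours
    forest &= rng.random((size, size)) >= p_clear    # keep cells that survive this year

print(f"forest remaining after {years} years: {forest.mean():.1%}")
```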

  6. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  7. Corrected electrostatic model for dipoles adsorbed on a metal surface

    Energy Technology Data Exchange (ETDEWEB)

    Maschhoff, B.L.; Cowin, J.P. (Enviornmental and Molecular Science Laboratory, Pacific Northwest Laboratories Box 999 MS K2-14, Richland, Washington 99352 (United States))

    1994-11-01

    We present a dipole-dipole interaction model for polar molecules vertically adsorbed on an idealized metal surface, in an approximate analytic form suitable for estimating the coverage dependence of the work function, binding energies, and thermal desorption activation energies. In contrast to previous treatments, we have included all contributions to the interaction energy within the dipole model, such as the internal polarization energy and the coverage dependence of the self-image interaction with the metal. We show that these can contribute significantly to the total interaction energy. We present formulae for both the point and extended dipole cases.

  8. Thermodynamic modeling of activity coefficient and prediction of solubility: Part 1. Predictive models.

    Science.gov (United States)

    Mirmehrabi, Mahmoud; Rohani, Sohrab; Perry, Luisa

    2006-04-01

    A new activity coefficient model was developed from the excess Gibbs free energy in the form $G^{ex} = c A^{a} x_1^{b} \cdots x_n^{b}$. The constants of the proposed model were considered to be functions of the solute and solvent dielectric constants, Hildebrand solubility parameters and specific volumes of the solute and solvent molecules. The proposed model obeys the Gibbs-Duhem condition for activity coefficient models. To generalize the model and make it purely predictive, without any adjustable parameters, its constants were found using the experimental activity coefficients and physical properties of 20 vapor-liquid systems. The predictive capability of the proposed model was tested by calculating the activity coefficients of 41 binary vapor-liquid equilibrium systems; it showed good agreement with the experimental data in comparison with two other predictive models, the UNIFAC and Hildebrand models. The only data used for the prediction of activity coefficients were dielectric constants, Hildebrand solubility parameters, and specific volumes of the solute and solvent molecules. Furthermore, the proposed model was used to predict the activity coefficient of an organic compound, stearic acid, whose physical properties were available, in methanol and 2-butanone. The predicted activity coefficient along with the thermal properties of stearic acid was used to calculate the solubility of stearic acid in these two solvents and resulted in better agreement with the experimental data compared to the UNIFAC and Hildebrand predictive models.

  9. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants from the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making extremely complex the process of choosing a particular model. This research aim to compare the predictive capacity, using the Model Confidence Set procedure, than five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series which were used refers to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidences showed that, in general, competing models have a great homogeneity to make predictions, either for a stock market of a developed country or for a stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.

  10. Next-to-leading order corrections to the valon model

    Indian Academy of Sciences (India)

    To obtain the proton structure function in the valon model with respect to the Laguerre polynomials, one needs an elegant and fast numerical method at LO up to NLO. Therefore, we concentrate on the Laguerre polynomials in our determinations. In the Laguerre method [13,14], the Laguerre polynomials are defined as.
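
    The record is truncated at this point; for reference, the Laguerre polynomials it refers to are conventionally defined by the Rodrigues formula (a textbook definition, not necessarily the exact convention of refs [13,14]):

    $$L_n(x) = \frac{e^{x}}{n!}\frac{d^{n}}{dx^{n}}\left(x^{n}e^{-x}\right) = \sum_{k=0}^{n}\binom{n}{k}\frac{(-x)^{k}}{k!},$$

    which are orthogonal on $[0,\infty)$ with weight $e^{-x}$.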

  11. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    Science.gov (United States)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2016-06-01

    Additive Manufacturing (AM) is a swift manufacturing process in which input data can be provided from various sources such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and 3D scanner data. From CT/MRI data, Biomedical Additive Manufacturing (Bio-AM) models can be manufactured. The Bio-AM model gives a better lead on preplanning of oral and maxillofacial surgery. However, manufacturing an accurate Bio-AM model is one of the unsolved problems. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible and finds the correction factor for Bio-AM models built with the Fused Deposition Modelling (FDM) technique. In the present work, dry mandible CT images are acquired by a CT scanner and converted into a 3D CAD model in the form of an STL model. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is considered as the dimensional error, and the ratio of STL to Bio-AM model dimensions is considered as the correction factor. This correction factor helps to fabricate AM models with accurate dimensions of the patient anatomy. Such true-dimensional Bio-AM models increase the safety and accuracy in pre-planning of oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM machine is 1.003 and the dimensional error is limited to 0.3 %.
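
    As a worked illustration of the numbers reported above (using a made-up feature size, not a measurement from the paper): with the correction factor defined as $f = d_{\mathrm{STL}}/d_{\mathrm{Bio\text{-}AM}} = 1.003$, a feature designed at 40.00 mm would be fabricated at roughly $40.00/1.003 \approx 39.88$ mm, i.e. about 0.3 % undersize, so pre-scaling the STL model by 1.003 compensates for the deviation.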

  12. A revised prediction model for natural conception.

    Science.gov (United States)

    Bensdorp, Alexandra J; van der Steeg, Jan Willem; Steures, Pieternel; Habbema, J Dik F; Hompes, Peter G A; Bossuyt, Patrick M M; van der Veen, Fulco; Mol, Ben W J; Eijkemans, Marinus J C

    2017-06-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis was to assess whether additional predictors can refine the Hunault model and extend its applicability. Consecutive subfertile couples with unexplained and mild male subfertility presenting in fertility clinics were asked to participate in a prospective cohort study. We constructed a multivariable prediction model with the predictors from the Hunault model and new potential predictors. The primary outcome, natural conception leading to an ongoing pregnancy, was observed in 1053 women of the 5184 included couples (20%). All predictors of the Hunault model were selected into the revised model, plus an additional seven (woman's body mass index, cycle length, basal FSH levels, tubal status, history of previous pregnancies in the current relationship (ongoing pregnancies after natural conception, fertility treatment or miscarriages), semen volume, and semen morphology). Predictions from the revised model seem to concur better with observed pregnancy rates compared with the Hunault model; c-statistic of 0.71 (95% CI 0.69 to 0.73) compared with 0.59 (95% CI 0.57 to 0.61). Copyright © 2017. Published by Elsevier Ltd.

  13. Investigation of turbulence models with compressibility corrections for hypersonic boundary flows

    Directory of Open Access Journals (Sweden)

    Han Tang

    2015-12-01

    Full Text Available The applications of pressure work, pressure-dilatation, and dilatation-dissipation (Sarkar, Zeman, and Wilcox) models to hypersonic boundary flows are investigated. Flat plate boundary layer flows at Mach numbers 5-11 and shock wave/boundary layer interactions at compression corners are simulated numerically. For the flat plate boundary layer flows, the original turbulence models overestimate the heat flux at Mach numbers up to 10, and compressibility corrections applied to the turbulence models lead to a decrease in friction coefficients and heating rates. The pressure work and pressure-dilatation models yield the better results. Among the three dilatation-dissipation models, the Sarkar and Wilcox corrections show larger deviations from the experimental measurements, while the Zeman correction achieves acceptable results. For hypersonic compression corner flows, due to the evident increase of the turbulence Mach number in the separation zone, compressibility corrections make the separation areas larger and thus cannot improve the accuracy of the calculated results; it is unreasonable for compressibility corrections to take effect in the separation zone. The density-corrected model by Catris and Aupoix is suitable for shock wave/boundary layer interaction flows: it improves the simulation accuracy of the peak heating and has little influence on the separation zone.

  14. Validating predictions from climate envelope models

    Science.gov (United States)

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.

  15. Validating predictions from climate envelope models.

    Directory of Open Access Journals (Sweden)

    James I Watling

    Full Text Available Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species' distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967-1971 (t1) and evaluated using occurrence data from 1998-2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on

  16. Correction factor to account for dispersion in sharp-interface models of terrestrial freshwater lenses and active seawater intrusion

    Science.gov (United States)

    Werner, Adrian D.

    2017-04-01

    In this paper, a recent analytical solution that describes the steady-state extent of freshwater lenses adjacent to gaining rivers in saline aquifers is improved by applying an empirical correction for dispersive effects. Coastal aquifers experiencing active seawater intrusion (i.e., seawater is flowing inland) are presented as an analogous situation to the terrestrial freshwater lens problem, although the inland boundary in the coastal aquifer situation must represent both a source of freshwater and an outlet of saline groundwater. This condition corresponds to the freshwater river in the terrestrial case. The empirical correction developed in this research applies to situations of flowing saltwater and static freshwater lenses, although freshwater recirculation within the lens is a prominent consequence of dispersive effects, just as seawater recirculates within the stable wedges of coastal aquifers. The correction is a modification of a previous dispersive correction for Ghyben-Herzberg approximations of seawater intrusion (i.e., stable seawater wedges). Comparison between the sharp interface from the modified analytical solution and the 50% saltwater concentration from numerical modelling, using a range of parameter combinations, demonstrates the applicability of both the original analytical solution and its corrected form. The dispersive correction allows for a prediction of the depth to the middle of the mixing zone within about 0.3 m of numerically derived values, at least on average for the cases considered here. It is demonstrated that the uncorrected form of the analytical solution should be used to calculate saltwater flow rates, which closely match those obtained through numerical simulation. Thus, a combination of the unmodified and corrected analytical solutions should be utilized to explore both the saltwater fluxes and lens extent, depending on the dispersiveness of the problem. The new method developed in this paper is simple to apply and offers a

  17. Predicted Versus Attained Surgical Correction of Maxillary Advancement Surgery Using Cone Beam Computed Tomography

    Science.gov (United States)

    2016-07-01

    and final soft tissue matrices, and discrepancies were measured using Geomagic Studio®. A panel of orthodontists then subjectively assessed the accuracy of the predictions using a visual analog scale. Results: Only 31% of predicted landmarks fell within 2 mm of the actual result. The most ... (Figure 3-9: Soft Tissue Scans in Geomagic Studio)

  18. New model for burnout prediction in channels of various cross-section

    Energy Technology Data Exchange (ETDEWEB)

    Bobkov, V.P.; Kozina, N.V.; Vinogrado, V.N.; Zyatnina, O.A. [Institute of Physics and Power Engineering, Kaluga (Russian Federation)

    1995-09-01

    The model developed to predict the critical heat flux (CHF) in channels of various cross-section is presented together with the results of a data analysis. The model is a realization of a relative method of describing CHF, based on data for the round tube and on a system of correction factors. The data descriptions presented here are for rectangular and triangular channels, annuli and rod bundles.

  19. Yukawa corrections from PGBs in OGTC model to the process γγ→bb-bar

    International Nuclear Information System (INIS)

    Huang Jinshu; Song Taiping; Song Haizhen; Lu gongru

    2000-01-01

    The Yukawa corrections from the pseudo-Goldstone bosons (PGBs) in the one generation technicolor (OGTC) model to the process $\gamma\gamma \to b\bar b$ are calculated. The authors find that the corrections from the PGBs to the cross section of $\gamma\gamma \to b\bar b$ exceed 10% in a certain region of parameter values. The maximum relative correction to the process $e^+e^- \to \gamma\gamma \to b\bar b$ may reach -51% in the laser back-scattering photon mode, and is only -17.9% in the beamstrahlung photon mode. These corrections are considerably larger than the contributions from the relevant particles in the standard model and the supersymmetric model, and can be considered a signature of technicolor at next-generation high-energy photon colliders.

  20. Spherical aberration correction with an in-lens N-fold symmetric line currents model.

    Science.gov (United States)

    Hoque, Shahedul; Ito, Hiroyuki; Nishi, Ryuji

    2018-04-01

    In our previous works, we have proposed N-SYLC (N-fold symmetric line currents) models for aberration correction. In this paper, we propose "in-lens N-SYLC" model, where N-SYLC overlaps rotationally symmetric lens. Such overlap is possible because N-SYLC is free of magnetic materials. We analytically prove that, if certain parameters of the model are optimized, an in-lens 3-SYLC (N = 3) doublet can correct 3rd order spherical aberration. By computer simulation, we show that the required excitation current for correction is less than 0.25 AT for beam energy 5 keV, and the beam size after correction is smaller than 1 nm at the corrector image plane for initial slope less than 4 mrad. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics during the examination of the robustness of the predictive power of these factors.
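
    A minimal sketch of the logistic-regression credit-scoring setup described above; the predictors and synthetic default data are illustrative assumptions, not the study's TCRI dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000

# Hypothetical firm-level and macro predictors: credit-risk grade, asset growth,
# stock-index return, GDP growth.
X = np.column_stack([
    rng.integers(1, 10, n),          # TCRI-like risk grade
    rng.normal(0.05, 0.2, n),        # asset growth rate
    rng.normal(0.02, 0.15, n),       # stock index return
    rng.normal(0.03, 0.02, n),       # GDP growth
])
# Synthetic default indicator generated from a known logistic relationship.
logit = -3.0 + 0.45 * X[:, 0] - 1.2 * X[:, 1] - 0.8 * X[:, 3]
default = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, default, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"ROC AUC on held-out firms: {auc:.3f}")
```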

  2. Modelling language evolution: Examples and predictions

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  3. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons.

    Science.gov (United States)

    Oyeyemi, Victor B; Krisiloff, David B; Keith, John A; Libisch, Florian; Pavone, Michele; Carter, Emily A

    2014-01-28

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
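
    A common form of the a posteriori size-extensivity correction referred to above is the Davidson correction, quoted here in its generic textbook form (the paper's specific a posteriori and a priori schemes may differ):

    $$\Delta E_Q \approx \left(1 - c_0^2\right)\left(E_{\mathrm{CISD}} - E_{\mathrm{ref}}\right),$$

    where $c_0$ is the weight of the reference configuration(s) in the normalized CI wavefunction; adding $\Delta E_Q$ to the CISD energy approximately restores size extensivity.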

  4. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world’s population and the change in the climate conditions. How a sewer network is structured, monitored and controlled have thus become essential factors for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and controlling a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints the applied approach is based on Model Predictive Control.

  5. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    One of the major challenges with the increase in wind power generation is the uncertain nature of wind speed. So far the uncertainty about wind speed has been presented through probability distributions. Also the existing models that consider the uncertainty of the wind speed primarily view ... The predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using a wind speed distribution whose parameters are known or estimated, the parameters are considered as random, with variations that follow probability distributions. The Bayesian predictive model for a Rayleigh distribution, which has only a single scale parameter, has been proposed. Also closed-form posterior ...
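
    For reference, the Rayleigh wind-speed density mentioned above, with scale parameter $\sigma$, has the standard form (not quoted from the paper):

    $$f(v \mid \sigma) = \frac{v}{\sigma^2}\exp\!\left(-\frac{v^2}{2\sigma^2}\right), \qquad v \ge 0,$$

    and the Bayesian predictive model places a prior on this single scale parameter and integrates it out to obtain the predictive wind-speed distribution.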

  6. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared...... on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We...... demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models....
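
    Since the algorithm above is built largely on the concordance index, a minimal sketch of that statistic may help (plain pairwise definition; counting ties in predictions as half-concordant is one common convention and an assumption here):

```python
def concordance_index(pred, outcome):
    """Fraction of comparable pairs that the model ranks in the same order as the outcome."""
    concordant = comparable = 0
    n = len(pred)
    for i in range(n):
        for j in range(i + 1, n):
            if outcome[i] == outcome[j]:
                continue                      # ties in outcome are not comparable
            comparable += 1
            same_order = (pred[i] - pred[j]) * (outcome[i] - outcome[j]) > 0
            concordant += 1 if same_order else 0.5 if pred[i] == pred[j] else 0
    return concordant / comparable

# Toy ordinal stage predictions vs. observed outcome severity.
print(concordance_index([1, 2, 2, 3, 4], [0, 1, 0, 1, 1]))
```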

  7. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  8. Vehicle Driving Risk Prediction Based on Markov Chain Model

    Directory of Open Access Journals (Sweden)

    Xiaoxia Xiong

    2018-01-01

    Full Text Available A driving risk status prediction algorithm based on a Markov chain is presented. Driving risk states are classified using clustering techniques based on feature variables describing the instantaneous risk levels within time windows, where instantaneous risk levels are determined in the time-to-collision and time-headway two-dimensional plane. Multinomial logistic models with a recursive feature variable estimation method are developed to improve the traditional state transition probability estimation, which also takes into account the comprehensive effects of driving behavior, traffic, and road environment factors on the evolution of driving risk status. The “100-car” natural driving data from Virginia Tech are employed for the training and validation of the prediction model. The results show that, under a 5% false positive rate, the prediction algorithm achieves a high prediction accuracy for future medium-to-high driving risks and meets the timeliness requirement of collision avoidance warning. The algorithm could contribute to timely warning or auxiliary correction of drivers in the approaching-danger state.
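
    A minimal sketch of Markov-chain risk-state prediction as described above (the states and transition counts are invented; the paper additionally conditions the transition probabilities on driving behaviour, traffic and road variables via multinomial logistic models):

```python
import numpy as np

states = ["low", "medium", "high"]          # clustered driving-risk states

# Hypothetical transition counts between consecutive time windows.
counts = np.array([[80, 15, 5],
                   [20, 60, 20],
                   [5, 25, 70]], dtype=float)
P = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

def predict(current: str, steps: int = 3) -> np.ndarray:
    """Probability distribution over risk states `steps` windows ahead."""
    v = np.zeros(len(states))
    v[states.index(current)] = 1.0
    return v @ np.linalg.matrix_power(P, steps)

probs = predict("medium", steps=2)
for s, p in zip(states, probs):
    print(f"P(risk={s} in 2 windows) = {p:.2f}")
```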

  9. Predictive modeling in homogeneous catalysis: a tutorial

    NARCIS (Netherlands)

    Maldonado, A.G.; Rothenberg, G.

    2010-01-01

    Predictive modeling has become a practical research tool in homogeneous catalysis. It can help to pinpoint ‘good regions’ in the catalyst space, narrowing the search for the optimal catalyst for a given reaction. Just like any other new idea, in silico catalyst optimization is accepted by some

  10. Model predictive control of smart microgrids

    DEFF Research Database (Denmark)

    Hu, Jiefeng; Zhu, Jianguo; Guerrero, Josep M.

    2014-01-01

    required to realise high-performance of distributed generations and will realise innovative control techniques utilising model predictive control (MPC) to assist in coordinating the plethora of generation and load combinations, thus enable the effective exploitation of the clean renewable energy sources...

  11. Feedback model predictive control by randomized algorithms

    NARCIS (Netherlands)

    Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep

    2001-01-01

    In this paper we present a further development of an algorithm for stochastic disturbance rejection in model predictive control with input constraints based on randomized algorithms. The algorithm presented in our work can solve the problem of stochastic disturbance rejection approximately but with

  12. A Robustly Stabilizing Model Predictive Control Algorithm

    Science.gov (United States)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  13. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...

  14. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations ...

  15. Radiative corrections in a vector-tensor model

    International Nuclear Information System (INIS)

    Chishtie, F.; Gagne-Portelance, M.; Hanif, T.; Homayouni, S.; McKeon, D.G.C.

    2006-01-01

    In a recently proposed model in which a vector non-Abelian gauge field interacts with an antisymmetric tensor field, it has been shown that the tensor field possesses no physical degrees of freedom. This formal demonstration is tested by computing the one-loop contributions of the tensor field to the self-energy of the vector field. It is shown that despite the large number of Feynman diagrams in which the tensor field contributes, the sum of these diagrams vanishes, confirming that it is not physical. Furthermore, if the tensor field were to couple with a spinor field, it is shown at one-loop order that the spinor self-energy is not renormalizable, and hence this coupling must be excluded. In principle though, this tensor field does couple to the gravitational field

  16. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology...

  17. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Full Text Available Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees was entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow up], provided for examination of the predictive capacity concerning different multifactor models. Results. The data gathered showed that different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). Cariogram is the model which identified the majority of examinees as medium risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases – the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.
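    The statistical comparison reported above (a Friedman test across the three related risk assessments and a Wilcoxon signed-rank test for one pair of models) can be reproduced generically with scipy; the risk categories below are synthetic stand-ins, not the study data.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(1)

# Risk categories (1 = low, 2 = medium, 3 = high) assigned by three models
# to the same patients (synthetic stand-in for the questionnaire/exam data).
n = 109
cariogram = rng.choice([1, 2, 3], size=n, p=[0.2, 0.7, 0.1])
previser  = rng.choice([1, 2, 3], size=n, p=[0.2, 0.4, 0.4])
cat_tool  = rng.choice([1, 2, 3], size=n, p=[0.15, 0.45, 0.4])

# Friedman test: do the three related assessments differ overall?
chi2, p_overall = friedmanchisquare(cariogram, previser, cat_tool)
print(f"Friedman chi-square = {chi2:.2f}, p = {p_overall:.4f}")

# Wilcoxon signed-rank test for one pair of models (paired, ordinal data).
stat, p_pair = wilcoxon(previser, cat_tool)
print(f"Wilcoxon (PreViser vs CAT): W = {stat:.1f}, p = {p_pair:.4f}")
```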

  18. Real-Time Corrected Traffic Correlation Model for Traffic Flow Forecasting

    Directory of Open Access Journals (Sweden)

    Hua-pu Lu

    2015-01-01

    Full Text Available This paper focuses on the problems of short-term traffic flow forecasting. The main goal is to put forward a traffic correlation model and a real-time correction algorithm for traffic flow forecasting. The traffic correlation model is established based on the temporal-spatial-historical correlation characteristics of traffic big data. In order to simplify the traffic correlation model, this paper presents a correction coefficients optimization algorithm. Considering the multistate characteristic of traffic big data, a dynamic part is added to the traffic correlation model. A real-time correction algorithm based on a fuzzy neural network is presented to overcome the nonlinear mapping problems. A case study based on a real-world road network in Beijing, China, is implemented to test the efficiency and applicability of the proposed modeling methods.

  19. Link Prediction via Sparse Gaussian Graphical Model

    Directory of Open Access Journals (Sweden)

    Liangliang Zhang

    2016-01-01

    Full Text Available Link prediction is an important task in complex network analysis. Traditional link prediction methods are limited by network topology and lack of node property information, which makes predicting links challenging. In this study, we address link prediction using a sparse Gaussian graphical model and demonstrate its theoretical and practical effectiveness. In theory, link prediction is executed by estimating the inverse covariance matrix of samples to overcome information limits. The proposed method was evaluated with four small and four large real-world datasets. The experimental results show that the area under the curve (AUC) value obtained by the proposed method improved by an average of 3% and 12.5% on the small and large datasets, respectively, compared to 13 mainstream similarity methods. This method outperforms the baseline method, and the prediction accuracy is superior to mainstream methods when using only 80% of the training set. The method also provides significantly higher AUC values when using only 60% of the training set on the Dolphin and Taro datasets. Furthermore, the error rate of the proposed method demonstrates superior performance with all datasets compared to mainstream methods.
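    A rough sketch of the underlying idea, assuming scikit-learn's GraphicalLasso as the sparse inverse-covariance estimator and synthetic node-sample data; scoring candidate links by partial-correlation magnitude is an illustrative choice, not necessarily the exact rule used in the paper.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)

# Rows are samples (e.g. node activity snapshots), columns are nodes.
n_samples, n_nodes = 200, 8
latent = rng.normal(size=(n_samples, 2))
loadings = rng.normal(size=(2, n_nodes))
X = latent @ loadings + 0.5 * rng.normal(size=(n_samples, n_nodes))

# Sparse estimate of the inverse covariance (precision) matrix.
model = GraphicalLasso(alpha=0.05).fit(X)
precision = model.precision_

# Score unobserved node pairs by the magnitude of the partial correlation
# implied by the precision matrix; larger magnitude -> more likely link.
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 0.0)

pairs = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]
ranked = sorted(pairs, key=lambda ij: abs(partial_corr[ij]), reverse=True)
print("top 5 predicted links:", ranked[:5])
```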

  20. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Real-time axial motion detection and correction for single photon emission computed tomography using a linear prediction filter

    International Nuclear Information System (INIS)

    Saba, V.; Setayeshi, S.; Ghannadi-Maragheh, M.

    2011-01-01

    We have developed an algorithm for real-time detection and complete correction of patient motion effects during single photon emission computed tomography. The algorithm is based on a linear prediction filter (LPC). The new prediction of projection data algorithm (PPDA) detects most motions, such as those of the head, legs, and hands, using comparison of the predicted and measured frame data. When the data acquisition for a specific frame is completed, the accuracy of the acquired data is evaluated by the PPDA. If patient motion is detected, the scanning procedure is stopped. After the patient rests in his or her true position, data acquisition is repeated only for the corrupted frame and the scanning procedure is continued. Various experimental data were used to validate the motion detection algorithm; on the whole, the proposed method was tested with approximately 100 test cases. The PPDA shows promising results. Using the PPDA enables us to prevent the scanner from collecting disturbed data during the scan and to replace them with motion-free data by real-time rescanning of the corrupted frames. As a result, the effects of patient motion are corrected in real time. (author)
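    A simplified, hypothetical version of the prediction-based motion check: predict each frame's summary statistic (here, total counts) from the previous frames with a least-squares linear predictor and flag frames whose prediction error is large. The order, threshold, burn-in, and synthetic data are assumptions; the published PPDA operates on full projection frames.

```python
import numpy as np

rng = np.random.default_rng(3)

# Total counts per acquired projection frame (synthetic): a smooth trend with
# Poisson noise, plus a sudden drop simulating patient motion at frame 40.
n_frames = 64
trend = 1e4 * (1.0 + 0.3 * np.sin(np.linspace(0, np.pi, n_frames)))
counts = rng.poisson(trend).astype(float)
counts[40:] *= 0.85                      # motion corrupts later frames

def detect_motion(series, order=3, threshold=4.0, burn_in=20):
    """Flag frames whose value deviates strongly from a one-step-ahead linear
    prediction built from the previous `order` frames (AR-style predictor)."""
    flagged = []
    for t in range(order + burn_in, len(series)):
        # Fit prediction coefficients on the history available up to frame t.
        X = np.stack([series[i:t - order + i] for i in range(order)], axis=1)
        y = series[order:t]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        pred = series[t - order:t] @ coeffs
        scale = np.std(y - X @ coeffs) + 1e-9
        if abs(series[t] - pred) / scale > threshold:
            flagged.append(t)            # in practice: stop and re-scan this frame
    return flagged

print("frames flagged for re-acquisition:", detect_motion(counts))
```

    In a real acquisition, a flagged frame would be re-scanned and replaced before the predictor moves on, so later predictions would be built from clean data rather than the corrupted values kept here.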

  2. Corrections to scaling in random resistor networks and diluted continuous spin models near the percolation threshold.

    Science.gov (United States)

    Janssen, Hans-Karl; Stenull, Olaf

    2004-02-01

    We investigate corrections to scaling induced by irrelevant operators in randomly diluted systems near the percolation threshold. The specific systems that we consider are the random resistor network and a class of continuous spin systems, such as the x-y model. We focus on a family of least irrelevant operators and determine the corrections to scaling that originate from this family. Our field theoretic analysis carefully takes into account that irrelevant operators mix under renormalization. It turns out that long standing results on corrections to scaling are respectively incorrect (random resistor networks) or incomplete (continuous spin systems).

  3. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function....... In this paper we explain the successful bias field correction properties of N3 by showing that it implicitly uses the same generative models and computational strategies as expectation maximization (EM) based bias field correction methods. We demonstrate experimentally that purely EM-based methods are capable...

  4. Importance of Lorentz structure in the parton model: Target mass corrections, transverse momentum dependence, positivity bounds

    International Nuclear Information System (INIS)

    D'Alesio, U.; Leader, E.; Murgia, F.

    2010-01-01

    We show that respecting the underlying Lorentz structure in the parton model has very strong consequences. Failure to insist on the correct Lorentz covariance is responsible for the existence of contradictory results in the literature for the polarized structure function g_2(x), whereas with the correct imposition we are able to derive the Wandzura-Wilczek relation for g_2(x) and the target-mass corrections for polarized deep inelastic scattering without recourse to the operator product expansion. We comment briefly on the problem of threshold behavior in the presence of target-mass corrections. Careful attention to the Lorentz structure has also profound implications for the structure of the transverse momentum dependent parton densities often used in parton model treatments of hadron production, allowing the k_T dependence to be derived explicitly. It also leads to stronger positivity and Soffer-type bounds than usually utilized for the collinear densities.

  5. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany)]; Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany)]; Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)]

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  6. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Genetic models of homosexuality: generating testable predictions

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344

  8. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
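    The AR-pole feature mentioned above can be computed as follows: fit a 5th-order AR model to an SEMG window via the Yule-Walker equations and average the magnitudes of the model's poles. The synthetic "fatigue" signals below are stand-ins, and the regression of Rmax on this feature is not reproduced; everything here is an illustrative sketch.

```python
import numpy as np

def ar_pole_mean_magnitude(signal, order=5):
    """Fit an AR(order) model by the Yule-Walker equations and return the mean
    magnitude of the model's poles (roots of the AR characteristic polynomial)."""
    x = signal - signal.mean()
    # Biased autocovariance estimates r[0..order].
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])         # AR coefficients a_1..a_p
    poles = np.roots(np.concatenate(([1.0], -a)))  # 1 - a_1 z^-1 - ... - a_p z^-p
    return np.abs(poles).mean()

# Synthetic SEMG windows: as fatigue develops, the signal becomes more
# narrow-band (slower), which tends to push the dominant pole toward the unit circle.
rng = np.random.default_rng(4)
for rep, bandwidth in [(1, 0.6), (10, 0.3), (20, 0.1)]:
    white = rng.normal(size=2000)
    semg = np.zeros_like(white)
    for t in range(1, len(white)):
        # Simple low-pass (exponential smoothing) with decreasing bandwidth.
        semg[t] = (1 - bandwidth) * semg[t - 1] + bandwidth * white[t]
    print(f"repetition {rep:2d}: mean AR pole magnitude = "
          f"{ar_pole_mean_magnitude(semg):.3f}")
```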

  9. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to

  10. Bayesian based Prognostic Model for Predictive Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud

    2017-01-01

    The operation and maintenance costs of offshore wind farms can be significantly reduced if existing corrective actions are performed as efficiently as possible and if future corrective actions are avoided by performing sufficient preventive actions. In this paper a prognostic model for degradation... monitoring, fault detection and predictive maintenance of offshore wind components is defined. The diagnostic model defined in this paper is based on degradation, remaining useful lifetime and hybrid inspection threshold models. The defined degradation model is based on an exponential distribution...

  11. Correctness of Protein Identifications of Bacillus subtilis Proteome with the Indication on Potential False Positive Peptides Supported by Predictions of Their Retention Times

    Directory of Open Access Journals (Sweden)

    Katarzyna Macur

    2010-01-01

    Full Text Available The predictive capability of the retention time prediction model based on quantitative structure-retention relationships (QSRR) was tested. The QSRR model was derived with the use of a set of peptides identified with the highest scores and originating from 8 known proteins annotated as model ones. The predictive ability of the QSRR model was verified with the use of a Bacillus subtilis proteome digest after separation and identification of the peptides by LC-ESI-MS/MS. That ability was tested with three sets of testing peptides assigned to the proteins identified with different levels of confidence. First, the set of peptides identified with the highest scores achieved in the search was considered. Hence, proteins identified on the basis of more than one peptide were taken into account. Furthermore, proteins identified on the basis of just one peptide were also considered and, depending on their scores, both above and below the assumed threshold, were analyzed in two separate sets. The QSRR approach was applied as an additional constraint in proteomic research, verifying the results of the MS/MS ion search and confirming the correctness of the peptide identifications, along with the indication of potential false positives.
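    In practice the QSRR check amounts to regressing retention time on peptide descriptors for high-confidence identifications and flagging test peptides whose observed retention time falls far from the prediction. The descriptors, coefficients, and tolerance below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)

# Training peptides: descriptor matrix (e.g. summed residue hydrophobicity,
# length, ...) and measured retention times.  Values are synthetic stand-ins.
n_train = 120
X_train = np.column_stack([rng.uniform(5, 50, n_train),     # hydrophobicity-like
                           rng.integers(6, 30, n_train)])   # peptide length
true_w = np.array([0.8, 0.4])
rt_train = X_train @ true_w + 2.0 + rng.normal(0, 1.0, n_train)

# Fit the QSRR-style linear model by ordinary least squares.
A = np.column_stack([X_train, np.ones(n_train)])
w, *_ = np.linalg.lstsq(A, rt_train, rcond=None)
resid_sd = np.std(rt_train - A @ w)

def flag_potential_false_positives(X_test, rt_observed, k=3.0):
    """Peptides whose measured retention time is more than k residual standard
    deviations away from the QSRR prediction are flagged for review."""
    pred = np.column_stack([X_test, np.ones(len(X_test))]) @ w
    return np.abs(rt_observed - pred) > k * resid_sd

# Two test identifications: one consistent with the model, one suspicious.
X_test = np.array([[30.0, 15], [12.0, 8]])
rt_obs = np.array([30.0 * 0.8 + 15 * 0.4 + 2.0, 55.0])
print("potential false positive:", flag_potential_false_positives(X_test, rt_obs))
```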

  12. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress of prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory providing a calculus, which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  13. Signal dropout correction-based ultrasound segmentation for diastolic mitral valve modeling.

    Science.gov (United States)

    Xia, Wenyao; Moore, John; Chen, Elvis C S; Xu, Yuanwei; Ginty, Olivia; Bainbridge, Daniel; Peters, Terry M

    2018-04-01

    Three-dimensional ultrasound segmentation of mitral valve (MV) at diastole is helpful for duplicating geometry and pathology in a patient-specific dynamic phantom. The major challenge is the signal dropout at leaflet regions in transesophageal echocardiography image data. Conventional segmentation approaches suffer from missing sonographic data leading to inaccurate MV modeling at leaflet regions. This paper proposes a signal dropout correction-based ultrasound segmentation method for diastolic MV modeling. The proposed method combines signal dropout correction, image fusion, continuous max-flow segmentation, and active contour segmentation techniques. The signal dropout correction approach is developed to recover the missing segmentation information. Once the signal dropout regions of TEE image data are recovered, the MV model can be accurately duplicated. Compared with other methods in current literature, the proposed algorithm exhibits lower computational cost. The experimental results show that the proposed algorithm gives competitive results for diastolic MV modeling compared with conventional segmentation algorithms, evaluated in terms of accuracy and efficiency.

  14. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...

  15. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    2009-01-01

    This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...
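    For readers unfamiliar with the specification, a two-variable error-correction regression can be estimated with ordinary least squares as sketched below; the synthetic wage-price data and coefficient values are illustrative and unrelated to the paper's model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic log wages (w) and log prices (p) sharing a long-run relation
# w = p + 0.5, with wages closing 20% of any gap each period.
T = 200
p = np.cumsum(0.005 + 0.01 * rng.normal(size=T))           # random-walk prices
w = np.zeros(T)
w[0] = p[0] + 0.5
for t in range(1, T):
    gap = w[t - 1] - p[t - 1] - 0.5                         # deviation from target
    w[t] = w[t - 1] - 0.2 * gap + 0.6 * (p[t] - p[t - 1]) + 0.005 * rng.normal()

# Error-correction regression: dw_t = c + alpha*(w - p)_{t-1} + beta*dp_t + e_t
dw = np.diff(w)
dp = np.diff(p)
ec_term = (w - p)[:-1]
X = np.column_stack([np.ones(T - 1), ec_term, dp])
coef, *_ = np.linalg.lstsq(X, dw, rcond=None)
print(f"adjustment speed alpha = {coef[1]:.2f} (negative values pull wages back "
      f"toward the long-run wage-price relation), short-run pass-through beta = {coef[2]:.2f}")
```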

  16. Predicting coastal cliff erosion using a Bayesian probabilistic model

    Science.gov (United States)

    Hapke, Cheryl J.; Plant, Nathaniel G.

    2010-01-01

    Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70–90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.

  17. Study on fitness functions of genetic algorithm for dynamically correcting nuclide atmospheric diffusion model

    International Nuclear Information System (INIS)

    Ji Zhilong; Ma Yuanwei; Wang Dezhong

    2014-01-01

    Background: In atmospheric diffusion models of radioactive nuclides, the empirical dispersion coefficients were deduced under certain experimental conditions, whose difference from nuclear accident conditions is a source of deviation. A better estimation of the radioactive nuclide's actual dispersion process could be obtained by correcting the dispersion coefficients with observation data, and the Genetic Algorithm (GA) is an appropriate method for this correction procedure. Purpose: This study analyzes the fitness functions' influence on the correction procedure and the forecast ability of the diffusion model. Methods: GA, coupled with a Lagrangian dispersion model, was used in a numerical simulation to compare 4 fitness functions' impact on the correction result. Results: In the numerical simulation, the fitness function with observation deviation taken into consideration stands out when significant deviation exists in the observed data. After performing the correction procedure on the Kincaid experiment data, a significant boost was observed in the diffusion model's forecast ability. Conclusion: As the results show, in order to improve dispersion models' forecast ability using GA, observation data should be given different weights in the fitness function corresponding to their errors. (authors)
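    The abstract's main point is that the fitness function should down-weight observations with large measurement error. A toy sketch of such an error-weighted fitness, evaluated over a population of candidate dispersion-coefficient scale factors, is given below; the plume formula, detector errors, and the random-search stand-in for the GA are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic setup: the "true" dispersion-coefficient scale is 1.3; observed
# concentrations carry per-detector relative errors sigma_obs.
distances = np.linspace(500.0, 5000.0, 10)                  # detector distances (m)

def modelled_concentration(scale):
    """Toy plume-like ground concentration versus downwind distance."""
    sigma_y = scale * 0.08 * distances**0.9                 # dispersion coefficient
    return 1.0e6 / (np.pi * sigma_y**2)

sigma_obs = np.array([0.05, 0.05, 0.1, 0.1, 0.2, 0.2, 0.3, 0.3, 0.5, 0.5])
observed = modelled_concentration(1.3) * (1 + sigma_obs * rng.normal(size=10))

def fitness(scale):
    """Error-weighted misfit: observations with large uncertainty count less.
    Smaller is better (a GA would maximise, e.g., -fitness)."""
    resid = (modelled_concentration(scale) - observed) / (sigma_obs * observed)
    return float(np.sum(resid**2))

# Stand-in for the GA: evaluate a population of candidate scale factors and
# keep the fittest.  A real GA would add selection, crossover and mutation.
population = rng.uniform(0.5, 2.5, size=200)
best = min(population, key=fitness)
print(f"corrected dispersion-coefficient scale ~ {best:.2f} (true value 1.3)")
```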

  18. Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling

    Science.gov (United States)

    Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang

    2018-04-01

    Digital Elevation Model (DEM) is one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, the currently available global topographic data is confronted with limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM to include both vegetation height and SRTM vegetation signal. Then, a newly released DEM, removing both vegetation bias and random errors (i.e. Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Last, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of Google Earth platform and Remote Sensing imagery; and (c) removing the positive biases of the raised segment in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in Huifa River Basin (China) is simulated on the original DEM, Bare-Earth DEM, Multi-Error removed DEM, and hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of four different DEMs and favorable results have been obtained on the corrected DEM.
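    The final step, removing raised segments so that elevation never increases downstream, can be illustrated with a running minimum along the extracted stream profile; this is a simplification of the slope-based rule described above, using made-up elevations rather than the published HCM tooling.

```python
import numpy as np

def enforce_downstream_connectivity(profile_elevations):
    """Remove positive (raised) biases along a river long-profile so that
    elevation is non-increasing in the downstream direction, keeping the
    extracted channel hydraulically connected for 2-D flood modelling."""
    return np.minimum.accumulate(profile_elevations)

# DEM elevations sampled along a stream from upstream to downstream (metres).
# The two 31 m / 27 m rebounds are raised segments (e.g. residual vegetation bias).
raw = np.array([35.0, 33.5, 31.0, 29.0, 31.0, 28.5, 27.0, 25.5, 27.0, 24.0])
corrected = enforce_downstream_connectivity(raw)
print("raw      :", raw)
print("corrected:", corrected)
```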

  19. Predictive Models for Carcinogenicity and Mutagenicity ...

    Science.gov (United States)

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the data base and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t

  20. Kinetics of lipase-catalyzed esterification in organic media : correct model and solvent effects on parameters

    NARCIS (Netherlands)

    Janssen, A.E.M.; Sjursnes, B.J.; Vakunov, A.V.; Halling, P.J.

    1999-01-01

    The Ping-Pong model (incl. alcohol inhibition) is not the correct model to describe the kinetics of a lipase-catalyzed esterification reaction. The first product, water, is always present at the start of the reaction. This leads to an equation with one extra parameter. This new equation fits our

  1. Reliability Analysis of a Composite Blade Structure Using the Model Correction Factor Method

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimiroy; Friis-Hansen, Peter; Berggreen, Christian

    2010-01-01

    This paper presents a reliability analysis of a composite blade profile. The so-called Model Correction Factor technique is applied as an effective alternate approach to the response surface technique. The structural reliability is determined by use of a simplified idealised analytical model which...

  2. A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)

    2005-01-01

    The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format which allows one to disentangle the immediate

  3. One loop electro-weak radiative corrections in the standard model

    International Nuclear Information System (INIS)

    Kalyniak, P.; Sundaresan, M.K.

    1987-01-01

    This paper reports on the effect of radiative corrections in the standard model. A sensitive test of the three gauge boson vertices is expected to come from the work at LEP II, in which the reaction e⁺e⁻ → W⁺W⁻ can occur. Two calculations of radiative corrections to the reaction e⁺e⁻ → W⁺W⁻ exist at present. The results of the calculations, although very similar, disagree with one another as to the actual magnitude of the correction. Some of the reasons for the disagreement are understood. However, due to the reasons mentioned below, another look must be taken at these lengthy calculations to resolve the differences between the two previous calculations. This is what is being done in the present work. There are a number of reasons why we must take another look at the calculation of the radiative corrections. The previous calculations were carried out before the UA1, UA2 data on W and Z bosons were obtained. Experimental groups require a computer program which can readily calculate the radiative corrections ab initio for various experimental conditions. The normalization of sin²θ_W in the previous calculations was done in a way which is not convenient for use in the experimental work. It would be desirable to have the analytical expressions for the corrections available so that the renormalization scheme dependence of the corrections could be studied.

  4. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the returned search results are bounded by the dates of coverage of each database and the date on which the search was performed; all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL’s IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition for a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the...

  5. Predictive Models for Photovoltaic Electricity Production in Hot Weather Conditions

    Directory of Open Access Journals (Sweden)

    Jabar H. Yousif

    2017-07-01

    Full Text Available The process of finding a correct forecast equation for photovoltaic electricity production from renewable sources is an important matter, since knowing the factors affecting the increase in the proportion of renewable energy production and reducing the cost of the product has economic and scientific benefits. This paper proposes a mathematical model for forecasting energy production in photovoltaic (PV) panels based on a self-organizing feature map (SOFM) model. The proposed model is compared with other models, including the multi-layer perceptron (MLP) and support vector machine (SVM) models. Moreover, a mathematical model based on a polynomial function for fitting the desired output is proposed. Different practical measurement methods are used to validate the findings of the proposed neural and mathematical models, such as mean square error (MSE), mean absolute error (MAE), correlation (R), and coefficient of determination (R2). The proposed SOFM model achieved a final MSE of 0.0007 in the training phase and 0.0005 in the cross-validation phase. In contrast, the SVM model resulted in a small MSE value equal to 0.0058, while the MLP model achieved a final MSE of 0.026 with a correlation coefficient of 0.9989, which indicates a strong relationship between input and output variables. The proposed SOFM model closely fits the desired results based on the R2 value, which is equal to 0.9555. Finally, the comparison results of MAE for the three models show that the SOFM model achieved a best result of 0.36156, whereas the SVM and MLP models yielded 4.53761 and 3.63927, respectively. A small MAE value indicates that the output of the SOFM model closely fits the actual results and predicts the desired output.

  6. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  7. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as tool for modeling the FDM dimensional behavior in a wide range of deposition angles....

  8. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...... values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as tool for modeling the FDM dimensional behavior in a wide range of deposition angles....

  9. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  10. Predictive modelling of evidence informed teaching

    OpenAIRE

    Zhang, Dell; Brown, C.

    2017-01-01

    In this paper, we analyse the questionnaire survey data collected from 79 English primary schools about the situation of evidence informed teaching, where the evidences could come from research journals or conferences. Specifically, we build a predictive model to see what external factors could help to close the gap between teachers’ belief and behaviour in evidence informed teaching, which is the first of its kind to our knowledge. The major challenge, from the data mining perspective, is th...

  11. A Predictive Model for Cognitive Radio

    Science.gov (United States)

    2006-09-14

    Vadde et al. have applied response surface methodology to produce a model for prediction of the response in a given situation [3], [4].

  12. Degeneracy of time series models: The best model is not always the correct model

    International Nuclear Information System (INIS)

    Judd, Kevin; Nakamura, Tomomichi

    2006-01-01

    There are a number of good techniques for finding, in some sense, the best model of a deterministic system given a time series of observations. We examine a problem called model degeneracy, which has the consequence that even when a perfect model of a system exists, one does not find it using the best techniques currently available. The problem is illustrated using global polynomial models and the theory of Groebner bases

  13. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    Science.gov (United States)

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct vascular input function (VIF) due to inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  14. A statistical bias correction for climate model data: parameter sensitivity analysis.

    Science.gov (United States)

    Piani, C.; Coppola, E.; Mariotti, L.; Haerter, J.; Hagemann, S.

    2009-04-01

    Water management adaptation strategies depend crucially on high-quality projections of the hydrological cycle in view of anthropogenic climate change. The goodness of hydrological cycle projections depends, in turn, on the successful coupling of hydrological models to global (GCMs) or regional climate models (RCMs). It is well known within the climate modelling community that hydrological forcing output from climate models, in particular precipitation, is partially affected by large bias. The bias affects all aspects of the statistics, that is, the mean, standard deviation (variability), skewness (drizzle versus intense events, dry days), etc. The state-of-the-art approach to bias correction is based on histogram equalization techniques. Such techniques intrinsically correct all moments of the statistical intensity distribution. However, these methods are applicable to hydrological projections to the extent that the correction itself is robust, that is, defined by few parameters that are well constrained by available data and constant in time. Here we present details of the statistical bias correction methodology developed within the European project "Water and Global Change" (WATCH). We will suggest different versions of the method that allow it to be tailored to differently structured biases from different RCMs. Crucially, application of the methodology also allows for a sensitivity analysis of the correction parameters on other gridded variables such as orography and land use. Here we explore some of these sensitivities as well.
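    Histogram equalization (quantile mapping) can be sketched as an empirical transfer function built on a calibration period and applied to later model output; the gamma-distributed precipitation below is synthetic, and the parametric fit used in WATCH is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

# Calibration period: observed and modelled daily precipitation (mm/day).
obs_cal   = rng.gamma(shape=0.9, scale=6.0, size=5000)      # "truth"
model_cal = rng.gamma(shape=1.4, scale=3.0, size=5000)      # biased climate model

def quantile_map(values, model_ref, obs_ref):
    """Histogram-equalisation style correction: replace each model value by the
    observed quantile with the same non-exceedance probability."""
    probs = np.searchsorted(np.sort(model_ref), values) / len(model_ref)
    probs = np.clip(probs, 0.0, 1.0)
    return np.quantile(obs_ref, probs)

# Apply the transfer function, built on the calibration period, to a
# projection period from the same (biased) model.
model_proj = rng.gamma(shape=1.4, scale=3.0, size=2000) * 1.1   # slightly wetter future
corrected  = quantile_map(model_proj, model_cal, obs_cal)

print(f"calibration bias (model - obs) mean: {model_cal.mean() - obs_cal.mean():+.2f} mm/day")
print(f"projection mean before / after correction: "
      f"{model_proj.mean():.2f} / {corrected.mean():.2f} mm/day (obs mean {obs_cal.mean():.2f})")
```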

  15. Magnetic corrections to π -π scattering lengths in the linear sigma model

    Science.gov (United States)

    Loewe, M.; Monje, L.; Zamora, R.

    2018-03-01

    In this article, we consider the magnetic corrections to π-π scattering lengths in the frame of the linear sigma model. For this, we consider all the one-loop corrections in the s, t, and u channels, associated to the insertion of a Schwinger propagator for charged pions, working in the region of small values of the magnetic field. Our calculation relies on an appropriate expansion for the propagator. It turns out that the leading scattering length, l = 0 in the S channel, increases for an increasing value of the magnetic field in the isospin I = 2 case, whereas the opposite effect is found for the I = 0 case. The isospin symmetry is valid because the insertion of the magnetic field occurs through the absolute value of the electric charges. The channel I = 1 does not receive any corrections. These results, for the channels I = 0 and I = 2, are opposite with respect to the thermal corrections found previously in the literature.

  16. The usefulness of “corrected” body mass index vs. self-reported body mass index: comparing the population distributions, sensitivity, specificity, and predictive utility of three correction equations using Canadian population-based data

    Science.gov (United States)

    2014-01-01

    Background: National data on body mass index (BMI), computed from self-reported height and weight, is readily available for many populations including the Canadian population. Because self-reported weight is found to be systematically under-reported, it has been proposed that the bias in self-reported BMI can be corrected using equations derived from data sets which include both self-reported and measured height and weight. Such correction equations have been developed and adopted. We aim to evaluate the usefulness (i.e., distributional similarity; sensitivity and specificity; and predictive utility vis-à-vis disease outcomes) of existing and new correction equations in population-based research. Methods: The Canadian Community Health Surveys from 2005 and 2008 include both measured and self-reported values of height and weight, which allows for construction and evaluation of correction equations. We focused on adults age 18–65, and compared three correction equations (two correcting weight only, and one correcting BMI) against self-reported and measured BMI. We first compared population distributions of BMI. Second, we compared the sensitivity and specificity of self-reported BMI and corrected BMI against measured BMI. Third, we compared the self-reported and corrected BMI in terms of association with health outcomes using logistic regression. Results: All corrections outperformed self-report when estimating the full BMI distribution; the weight-only correction outperformed the BMI-only correction for females in the 23–28 kg/m² BMI range. In terms of sensitivity/specificity, when estimating obesity prevalence, corrected values of BMI (from any equation) were superior to self-report. In terms of modelling BMI-disease outcome associations, findings were mixed, with no correction proving consistently superior to self-report. Conclusions: If researchers are interested in modelling the full population distribution of BMI, or estimating the prevalence of obesity in a
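    A sketch of the sensitivity/specificity comparison for obesity classification (BMI >= 30) against measured BMI; the reporting-bias model and the linear "correction equation" coefficients below are hypothetical and are not the published Canadian equations.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic adults: measured BMI, and self-reported BMI that under-reports
# weight more strongly at higher BMI (a typical reporting bias).
n = 5000
measured = rng.normal(27.0, 5.0, n).clip(16, 55)
self_rep = measured - (0.3 + 0.06 * (measured - 22)) + rng.normal(0, 1.0, n)

# Illustrative correction equation (hypothetical coefficients, NOT one of the
# published correction equations): corrected = a + b * self-reported.
corrected = 0.8 + 1.04 * self_rep

def sens_spec(candidate, reference, cutoff=30.0):
    """Sensitivity/specificity of obesity classification versus measured BMI."""
    pred, true = candidate >= cutoff, reference >= cutoff
    sens = np.mean(pred[true])
    spec = np.mean(~pred[~true])
    return sens, spec

for label, bmi in [("self-reported", self_rep), ("corrected", corrected)]:
    sens, spec = sens_spec(bmi, measured)
    print(f"{label:>13}: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```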

  17. Model-based bootstrapping when correcting for measurement error with application to logistic regression.

    Science.gov (United States)

    Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne

    2018-03-01

    When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling or whether the measurement error variance is constant or not, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.

  18. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    Science.gov (United States)

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has a great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of no need for hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically, and was used to calculate X-ray scattering signals in both forward direction and cross directions in multi-source interior CT. The physics model was integrated to an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experimentation that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images fast converged toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the region-of-interests (ROIs) was reduced to 46 HU in numerical phantom dataset and 48 HU in physical phantom dataset respectively, and the contrast-noise-ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.

  19. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  20. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    International Nuclear Information System (INIS)

    Berry, Tyrus; Harlim, John

    2016-01-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  1. A Global Model for Regional Phase Amplitude Prediction

    Science.gov (United States)

    Phillips, W. S.; Fisk, M. D.; Stead, R. J.; Begnaud, M. L.; Yang, X.; Ballard, S.; Rautian, T. G.

    2013-12-01

    We use two-dimensional (2-D) models of regional phase attenuation, and absolute site effects, to predict amplitudes for use in high frequency discrimination and yield estimation schemes. We have shown that 2-D corrections reduce scatter in P/S ratios, thus improve discrimination power. This is especially important for intermediate frequencies (2-6 Hz), which travel further than the higher frequencies that are typically used for discrimination. Previous work has focused on national priorities; however, for use by the international community, attenuation and site models must cover as much of the globe as possible. New amplitude quality control (QC) methods facilitate this effort. The most important step is to cluster events spatially, take ratios to remove path and site effects, and require the relative amplitudes to match predictions from an earthquake source model with variable moment and corner frequency. Data can then be stacked to form summary amplitudes for each cluster. We perform similar QC and stacking operations for multiple channels at each station, and for closely spaced stations. Data are inverted using a simultaneous multi-band, multi-phase approach that employs absolute spectral constraints on well-studied earthquakes. Global parameterization is obtained using publically available GeoTess software that allows for variable grid spacing. Attenuation results show remarkable, high-resolution correlation with regional geology and heat flow. Our data set includes regional explosion amplitudes from many sources, including LLNL and Leo Brady data for North America, and Borovoye Archive and ChISS data for Asia. We see dramatic improvement in high frequency P/S discrimination, world wide, after correcting for 2-D path and site effects.

  2. Government spending in education and economic growth in Cameroon:a Vector error Correction Model approach

    OpenAIRE

    Douanla Tayo, Lionel; Abomo Fouda, Marcel Olivier

    2015-01-01

    This study aims at assessing the effect of government spending in education on economic growth in Cameroon over the period 1980-2012 using a vector error correction model. The estimated results show that these expenditures had a significant and positive impact on economic growth both in short and long run. The estimated error correction model shows that an increase of 1% of the growth rate of private gross fixed capital formation and government education spending led to increases of 5.03% a...

  3. Long-range correlation in synchronization and syncopation tapping: a linear phase correction model.

    Directory of Open Access Journals (Sweden)

    Didier Delignières

    Full Text Available We propose in this paper a model for accounting for the increase in long-range correlations observed in asynchrony series in syncopation tapping, as compared with synchronization tapping. Our model is an extension of the linear phase correction model for synchronization tapping. We suppose that the timekeeper represents a fractal source in the system, and that a process of estimation of the half-period of the metronome, obeying a random-walk dynamics, combines with the linear phase correction process. Comparing experimental and simulated series, we show that our model allows accounting for the experimentally observed pattern of serial dependence. This model complete previous modeling solutions proposed for self-paced and synchronization tapping, for a unifying framework of event-based timing.

  4. Study on modeling of resist heating effect correction in EB mask writer EBM-9000

    Science.gov (United States)

    Nomura, Haruyuki; Kamikubo, Takashi; Suganuma, Mizuna; Kato, Yasuo; Yashima, Jun; Nakayamada, Noriaki; Anze, Hirohito; Ogasawara, Munehiro

    2015-07-01

    Resist heating effect which is caused in electron beam lithography by rise in substrate temperature of a few tens or hundreds of degrees changes resist sensitivity and leads to degradation of local critical dimension uniformity (LCDU). Increasing writing pass count and reducing dose per pass is one way to avoid the resist heating effect, but it worsens writing throughput. As an alternative way, NuFlare Technology is developing a heating effect correction system which corrects CD deviation induced by resist heating effect and mitigates LCDU degradation even in high dose per pass conditions. Our developing correction model is based on a dose modulation method. Therefore, a kind of conversion equation to modify the dose corresponding to CD change by temperature rise is necessary. For this purpose, a CD variation model depending on local pattern density was introduced and its validity was confirmed by experiments and temperature simulations. And then the dose modulation rate which is a parameter to be used in the heating effect correction system was defined as ideally irrelevant to the local pattern density, and the actual values were also determined with the experimental results for several resist types. The accuracy of the heating effect correction was also discussed. Even when deviations depending on the pattern density slightly remains in the dose modulation rates (i.e., not ideal in actual), the estimated residual errors in the correction are sufficiently small and acceptable for practical 2 pass writing with the constant dose modulation rates. In these results, it is demonstrated that the CD variation model is effective for the heating effect correction system.

  5. Predictive Modeling by the Cerebellum Improves Proprioception

    Science.gov (United States)

    Bhanpuri, Nasir H.; Okamura, Allison M.

    2013-01-01

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance. PMID:24005283

  6. Prediction of CT Substitutes from MR Images Based on Local Diffeomorphic Mapping for Brain PET Attenuation Correction.

    Science.gov (United States)

    Wu, Yao; Yang, Wei; Lu, Lijun; Lu, Zhentai; Zhong, Liming; Huang, Meiyan; Feng, Yanqiu; Feng, Qianjin; Chen, Wufan

    2016-10-01

    Attenuation correction is important for PET reconstruction. In PET/MR, MR intensities are not directly related to attenuation coefficients that are needed in PET imaging. The attenuation coefficient map can be derived from CT images. Therefore, prediction of CT substitutes from MR images is desired for attenuation correction in PET/MR. This study presents a patch-based method for CT prediction from MR images, generating attenuation maps for PET reconstruction. Because no global relation exists between MR and CT intensities, we propose local diffeomorphic mapping (LDM) for CT prediction. In LDM, we assume that MR and CT patches are located on 2 nonlinear manifolds, and the mapping from the MR manifold to the CT manifold approximates a diffeomorphism under a local constraint. Locality is important in LDM and is constrained by the following techniques. The first is local dictionary construction, wherein, for each patch in the testing MR image, a local search window is used to extract patches from training MR/CT pairs to construct MR and CT dictionaries. The k-nearest neighbors and an outlier detection strategy are then used to constrain the locality in MR and CT dictionaries. Second is local linear representation, wherein, local anchor embedding is used to solve MR dictionary coefficients when representing the MR testing sample. Under these local constraints, dictionary coefficients are linearly transferred from the MR manifold to the CT manifold and used to combine CT training samples to generate CT predictions. Our dataset contains 13 healthy subjects, each with T1- and T2-weighted MR and CT brain images. This method provides CT predictions with a mean absolute error of 110.1 Hounsfield units, Pearson linear correlation of 0.82, peak signal-to-noise ratio of 24.81 dB, and Dice in bone regions of 0.84 as compared with real CTs. CT substitute-based PET reconstruction has a regression slope of 1.0084 and R 2 of 0.9903 compared with real CT-based PET. In this method, no

  7. Derivation of a Clinical Model to Predict Unchanged Inpatient Echocardiograms.

    Science.gov (United States)

    Gunderson, Craig G; Gromisch, Elizabeth S; Chang, John J; Malm, Brian J

    2018-03-01

    Transthoracic echocardiography (TTE) is one of the most commonly ordered tests in healthcare. Repeat TTE, defined as a TTE done within 1 year of a prior TTE, represents 24% to 42% of all studies. The purpose of this study was to derive a clinical prediction model to predict unchanged repeat TTE, with the goal of defining a subset of studies that are unnecessary. Single-center retrospective cohort study of all hospitalized patients who had a repeat TTE between October 1, 2013, and September 30, 2014. Two hundred eleven of 601 TTEs were repeat studies, of which 78 (37%) had major changes. Five variables were independent predictors of major new TTE changes, including history of intervening acute myocardial infarction, cardiothoracic surgery, major new electrocardiogram (ECG) changes, prior valve disease, and chronic kidney disease. Using the β-coefficient for each of these variables, we defined a clinical prediction model that we named the CAVES score. The acronym CAVES stands for chronic kidney disease, acute myocardial infarction, valvular disease, ECG changes, and surgery (cardiac). The prevalence of major TTE change for the full cohort was 35%. For the group with a CAVES score of -1, that probability was only 5.6%; for the group with a score of 0, the probability was 17.7%; and for the group with a score ≥1, the probability was 55.3%. The bootstrap corrected C statistic for the model was 0.78 (95% confidence interval, 0.72-0.85), indicating good discrimination. Overall, the CAVES score had good discrimination and calibration. If further validated, it may be useful to predict repeat TTEs that are unlikely to have major changes.

  8. Prediction of Chemical Function: Model Development and ...

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present and therefore impacting exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning based models for classifying chemicals in terms of their likely functional roles in products based on structure were developed. This effort required collection, curation, and harmonization of publically-available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi

  9. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  10. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Full Text Available Background: Clostridium difficile infection (CDI is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. High rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR; however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via Polymerase Chain Reaction (PCR from February 2009 to Jun 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We came up with a model that was able to accurately predict the CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under curve of 82.6%. Like other similar studies that have used the RF model, we also had very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see a wider application.

  11. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available   Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at early time is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS, and slump of fresh concrete. Back-propagation neural networks model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from literature.    The test of the model by un-used data within the range of input parameters shows that the maximum absolute error for model is about 20% and 88% of the output results has absolute errors less than 10%. The parametric study shows that water/cement ratio (w/c is the most significant factor  affecting the output of the model.     The results showed that neural networks has strong potential as a feasible tool for predicting compressive strength of concrete.

  12. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of the terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which that an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models which are used to predict the future incidents based on the history of the incidents in an existing context. Generative models can be useful in planning for persistent Information Surveillance and Reconnaissance (ISR) since they allow an estimation of regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations

  13. PREDICTION MODELS OF GRAIN YIELD AND CHARACTERIZATION

    Directory of Open Access Journals (Sweden)

    Narciso Ysac Avila Serrano

    2009-06-01

    Full Text Available With the objective to characterize the grain yield of five cowpea cultivars and to find linear regression models to predict it, a study was developed in La Paz, Baja California Sur, Mexico. A complete randomized blocks design was used. Simple and multivariate analyses of variance were carried out using the canonical variables to characterize the cultivars. The variables cluster per plant, pods per plant, pods per cluster, seeds weight per plant, seeds hectoliter weight, 100-seed weight, seeds length, seeds wide, seeds thickness, pods length, pods wide, pods weight, seeds per pods, and seeds weight per pods, showed significant differences (P≤ 0.05 among cultivars. Paceño and IT90K-277-2 cultivars showed the higher seeds weight per plant. The linear regression models showed correlation coefficients ≥0.92. In these models, the seeds weight per plant, pods per cluster, pods per plant, cluster per plant and pods length showed significant correlations (P≤ 0.05. In conclusion, the results showed that grain yield differ among cultivars and for its estimation, the prediction models showed determination coefficients highly dependable.

  14. Validation of an Acoustic Impedance Prediction Model for Skewed Resonators

    Science.gov (United States)

    Howerton, Brian M.; Parrott, Tony L.

    2009-01-01

    An impedance prediction model was validated experimentally to determine the composite impedance of a series of high-aspect ratio slot resonators incorporating channel skew and sharp bends. Such structures are useful for packaging acoustic liners into constrained spaces for turbofan noise control applications. A formulation of the Zwikker-Kosten Transmission Line (ZKTL) model, incorporating the Richards correction for rectangular channels, is used to calculate the composite normalized impedance of a series of six multi-slot resonator arrays with constant channel length. Experimentally, acoustic data was acquired in the NASA Langley Normal Incidence Tube over the frequency range of 500 to 3500 Hz at 120 and 140 dB OASPL. Normalized impedance was reduced using the Two-Microphone Method for the various combinations of channel skew and sharp 90o and 180o bends. Results show that the presence of skew and/or sharp bends does not significantly alter the impedance of a slot resonator as compared to a straight resonator of the same total channel length. ZKTL predicts the impedance of such resonators very well over the frequency range of interest. The model can be used to design arrays of slot resonators that can be packaged into complex geometries heretofore unsuitable for effective acoustic treatment.

  15. Predicting fatigue crack initiation through image-based micromechanical modeling

    International Nuclear Information System (INIS)

    Cheong, K.-S.; Smillie, Matthew J.; Knowles, David M.

    2007-01-01

    The influence of individual grain orientation on early fatigue crack initiation in a four-point bend fatigue test was investigated numerically and experimentally. The 99.99% aluminium test sample was subjected to high cycle fatigue (HCF) and the top surface microstructure within the inner span of the sample was characterized using electron-beam backscattering diffraction (EBSD). Applying a finite-element submodelling approach, the microstructure was digitally reconstructed and refined studies carried out in regions where fatigue damage was observed. The constitutive behaviour of aluminium was described by a crystal plasticity model which considers the evolution of dislocations and accumulation of edge dislocation dipoles. Using an energy-based approach to quantify fatigue damage, the model correctly predicts regions in grains where early fatigue crack initiation was observed. The tendency for fatigue cracks to initiate in these grains appears to be strongly linked to the orientations of the grains relative to the direction of loading - grains less favourably aligned with respect to the loading direction appear more susceptible to fatigue crack initiation. The limitations of this modelling approach are also highlighted and discussed, as some grains predicted to initiate cracks did not show any visible signs of fatigue cracking in the same locations during testing

  16. Predictive Models for Normal Fetal Cardiac Structures.

    Science.gov (United States)

    Krishnan, Anita; Pike, Jodi I; McCarter, Robert; Fulgium, Amanda L; Wilson, Emmanuel; Donofrio, Mary T; Sable, Craig A

    2016-12-01

    Clinicians rely on age- and size-specific measures of cardiac structures to diagnose cardiac disease. No universally accepted normative data exist for fetal cardiac structures, and most fetal cardiac centers do not use the same standards. The aim of this study was to derive predictive models for Z scores for 13 commonly evaluated fetal cardiac structures using a large heterogeneous population of fetuses without structural cardiac defects. The study used archived normal fetal echocardiograms in representative fetuses aged 12 to 39 weeks. Thirteen cardiac dimensions were remeasured by a blinded echocardiographer from digitally stored clips. Studies with inadequate imaging views were excluded. Regression models were developed to relate each dimension to estimated gestational age (EGA) by dates, biparietal diameter, femur length, and estimated fetal weight by the Hadlock formula. Dimension outcomes were transformed (e.g., using the logarithm or square root) as necessary to meet the normality assumption. Higher order terms, quadratic or cubic, were added as needed to improve model fit. Information criteria and adjusted R 2 values were used to guide final model selection. Each Z-score equation is based on measurements derived from 296 to 414 unique fetuses. EGA yielded the best predictive model for the majority of dimensions; adjusted R 2 values ranged from 0.72 to 0.893. However, each of the other highly correlated (r > 0.94) biometric parameters was an acceptable surrogate for EGA. In most cases, the best fitting model included squared and cubic terms to introduce curvilinearity. For each dimension, models based on EGA provided the best fit for determining normal measurements of fetal cardiac structures. Nevertheless, other biometric parameters, including femur length, biparietal diameter, and estimated fetal weight provided results that were nearly as good. Comprehensive Z-score results are available on the basis of highly predictive models derived from gestational

  17. Performance of the high-resolution atmospheric model HRRR-AK for correcting geodetic observations from spaceborne radars.

    Science.gov (United States)

    Gong, W; Meyer, F J; Webley, P; Morton, D

    2013-10-27

    [1] Atmospheric phase delays are considered to be one of the main performance limitations for high-quality satellite radar techniques, especially when applied to ground deformation monitoring. Numerical weather prediction (NWP) models are widely seen as a promising tool for the mitigation of atmospheric delays as they can provide knowledge of the atmospheric conditions at the time of Synthetic Aperture Radar data acquisition. However, a thorough statistical analysis of the performance of using NWP production in radar signal correction is missing to date. This study provides a quantitative analysis of the accuracy in using operational NWP products for signal delay correction in satellite radar geodetic remote sensing. The study focuses on the temperate, subarctic, and Arctic climate regions due to a prevalence of relevant geophysical signals in these areas. In this study, the operational High Resolution Rapid Refresh over the Alaska region (HRRR-AK) model is used and evaluated. Five test sites were selected over Alaska (AK), USA, covering a wide range of climatic regimes that are commonly encountered in high-latitude regions. The performance of the HRRR-AK NWP model for correcting absolute atmospheric range delays of radar signals is assessed by comparing to radiosonde observations. The average estimation accuracy for the one-way zenith total atmospheric delay from 24 h simulations was calculated to be better than ∼14 mm. This suggests that the HRRR-AK operational products are a good data source for spaceborne geodetic radar observations atmospheric delay correction, if the geophysical signal to be observed is larger than 20 mm.

  18. Scaled distribution mapping: a bias correction method that preserves raw climate model projected changes

    Directory of Open Access Journals (Sweden)

    M. B. Switanek

    2017-06-01

    Full Text Available Commonly used bias correction methods such as quantile mapping (QM assume the function of error correction values between modeled and observed distributions are stationary or time invariant. This article finds that this function of the error correction values cannot be assumed to be stationary. As a result, QM lacks justification to inflate/deflate various moments of the climate change signal. Previous adaptations of QM, most notably quantile delta mapping (QDM, have been developed that do not rely on this assumption of stationarity. Here, we outline a methodology called scaled distribution mapping (SDM, which is conceptually similar to QDM, but more explicitly accounts for the frequency of rain days and the likelihood of individual events. The SDM method is found to outperform QM, QDM, and detrended QM in its ability to better preserve raw climate model projected changes to meteorological variables such as temperature and precipitation.

  19. A Bayesian model to estimate the true 3-D shadowing correction in sonic anemometers

    Science.gov (United States)

    Frank, J. M.; Massman, W. J.; Ewers, B. E.

    2015-12-01

    Sonic anemometers are the principal instruments used in micrometeorological studies of turbulence and ecosystem fluxes. Recent studies have shown the most common designs underestimate vertical wind measurements because they lack a correction for transducer and structural shadowing; there is no consensus describing a true correction. We introduce a novel Bayesian analysis with the potential to resolve the three-dimensional (3-D) correction by optimizing differences between anemometers mounted simultaneously vertical and horizontal. The analysis creates a geodesic grid around the sonic anemometer, defines a state variable for the 3-D correction at each point, and assigns each a prior distribution based on literature with ±10% uncertainty. We use the Markov chain Monte Carlo (MCMC) method to update and apply the 3-D correction to a dataset of 20-Hz sonic anemometer measurements, calculate five-minute standard deviations of the Cartesian wind components, and compare these statistics between vertical and horizontal anemometers. We present preliminary analysis of the CSAT3 anemometer using 642 grid points (±4.5° resolution) from 423 five-minute periods (8,964,000 samples) collected during field experiments in 2011 and 2013. The 20-Hz data was not equally distributed around the grid; half of the samples occurred in just 8% of the grid points. For populous grid points (weighted by the abundance of samples) the average correction increased from prior to posterior (+5.4±10.0% to +9.1±9.5%) while for desolate grid points (weighted by the sparseness of samples) there was minimal change (+6.4±10.0% versus +6.6±9.8%), demonstrating that with a sufficient number of samples the model can determine the true correction is ~67% higher than proposed in recent literature. Future adaptions will increase the grid resolution and sample size to reduce the uncertainty in the posterior distributions and more precisely quantify the 3-D correction.

  20. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) are characterized by zonally as well as latitudinally propagating fluctuations at frequencies downward of 0.5 day -1 . We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  1. An Anisotropic Hardening Model for Springback Prediction

    International Nuclear Information System (INIS)

    Zeng, Danielle; Xia, Z. Cedric

    2005-01-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closures panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbead during sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test

  2. Statistical correction of lidar-derived digital elevation models with multispectral airborne imagery in tidal marshes

    Science.gov (United States)

    Buffington, Kevin J.; Dugger, Bruce D.; Thorne, Karen M.; Takekawa, John Y.

    2016-01-01

    Airborne light detection and ranging (lidar) is a valuable tool for collecting large amounts of elevation data across large areas; however, the limited ability to penetrate dense vegetation with lidar hinders its usefulness for measuring tidal marsh platforms. Methods to correct lidar elevation data are available, but a reliable method that requires limited field work and maintains spatial resolution is lacking. We present a novel method, the Lidar Elevation Adjustment with NDVI (LEAN), to correct lidar digital elevation models (DEMs) with vegetation indices from readily available multispectral airborne imagery (NAIP) and RTK-GPS surveys. Using 17 study sites along the Pacific coast of the U.S., we achieved an average root mean squared error (RMSE) of 0.072 m, with a 40–75% improvement in accuracy from the lidar bare earth DEM. Results from our method compared favorably with results from three other methods (minimum-bin gridding, mean error correction, and vegetation correction factors), and a power analysis applying our extensive RTK-GPS dataset showed that on average 118 points were necessary to calibrate a site-specific correction model for tidal marshes along the Pacific coast. By using available imagery and with minimal field surveys, we showed that lidar-derived DEMs can be adjusted for greater accuracy while maintaining high (1 m) resolution.

  3. Color correction with blind image restoration based on multiple images using a low-rank model

    Science.gov (United States)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Due to the fact that the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks-including image denoising, image deblurring, and gray-scale image colorizing-can be performed simultaneously. Experiments have verified that our method can achieve consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.

  4. Prediction beyond the survey sample: correcting for survey effects on consumer decisions.

    NARCIS (Netherlands)

    C. Heij (Christiaan); Ph.H.B.F. Franses (Philip Hans)

    2006-01-01

    textabstractDirect extrapolation of survey results on purchase intentions may give a biased view on actual consumer behavior. This is because the purchase intentions of consumers may be affected by the survey itself. On the positive side, such effects can be incorporated in econometric models to get

  5. Ligand and structure-based classification models for Prediction of P-glycoprotein inhibitors

    DEFF Research Database (Denmark)

    Klepsch, Freya; Poongavanam, Vasanthanathan; Ecker, Gerhard Franz

    2014-01-01

    The ABC transporter P-glycoprotein (P-gp) actively transports a wide range of drugs and toxins out of cells, and is therefore related to multidrug resistance and the ADME profile of therapeutics. Thus, development of predictive in silico models for the identification of P-gp inhibitors is of great......Score resulted in the correct prediction of 61 % of the external test set. This demonstrates that ligand-based models currently remain the methods of choice for accurately predicting P-gp inhibitors. However, structure-based classification offers information about possible drug/protein interactions, which helps...

  6. Improved Model for Depth Bias Correction in Airborne LiDAR Bathymetry Systems

    Directory of Open Access Journals (Sweden)

    Jianhu Zhao

    2017-07-01

    Full Text Available Airborne LiDAR bathymetry (ALB is efficient and cost effective in obtaining shallow water topography, but often produces a low-accuracy sounding solution due to the effects of ALB measurements and ocean hydrological parameters. In bathymetry estimates, peak shifting of the green bottom return caused by pulse stretching induces depth bias, which is the largest error source in ALB depth measurements. The traditional depth bias model is often applied to reduce the depth bias, but it is insufficient when used with various ALB system parameters and ocean environments. Therefore, an accurate model that considers all of the influencing factors must be established. In this study, an improved depth bias model is developed through stepwise regression in consideration of the water depth, laser beam scanning angle, sensor height, and suspended sediment concentration. The proposed improved model and a traditional one are used in an experiment. The results show that the systematic deviation of depth bias corrected by the traditional and improved models is reduced significantly. Standard deviations of 0.086 and 0.055 m are obtained with the traditional and improved models, respectively. The accuracy of the ALB-derived depth corrected by the improved model is better than that corrected by the traditional model.

  7. Hydrological modeling as an evaluation tool of EURO-CORDEX climate projections and bias correction methods

    Science.gov (United States)

    Hakala, Kirsti; Addor, Nans; Seibert, Jan

    2017-04-01

    Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at the 0.11 degree 12.5 km resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX is comprised of regional climate model (RCM) simulations, which have been downscaled from global climate models (GCMs) from the CMIP5 archive, using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as an objective function to calibrate HBV Light. Ten best sets of parameters are then achieved by calibrating using the genetic algorithm and Powell optimization (GAP) method. The GAP optimization method is based on the evolution of parameter sets, which works by selecting and recombining high performing parameter sets with each other. Once HBV is calibrated, we then perform a quantitative comparison of the influence of biases inherited from climate model simulations to the biases stemming from the hydrological model. The evaluation is conducted over two time periods: i) 1980-2009 to characterize the simulation realism under the current climate and ii) 2070-2099 to identify the magnitude of the projected change of

  8. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry has accumulated more than 15 years of history already. Powered by the advances in the Internet technologies, the current generation of web systems are starting to expand into areas, traditional for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web is compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms, offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges toward management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.

  9. [Endometrial cancer: Predictive models and clinical impact].

    Science.gov (United States)

    Bendifallah, Sofiane; Ballester, Marcos; Daraï, Emile

    2017-12-01

    In France, in 2015, endometrial cancer (CE) is the first gynecological cancer in terms of incidence and the fourth cause of cancer of the woman. About 8151 new cases and nearly 2179 deaths have been reported. Treatments (surgery, external radiotherapy, brachytherapy and chemotherapy) are currently delivered on the basis of an estimation of the recurrence risk, an estimation of lymph node metastasis or an estimate of survival probability. This risk is determined on the basis of prognostic factors (clinical, histological, imaging, biological) taken alone or grouped together in the form of classification systems, which are currently insufficient to account for the evolutionary and prognostic heterogeneity of endometrial cancer. For endometrial cancer, the concept of mathematical modeling and its application to prediction have developed in recent years. These biomathematical tools have opened a new era of care oriented towards the promotion of targeted therapies and personalized treatments. Many predictive models have been published to estimate the risk of recurrence and lymph node metastasis, but a tiny fraction of them is sufficiently relevant and of clinical utility. The optimization tracks are multiple and varied, suggesting the possibility in the near future of a place for these mathematical models. The development of high-throughput genomics is likely to offer a more detailed molecular characterization of the disease and its heterogeneity. Copyright © 2017 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  10. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  11. Predictions of models for environmental radiological assessment

    International Nuclear Information System (INIS)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando

    2011-01-01

    In the field of environmental impact assessment, models are used for estimating source term, environmental dispersion and transfer of radionuclides, exposure pathway, radiation dose and the risk for human beings Although it is recognized that the specific information of local data are important to improve the quality of the dose assessment results, in fact obtaining it can be very difficult and expensive. Sources of uncertainties are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, used codes and general parameters. The various models available utilize different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs different models can produce very different outputs. This paper presents briefly the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. The intercomparison exercise of model supplied incompatible results for 137 Cs and 60 Co, enhancing the need for developing reference methodologies for environmental radiological assessment that allow to confront dose estimations in a common comparison base. The results of the intercomparison exercise are present briefly. (author)

  12. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  13. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the

  14. Model-Based Illumination Correction for Face Images in Uncontrolled Scenarios

    NARCIS (Netherlands)

    Boom, B.J.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2009-01-01

    Face Recognition under uncontrolled illumination conditions is partly an unsolved problem. Several illumination correction methods have been proposed, but these are usually tested on illumination conditions created in a laboratory. Our focus is more on uncontrolled conditions. We use the Phong model

  15. Center-of-mass corrections in the S+V potential model

    International Nuclear Information System (INIS)

    Palladino, B.E.

    1987-02-01

    Center-of-mass corrections to the mass spectrum and static properties of low-lying S-wave baryons and mesons are discussed in the context of a relativistic, independent quark model, based on a Dirac equation, with equally mixed scalar (S) and vector (V) confining potential. (author) [pt

  16. Predictive Models for Semiconductor Device Design and Processing

    Science.gov (United States)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1998-01-01

    The device feature size continues to be on a downward trend with a simultaneous upward trend in wafer size to 300 mm. Predictive models are needed more than ever before for this reason. At NASA Ames, a Device and Process Modeling effort has been initiated recently with a view to address these issues. Our activities cover sub-micron device physics, process and equipment modeling, computational chemistry and material science. This talk would outline these efforts and emphasize the interaction among various components. The device physics component is largely based on integrating quantum effects into device simulators. We have two parallel efforts, one based on a quantum mechanics approach and the second, a semiclassical hydrodynamics approach with quantum correction terms. Under the first approach, three different quantum simulators are being developed and compared: a nonequlibrium Green's function (NEGF) approach, Wigner function approach, and a density matrix approach. In this talk, results using various codes will be presented. Our process modeling work focuses primarily on epitaxy and etching using first-principles models coupling reactor level and wafer level features. For the latter, we are using a novel approach based on Level Set theory. Sample results from this effort will also be presented.

  17. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  18. Multipole correction of atomic monopole models of molecular charge distribution. I. Peptides

    Science.gov (United States)

    Sokalski, W. A.; Keller, D. A.; Ornstein, R. L.; Rein, R.

    1993-01-01

    The defects in atomic monopole models of molecular charge distribution have been analyzed for several model-blocked peptides and compared with accurate quantum chemical values. The results indicate that the angular characteristics of the molecular electrostatic potential around functional groups capable of forming hydrogen bonds can be considerably distorted within various models relying upon isotropic atomic charges only. It is shown that these defects can be corrected by augmenting the atomic point charge models by cumulative atomic multipole moments (CAMMs). Alternatively, sets of off-center atomic point charges could be automatically derived from respective multipoles, providing approximately equivalent corrections. For the first time, correlated atomic multipoles have been calculated for N-acetyl, N'-methylamide-blocked derivatives of glycine, alanine, cysteine, threonine, leucine, lysine, and serine using the MP2 method. The role of the correlation effects in the peptide molecular charge distribution are discussed.

  19. Kaluza-Klein Mode Corrections to B̄ → Xsγ in the ACD Model

    Science.gov (United States)

    Gao, Tie-Jun; Feng, Tai-Fu; Chen, Jian-Bin

    We discuss the corrections from the Kaluza-Klein (KK) modes to the branching ratio of the rare process B̄ → Xsγ in the extension of the standard model with a universal extra dimension, which was proposed by Appelquist, Cheng and Dobrescu (ACD). Assuming 1/R ≫ mW, we sum over the series composed of the KK towers and obtain the decoupling result in the limit 1/R → ∞. The numerical analysis indicates that the corrections from the KK excitations to the branching ratio of B̄ → Xsγ are about 3.5% for 1/R = 400 GeV.

  20. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudo-range and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (international GPS service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the Solar Maximum, they are in good agreement for middle latitudes. An overestimation of IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations.

  1. An employer brand predictive model for talent attraction and retention

    Directory of Open Access Journals (Sweden)

    Annelize Botha

    2011-11-01

    Full Text Available Orientation: In an ever-shrinking global talent pool, organisations use employer brand to attract and retain talent; however, in the absence of theoretical pointers, many organisations are losing out on a powerful business tool by not developing or maintaining their employer brand correctly. Research purpose: This study explores the current state of knowledge about employer brand and identifies the various employer brand building blocks which are conceptually integrated in a predictive model. Motivation for the study: The need for scientific progress through the accurate representation of a set of employer brand phenomena and propositions, which can be empirically tested, motivated this study. Research design, approach and method: This study was nonempirical in approach and searched for linkages between theoretical concepts by making use of relevant contextual data. Theoretical propositions which explain the identified linkages were developed for the purpose of further empirical research. Main findings: Key findings suggested that employer brand is influenced by target group needs, a differentiated Employer Value Proposition (EVP), the people strategy, brand consistency, communication of the employer brand and measurement of Human Resources (HR) employer branding efforts. Practical/managerial implications: The predictive model provides corporate leaders and their human resource functionaries with a theoretical pointer on employer brand which could guide more effective talent attraction and retention decisions. Contribution/value add: This study adds to the small base of research available on employer brand and contributes to both scientific progress as well as an improved practical understanding of factors which influence employer brand.

  2. Mathematical models for indoor radon prediction

    International Nuclear Information System (INIS)

    Malanca, A.; Pessina, V.; Dallara, G.

    1995-01-01

    It is known that the indoor radon (Rn) concentration can be predicted by means of mathematical models. The simplest model relies on two variables only: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model several environmental parameters are combined into a complex equation; besides, a correlation between the ventilation rate and the Rn entry rate from the soil is allowed for. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentrations were made inside two rooms placed on the second floor of a building block. One of the rooms had a single-glazed window whereas the other room had a double-pane window. During three different experimental protocols, the mean Rn concentration was always higher in the room with the double-glazed window. That behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model.
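
    The "simplest model" above is essentially a steady-state mass balance: the indoor concentration is the entry (source) rate divided by the ventilation removal rate. A minimal sketch (Python; the numbers are illustrative assumptions, and radioactive decay of Rn is neglected since its decay constant is small compared with typical air exchange rates):

        def indoor_radon_steady_state(entry_rate_bq_per_h, volume_m3, air_changes_per_h):
            # steady state: supply = removal by ventilation, C [Bq/m3] = E / (V * lambda_v)
            return entry_rate_bq_per_h / (volume_m3 * air_changes_per_h)

        # Example: 2000 Bq/h entering a 50 m3 room ventilated at 0.5 air changes per hour
        print(indoor_radon_steady_state(2000.0, 50.0, 0.5))  # -> 80 Bq/m3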

  3. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time......). Five technical and economic aspects are taken into account to schedule tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality...... recovery on the track quality after the tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is used in the proposed maintenance model for a time period of two to four years. The total cost can be reduced by up to 50...

  4. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.

  5. Explicit Modeling of Ancestry Improves Polygenic Risk Scores and BLUP Prediction.

    Science.gov (United States)

    Chen, Chia-Yen; Han, Jiali; Hunter, David J; Kraft, Peter; Price, Alkes L

    2015-09-01

    Polygenic prediction using genome-wide SNPs can provide high prediction accuracy for complex traits. Here, we investigate the question of how to account for genetic ancestry when conducting polygenic prediction. We show that the accuracy of polygenic prediction in structured populations may be partly due to genetic ancestry. However, we hypothesized that explicitly modeling ancestry could improve polygenic prediction accuracy. We analyzed three GWAS of hair color (HC), tanning ability (TA), and basal cell carcinoma (BCC) in European Americans (sample size from 7,440 to 9,822) and considered two widely used polygenic prediction approaches: polygenic risk scores (PRSs) and best linear unbiased prediction (BLUP). We compared polygenic prediction without correction for ancestry to polygenic prediction with ancestry as a separate component in the model. In 10-fold cross-validation using the PRS approach, the R(2) for HC increased by 66% (0.0456-0.0755; P ancestry, which prevents ancestry effects from entering into each SNP effect and being overweighted. Surprisingly, explicitly modeling ancestry produces a similar improvement when using the BLUP approach, which fits all SNPs simultaneously in a single variance component and causes ancestry to be underweighted. We validate our findings via simulations, which show that the differences in prediction accuracy will increase in magnitude as sample sizes increase. In summary, our results show that explicitly modeling ancestry can be important in both PRS and BLUP prediction. © 2015 WILEY PERIODICALS, INC.

  6. Explicit modeling of ancestry improves polygenic risk scores and BLUP prediction

    Science.gov (United States)

    Chen, Chia-Yen; Han, Jiali; Hunter, David J.; Kraft, Peter; Price, Alkes L.

    2016-01-01

    Polygenic prediction using genome-wide SNPs can provide high prediction accuracy for complex traits. Here, we investigate the question of how to account for genetic ancestry when conducting polygenic prediction. We show that the accuracy of polygenic prediction in structured populations may be partly due to genetic ancestry. However, we hypothesized that explicitly modeling ancestry could improve polygenic prediction accuracy. We analyzed three GWAS of hair color, tanning ability and basal cell carcinoma (BCC) in European Americans (sample size from 7,440 to 9,822) and considered two widely used polygenic prediction approaches: polygenic risk scores (PRS) and Best Linear Unbiased Prediction (BLUP). We compared polygenic prediction without correction for ancestry to polygenic prediction with ancestry as a separate component in the model. In 10-fold cross-validation using the PRS approach, the R2 for hair color increased by 66% (0.0456 to 0.0755; pancestry, which prevents ancestry effects from entering into each SNP effect and being over-weighted. Surprisingly, explicitly modeling ancestry produces a similar improvement when using the BLUP approach, which fits all SNPs simultaneously in a single variance component and causes ancestry to be underweighted. We validate our findings via simulations, which show that the differences in prediction accuracy will increase in magnitude as sample sizes increase. In summary, our results show that explicitly modeling ancestry can be important in both PRS and BLUP prediction. PMID:25995153

  7. An Operational Model for the Prediction of Jet Blast

    Science.gov (United States)

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules including a jet exhaust model, a jet centerline decay model and an aircraft motion model. The final analysis was compared with d...

  8. Correction of TRMM 3B42V7 Based on Linear Regression Models over China

    Directory of Open Access Journals (Sweden)

    Shaohua Liu

    2016-01-01

    Full Text Available High temporal-spatial precipitation is necessary for hydrological simulation and water resource management, and remotely sensed precipitation products (RSPPs) play a key role in supporting high temporal-spatial precipitation, especially in sparse-gauge regions. TRMM 3B42V7 data (TRMM precipitation) is an essential RSPP outperforming other RSPPs. Yet the utilization of TRMM precipitation is still limited by its inaccuracy and low spatial resolution at the regional scale. In this paper, linear regression models (LRMs) have been constructed to correct and downscale the TRMM precipitation based on the gauge precipitation at 2257 stations over China from 1998 to 2013. Then, the corrected TRMM precipitation was validated by gauge precipitation at 839 out of 2257 stations in 2014 at station and grid scales. The results show that both monthly and annual LRMs have obviously improved the accuracy of the corrected TRMM precipitation with acceptable error, and the monthly LRM performs slightly better than the annual LRM in Mideastern China. Although the performance of the corrected TRMM precipitation from the LRMs has improved in Northwest China and the Tibetan Plateau, the error of the corrected TRMM precipitation is still significant due to the large deviation between TRMM precipitation and the low-density gauge precipitation.
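
    The correction strategy described above, one linear model per calendar month relating gauge to TRMM precipitation, can be sketched as follows (Python; the array names, the least-squares fit and the non-negativity clipping are illustrative assumptions, not the authors' exact procedure):

        import numpy as np

        def fit_monthly_lrms(trmm, gauge, months):
            # one linear model per calendar month: gauge ~ a_m * trmm + b_m
            return {m: np.polyfit(trmm[months == m], gauge[months == m], 1)
                    for m in range(1, 13)}

        def correct_trmm(trmm, months, coeffs):
            out = np.empty_like(trmm, dtype=float)
            for m, (a, b) in coeffs.items():
                idx = months == m
                out[idx] = np.clip(a * trmm[idx] + b, 0.0, None)  # no negative precipitation
            return out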

  9. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space...... model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...

  10. ROI-ORIENTATED SENSOR CORRECTION BASED ON VIRTUAL STEADY REIMAGING MODEL FOR WIDE SWATH HIGH RESOLUTION OPTICAL SATELLITE IMAGERY

    Directory of Open Access Journals (Sweden)

    Y. Zhu

    2017-09-01

    Full Text Available To meet the requirement of high-accuracy and high-speed processing of wide-swath high-resolution optical satellite imagery under emergency situations, in both ground processing systems and on-board processing systems, this paper proposes a ROI-orientated sensor correction algorithm based on a virtual steady reimaging model for wide-swath high-resolution optical satellite imagery. Firstly, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated based on the coordinate mapping relationship, which is established by the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experimental results show that the image registration between panchromatic and multispectral images can be well achieved and that the image distortion caused by satellite jitter can also be corrected efficiently.

  11. Predicting the influence of a p2-symmetric substrate on molecular self-organization with an interaction-site model.

    Science.gov (United States)

    Rohr, Carsten; Balbás Gambra, Marta; Gruber, Kathrin; Höhl, Cornelia; Malarek, Michael S; Scherer, Lukas J; Constable, Edwin C; Franosch, Thomas; Hermann, Bianca A

    2011-02-14

    An interaction-site model can a priori predict molecular self-organisation on a new substrate in Monte Carlo simulations. This is experimentally confirmed with scanning tunnelling microscopy on Fréchet dendrons of a pentacontane template. Local and global ordering motifs, inclusion molecules and a rotated unit cell are correctly predicted.

  12. The ρ - ω mass difference in a relativistic potential model with pion corrections

    International Nuclear Information System (INIS)

    Palladino, B.E.; Ferreira, P.L.

    1988-01-01

    The problem of the ρ - ω mass difference is studied in the framework of the relativistic, harmonic, S+V independent quark model implemented by center-of-mass, one-gluon exchange and pion-cloud corrections stemming from the requirement of chiral symmetry in the (u,d) SU(2) flavour sector of the model. The pionic self-energy corrections with different intermediate energy states are instrumental in the analysis of the problem, which requires an appropriate parametrization of the mesonic sector different from that previously used to calculate the mass spectrum of the S-wave baryons. The right ρ - ω mass splitting is found, together with a satisfactory value for the mass of the pion, calculated as a bound state of a quark-antiquark pair. An analogous discussion based on the cloudy-bag model is also presented. (author) [pt

  13. The Export Supply Model of Bangladesh: An Application of Cointegration and Vector Error Correction Approaches

    Directory of Open Access Journals (Sweden)

    Mahmudul Mannan Toy

    2011-01-01

    Full Text Available The broad objective of this study is to empirically estimate the export supply model of Bangladesh. The techniques of cointegration, Engle-Granger causality and Vector Error Correction are applied to estimate the export supply model. The econometric analysis is done using time series data on the variables of interest, collected from various secondary sources. The study has empirically tested the hypotheses of a long-run relationship and causality between the variables of the model. The cointegration analysis shows that all the variables of the study are co-integrated at their first differences, meaning that there exists a long-run relationship among the variables. The VECM estimation shows the dynamics of the variables in the export supply function and the short-run and long-run elasticities of export supply with respect to each independent variable. The error correction term is found to be negative, which indicates that any short-run disequilibrium will be corrected toward equilibrium in the long run.
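
    A minimal sketch of the estimation steps described above, using statsmodels (the variable layout, lag order and deterministic terms are assumptions; the study's actual specification may differ):

        import numpy as np
        from statsmodels.tsa.stattools import coint
        from statsmodels.tsa.vector_ar.vecm import VECM

        def estimate_export_supply(levels):
            # levels: (T, k) array of the model variables in levels, exports in column 0
            t_stat, p_value, _ = coint(levels[:, 0], levels[:, 1])   # Engle-Granger test
            # VECM with one cointegrating relation; alpha holds the error-correction
            # (adjustment) coefficients, beta the long-run cointegrating vector
            res = VECM(levels, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
            return p_value, res.alpha, res.beta

    A negative and significant adjustment coefficient in the exports equation corresponds to the error-correction behaviour reported above.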

  14. Generalized Density-Corrected Model for Gas Diffusivity in Variably Saturated Soils

    DEFF Research Database (Denmark)

    Chamindu, Deepagoda; Møldrup, Per; Schjønning, Per

    2011-01-01

    models. The GDC model was further extended to describe two-region (bimodal) soils and could describe and predict Dp/Do well for both different soil aggregate size fractions and variably compacted volcanic ash soils. A possible use of the new GDC model is in engineering applications such as the design...... of highly compacted landfill site caps....

  15. Correction induced by irrelevant operators in the correlators of the two-dimensional Ising model in a magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Caselle, M.; Grinza, P. [Dipartimento di Fisica Teorica dell' Universita di Torino and Istituto Nazionale di Fisica Nucleare, Sezione di Torino, Torino (Italy)]. E-mails: caselle@to.infn.it; grinza@to.infn.it; Magnoli, N. [Dipartimento di Fisica, Universita di Genova and Istituto Nazionale di Fisica Nucleare, Sezione di Genova, Genova (Italy)]. E-mail: magnoli@ge.infn.it

    2001-10-26

    We investigate the presence of irrelevant operators in the two-dimensional Ising model perturbed by a magnetic field, by studying the corrections induced by these operators in the spin-spin correlator of the model. To this end we perform a set of high-precision simulations for the correlator both along the axes and along the diagonal of the lattice. By comparing the numerical results with the predictions of a perturbative expansion around the critical point we find unambiguous evidence of the presence of such irrelevant operators. It turns out that among the irrelevant operators the one which gives the largest correction is the spin-4 operator T² + T̄², which accounts for the breaking of the rotational invariance due to the lattice. This result agrees with what was already known for the correlator evaluated exactly at the critical point and also with recent results obtained in the case of the thermal perturbation of the model. (author)

  16. A prediction model for the grade of liver fibrosis using magnetic resonance elastography.

    Science.gov (United States)

    Mitsuka, Yusuke; Midorikawa, Yutaka; Abe, Hayato; Matsumoto, Naoki; Moriyama, Mitsuhiko; Haradome, Hiroki; Sugitani, Masahiko; Tsuji, Shingo; Takayama, Tadatoshi

    2017-11-28

    Liver stiffness measurement (LSM) has recently become available for assessment of liver fibrosis. We aimed to develop a prediction model for liver fibrosis using clinical variables, including LSM. We performed a prospective study to compare liver fibrosis grade with fibrosis score. LSM was measured using magnetic resonance elastography in 184 patients who underwent liver resection, and liver fibrosis grade was diagnosed histologically after surgery. Using the prediction model established in the training group, we validated the classification accuracy in the independent test group. First, we determined a cut-off value for stratifying fibrosis grade using LSM in 122 patients in the training group, and correctly diagnosed the fibrosis grades of 62 patients in the test group with a total accuracy of 69.3%. Next, on least absolute shrinkage and selection operator analysis in the training group, LSM (r = 0.687, P prediction model. This prediction model applied to the test group correctly diagnosed 32 of 36 (88.8%) Grade I (F0 and F1) patients, 13 of 18 (72.2%) Grade II (F2 and F3) patients, and 7 of 8 (87.5%) Grade III (F4) patients, with a total accuracy of 83.8%. The prediction model based on LSM, ICGR15, and platelet count can accurately and reproducibly predict liver fibrosis grade.

  17. Ab initio thermochemistry using optimal-balance models with isodesmic corrections: The ATOMIC protocol

    Science.gov (United States)

    Bakowies, Dirk

    2009-04-01

    A theoretical composite approach, termed ATOMIC for Ab initio Thermochemistry using Optimal-balance Models with Isodesmic Corrections, is introduced for the calculation of molecular atomization energies and enthalpies of formation. Care is taken to achieve optimal balance in accuracy and cost between the various components contributing to high-level estimates of the fully correlated energy at the infinite-basis-set limit. To this end, the energy at the coupled-cluster level of theory including single, double, and quasiperturbational triple excitations is decomposed into Hartree-Fock, low-order correlation (MP2, CCSD), and connected-triples contributions and into valence-shell and core contributions. Statistical analyses for 73 representative neutral closed-shell molecules containing hydrogen and at least three first-row atoms (CNOF) are used to devise basis-set and extrapolation requirements for each of the eight components to maintain a given level of accuracy. Pople's concept of bond-separation reactions is implemented in an ab initio framework, providing for a complete set of high-level precomputed isodesmic corrections which can be used for any molecule for which a valence structure can be drawn. Use of these corrections is shown to lower basis-set requirements dramatically for each of the eight components of the composite model. A hierarchy of three levels is suggested for isodesmically corrected composite models which reproduce atomization energies at the reference level of theory to within 0.1 kcal/mol (A), 0.3 kcal/mol (B), and 1 kcal/mol (C). Large-scale statistical analysis shows that corrections beyond the CCSD(T) reference level of theory, including coupled-cluster theory with fully relaxed connected triple and quadruple excitations, first-order relativistic and diagonal Born-Oppenheimer corrections can normally be dealt with using a greatly simplified model that assumes thermoneutral bond-separation reactions and that reduces the estimate of these

  18. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminance correction and optimized prediction

    Science.gov (United States)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing a natural and real scene as we see it in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, the fact that more information is displayed requires supporting technologies such as digital compression to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step has been replaced by a hybrid step that consists of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.

  19. Splice-correcting oligonucleotides restore BTK function in X-linked agammaglobulinemia model.

    Science.gov (United States)

    Bestas, Burcu; Moreno, Pedro M D; Blomberg, K Emelie M; Mohammad, Dara K; Saleh, Amer F; Sutlu, Tolga; Nordin, Joel Z; Guterstam, Peter; Gustafsson, Manuela O; Kharazi, Shabnam; Piątosa, Barbara; Roberts, Thomas C; Behlke, Mark A; Wood, Matthew J A; Gait, Michael J; Lundin, Karin E; El Andaloussi, Samir; Månsson, Robert; Berglöf, Anna; Wengel, Jesper; Smith, C I Edvard

    2014-09-01

    X-linked agammaglobulinemia (XLA) is an inherited immunodeficiency that results from mutations within the gene encoding Bruton's tyrosine kinase (BTK). Many XLA-associated mutations affect splicing of BTK pre-mRNA and severely impair B cell development. Here, we assessed the potential of antisense, splice-correcting oligonucleotides (SCOs) targeting mutated BTK transcripts for treating XLA. Both the SCO structural design and chemical properties were optimized using 2'-O-methyl, locked nucleic acid, or phosphorodiamidate morpholino backbones. In order to have access to an animal model of XLA, we engineered a transgenic mouse that harbors a BAC with an authentic, mutated, splice-defective human BTK gene. BTK transgenic mice were bred onto a Btk knockout background to avoid interference of the orthologous mouse protein. Using this model, we determined that BTK-specific SCOs are able to correct aberrantly spliced BTK in B lymphocytes, including pro-B cells. Correction of BTK mRNA restored expression of functional protein, as shown both by enhanced lymphocyte survival and reestablished BTK activation upon B cell receptor stimulation. Furthermore, SCO treatment corrected splicing and restored BTK expression in primary cells from patients with XLA. Together, our data demonstrate that SCOs can restore BTK function and that BTK-targeting SCOs have potential as personalized medicine in patients with XLA.

  20. PREDICTIVE CAPACITY OF INSOLVENCY MODELS BASED ON ACCOUNTING NUMBERS AND DESCRIPTIVE DATA

    Directory of Open Access Journals (Sweden)

    Rony Petson Santana de Souza

    2012-09-01

    Full Text Available In Brazil, research into models to predict insolvency started in the 1970s, with most authors using discriminant analysis as a statistical tool in their models. In more recent years, authors have increasingly tried to verify whether it is possible to forecast insolvency using descriptive data contained in firms’ reports. This study examines the capacity of some insolvency models to predict the failure of Brazilian companies that have gone bankrupt. The study is descriptive in nature with a quantitative approach, based on documentary research. The sample is composed of 13 companies that were declared bankrupt between 1997 and 2003. The results indicate that the majority of the insolvency prediction models tested showed high rates of correct forecasts. The models relying on descriptive reports were on average more likely to succeed than those based on accounting figures. These findings demonstrate that although some studies indicate a lack of validity of predictive models created in different business settings, some of these models have good capacity to forecast insolvency in Brazil. We can conclude that both models based on accounting numbers and those relying on descriptive reports can predict the failure of firms. Therefore, it can be inferred that the majority of bankruptcy prediction models that make use of accounting numbers can succeed in predicting the failure of firms.

  1. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use

  2. Predictive modeling: potential application in prevention services.

    Science.gov (United States)

    Wilson, Moira L; Tumen, Sarah; Ota, Rissa; Simmers, Anthony G

    2015-05-01

    In 2012, the New Zealand Government announced a proposal to introduce predictive risk models (PRMs) to help professionals identify and assess children at risk of abuse or neglect as part of a preventive early intervention strategy, subject to further feasibility study and trialing. The purpose of this study is to examine technical feasibility and predictive validity of the proposal, focusing on a PRM that would draw on population-wide linked administrative data to identify newborn children who are at high priority for intensive preventive services. Data analysis was conducted in 2013 based on data collected in 2000-2012. A PRM was developed using data for children born in 2010 and externally validated for children born in 2007, examining outcomes to age 5 years. Performance of the PRM in predicting administratively recorded substantiations of maltreatment was good compared to the performance of other tools reviewed in the literature, both overall, and for indigenous Māori children. Some, but not all, of the children who go on to have recorded substantiations of maltreatment could be identified early using PRMs. PRMs should be considered as a potential complement to, rather than a replacement for, professional judgment. Trials are needed to establish whether risks can be mitigated and PRMs can make a positive contribution to frontline practice, engagement in preventive services, and outcomes for children. Deciding whether to proceed to trial requires balancing a range of considerations, including ethical and privacy risks and the risk of compounding surveillance bias. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  3. Bias Correction for climate impact modeling within the framework of the HAPPI Initiative

    Science.gov (United States)

    Saeed, Fahad; Lange, Stefan; Schleussner, Carl-Friedrich

    2017-04-01

    In its landmark Paris Agreement of 2015, the Conference of the Parties of the United Nations Framework Convention on Climate Change (UNFCCC) invited the IPCC to prepare a special report "on the impacts of global warming of 1.5°C above pre-industrial levels and related greenhouse gas emission pathways" by 2018. Unfortunately, most current experiments (including the Coupled Model Inter-comparison Project (CMIP)) are not specifically designed for making a substantial contribution to this report. To fill this gap, the HAPPI (Half a degree Additional warming, Projection, Prognosis and Impacts) initiative has been designed to assess climate projections, and in particular extreme weather, at present day and in worlds that are 1.5°C and 2.0°C warmer than pre-industrial conditions. Global Climate Model (GCM) output for HAPPI will be utilized to assess climate impacts with a range of sectorial climate impact models. Before the use of climate data as input for sectorial impact models, statistical bias correction is commonly applied to correct climate model data for systematic deviations of the simulated historic data from observations and to increase the accuracy of the projections. Different approaches have been adopted for this purpose; the most common are those based on transfer functions generated to map the distribution of the simulated historical data to that of the observations. In the current study, we present results for a novel bias correction method developed for the Inter-Sectoral Impact Model Intercomparison Project Phase 2b (ISIMIP2b), applied to the output of different GCMs generated within the HAPPI project. The results indicate that applying bias correction substantially improves agreement with observational data. Besides the marked improvement in seasonal mean differences for different variables, the output for extreme-event indicators is also considerably improved. We conclude that the application of bias
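
    The transfer-function idea mentioned above can be illustrated with plain empirical quantile mapping (Python; this is a generic sketch and not the trend-preserving ISIMIP2b method itself, and the quantile grid is an assumption):

        import numpy as np

        def empirical_quantile_map(model_hist, obs_hist, model_new):
            # transfer function: model value -> quantile in the model climate
            #                    -> value at the same quantile in the observations
            q = np.linspace(0.01, 0.99, 99)
            return np.interp(model_new,
                             np.quantile(model_hist, q),
                             np.quantile(obs_hist, q))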

  4. Retrospective Correction of Physiological Noise in DTI Using an Extended Tensor Model and Peripheral Measurements

    Science.gov (United States)

    Mohammadi, Siawoosh; Hutton, Chloe; Nagy, Zoltan; Josephs, Oliver; Weiskopf, Nikolaus

    2013-01-01

    Diffusion tensor imaging is widely used in research and clinical applications, but this modality is highly sensitive to artefacts. We developed an easy-to-implement extension of the original diffusion tensor model to account for physiological noise in diffusion tensor imaging using measures of peripheral physiology (pulse and respiration), the so-called extended tensor model. Within the framework of the extended tensor model two types of regressors, which respectively modeled small (linear) and strong (nonlinear) variations in the diffusion signal, were derived from peripheral measures. We tested the performance of four extended tensor models with different physiological noise regressors on nongated and gated diffusion tensor imaging data, and compared it to an established data-driven robust fitting method. In the brainstem and cerebellum the extended tensor models reduced the noise in the tensor-fit by up to 23% in accordance with previous studies on physiological noise. The extended tensor model addresses both large-amplitude outliers and small-amplitude signal-changes. The framework of the extended tensor model also facilitates further investigation into physiological noise in diffusion tensor imaging. The proposed extended tensor model can be readily combined with other artefact correction methods such as robust fitting and eddy current correction. PMID:22936599

  5. Heuristic Modeling for TRMM Lifetime Predictions

    Science.gov (United States)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

    Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial-off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful for missions such as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
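
    Structurally, the heuristic method is just a small look-up table plus a simple engine model. A sketch of that structure (Python; every number below, the table values, the delta-v per maneuver, the Isp and the spacecraft mass, is a made-up placeholder, not mission data):

        import numpy as np

        # hypothetical look-up table: maneuvers per month vs. solar flux index (rows)
        # and ballistic coefficient (columns)
        F107_AXIS = np.array([70.0, 150.0, 230.0])
        BC_AXIS = np.array([50.0, 100.0, 150.0])          # kg/m^2
        MANEUVERS = np.array([[1.0, 0.6, 0.4],
                              [3.0, 1.8, 1.2],
                              [6.0, 3.6, 2.4]])

        def maneuvers_per_month(f107, bc):
            # bilinear interpolation in the table
            by_bc = np.array([np.interp(bc, BC_AXIS, row) for row in MANEUVERS])
            return np.interp(f107, F107_AXIS, by_bc)

        def fuel_per_month(f107, bc, dv=0.5, isp=220.0, mass=2600.0):
            # simple engine model: propellant per maneuver from the rocket equation
            g0 = 9.80665
            dm = mass * (1.0 - np.exp(-dv / (isp * g0)))
            return maneuvers_per_month(f107, bc) * dm     # kg of propellant per month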

  6. A Computational Model for Predicting Gas Breakdown

    Science.gov (United States)

    Gill, Zachary

    2017-10-01

    Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regards to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.

  7. Hydrological Modeling in Northern Tunisia with Regional Climate Model Outputs: Performance Evaluation and Bias-Correction in Present Climate Conditions

    Directory of Open Access Journals (Sweden)

    Asma Foughali

    2015-07-01

    Full Text Available This work aims to evaluate the performance of a hydrological balance model in a watershed located in northern Tunisia (wadi Sejnane, 378 km2) in present climate conditions using input variables provided by four regional climate models. A modified version (MBBH) of the lumped and single-layer surface model BBH (Bucket with Bottom Hole), in which pedo-transfer parameters estimated using watershed physiographic characteristics are introduced, is adopted to simulate the water balance components. Only two parameters, representing respectively the water retention capacity of the soil and the vegetation resistance to evapotranspiration, are calibrated using rainfall-runoff data. The evaluation criteria for the MBBH model calibration are: relative bias, mean square error and the ratio of mean actual evapotranspiration to mean potential evapotranspiration. Daily air temperature, rainfall and runoff observations are available from 1960 to 1984. The period 1960–1971 is selected for calibration while the period 1972–1984 is chosen for validation. Air temperature and precipitation series are provided by four regional climate models (DMI, ARP, SMH and ICT) from the European program ENSEMBLES, forced by two global climate models (GCMs): ECHAM and ARPEGE. The regional climate model outputs (precipitation and air temperature) are compared to the observations in terms of statistical distribution. The analysis was performed at the seasonal scale for precipitation. We found that RCM precipitation must be corrected before being introduced as MBBH inputs. Thus, a non-parametric quantile-quantile bias correction method together with a dry-day correction is employed. Finally, simulated runoff generated using corrected precipitation from the regional climate model SMH is found to be the most acceptable, by comparison with runoff simulated using observed precipitation data, in reproducing the temporal variability of mean monthly runoff. The SMH model is the most accurate to

  8. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  9. Adolescent Idiopathic Scoliosis Thoracic Volume Modeling: The Effect of Surgical Correction.

    Science.gov (United States)

    Wozniczka, Jennifer K; Ledonio, Charles G T; Polly, David W; Rosenstein, Benjamin E; Nuckley, David J

    2017-12-01

    Scoliosis has been shown to have detrimental effects on pulmonary function, traditionally measured by pulmonary function tests, which is theorized to be correlated with the distortion of the spine and thorax. The changes in thoracic volume with surgical correction have not been well quantified. This study seeks to define the effect of surgical correction on thoracic volume in patients with adolescent idiopathic scoliosis. Images were obtained from adolescents with idiopathic scoliosis enrolled in a multicenter database (Prospective Pediatric Scoliosis Study). A convenience sample of patients with Lenke type 1 curves with a complete data set meeting specific parameters was used. Blender v2.63a software was used to construct a 3-dimensional (3D) computational model of the spine from 2-dimensional calibrated radiographs. To accomplish this, the 3D thorax model was deformed to match the calibrated radiographs. The thorax volume was then calculated in cubic centimeters using Mimics v15 software. The results using this computational modeling technique demonstrated that surgical correction resulted in a decreased curve measurement as determined by the Cobb method and an increased postoperative thoracic volume, as expected. Thoracic volume significantly increased by a mean of 567 mm (P3D changes in thoracic volumes using 2-dimensional imaging. This is an assessment of the novel modeling technique, to be used in larger future studies to assess clinical significance. Level 3: retrospective comparison of prospectively collected data.

  10. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections

    Directory of Open Access Journals (Sweden)

    Fei Guo

    2016-06-01

    Full Text Available The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications which use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Different from the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, is given together with the correction values in the improved model. However, only correction values were given while the precision indexes were completely missing in the traditional model. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of the corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which serves for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations can be greatly removed, and the resulting wide-lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the
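
    In practice the improved model boils down to interpolating, per satellite group and frequency, both a correction value and its precision at the observation elevation, and feeding the latter into the observation variance. A sketch (Python; every number in the table is a hypothetical placeholder, not the published corrections):

        import numpy as np

        ELEV = np.array([0., 10., 20., 30., 40., 50., 60., 70., 80., 90.])
        CORR = np.array([-0.55, -0.45, -0.30, -0.20, -0.10, 0.00, 0.10, 0.25, 0.35, 0.45])
        SIGMA = np.array([0.10, 0.08, 0.06, 0.05, 0.04, 0.04, 0.04, 0.05, 0.06, 0.08])

        def correct_code(pseudorange, elev_deg, sigma_code=0.3):
            dc = np.interp(elev_deg, ELEV, CORR)      # correction value (m)
            sc = np.interp(elev_deg, ELEV, SIGMA)     # its 1-sigma precision (m)
            corrected = pseudorange - dc
            variance = sigma_code**2 + sc**2          # stochastic model includes the correction uncertainty
            return corrected, variance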

  11. Which method predicts recidivism best?: A comparison of statistical, machine learning, and data mining predictive models

    OpenAIRE

    Tollenaar, N.; van der Heijden, P.G.M.

    2012-01-01

    Using criminal population conviction histories of recent offenders, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining and machine learning provide an improvement in predictive performance over classical statistical methods, namely logistic regression and linear discriminant analysis. These models are compared ...

  12. A Big Data Approach for Situation-Aware estimation, correction and prediction of aerosol effects, based on MODIS Joint Atmosphere product (collection 6) time series data

    Science.gov (United States)

    Singh, A. K.; Toshniwal, D.

    2017-12-01

    The MODIS Joint Atmosphere products, MODATML2 and MYDATML2 L2/3, provided by LAADS DAAC (Level-1 and Atmosphere Archive & Distribution System Distributed Active Archive Center) and re-sampled from medium-resolution MODIS Terra/Aqua satellite data at 5 km scale, contain Cloud Reflectance, Cloud Top Temperature, Water Vapor, Aerosol Optical Depth/Thickness and Humidity data. These re-sampled data, when used for deriving climatic effects of aerosols (particularly in the case of the cooling effect), still expose limitations in the presence of uncertainty measures in atmospheric artifacts such as aerosol, cloud, cirrus cloud, etc. The effect of uncertainty measures in these artifacts poses an important challenge for the estimation of aerosol effects, directly affecting precise regional weather modeling and predictions: forecasting and recommendation applications developed largely depend on these short-term local conditions (e.g. city/locality-based recommendations to citizens/farmers based on local weather models). Our approach incorporates artificial intelligence techniques for representing heterogeneous data (satellite data along with air quality data from local weather stations, i.e. in situ data) to learn, correct and predict aerosol effects in the presence of cloud and other atmospheric artifacts, fusing spatio-temporal correlations and regressions. The Big Data process pipeline, consisting of correlation and regression techniques developed on the Apache Spark platform, can easily scale to large data sets including many tiles (scenes) and over a widened time-scale. Keywords: Climatic Effects of Aerosols, Situation-Aware, Big Data, Apache Spark, MODIS Terra/Aqua, Time Series

  13. Measurements and IRI Model Predictions During the Recent Solar Minimum

    Science.gov (United States)

    Bilitza, Dieter; Brown, Steven A.; Wang, Mathew Y.; Souza, Jonas R.; Roddy, Patrick A.

    2012-01-01

    Cycle 23 was exceptional in that it lasted almost two years longer than its predecessors and in that it ended in an extended minimum period that proved all predictions wrong. Comparisons of the International Reference Ionosphere (IRI) with CHAMP and GRACE in-situ measurements of electron density during the minimum have revealed significant discrepancies at 400-500 km altitude. Our study investigates the causes of these discrepancies with the help of ionosonde and Planar Langmuir Probe (PLP) data from the Communications/Navigation Outage Forecasting System (C/NOFS) satellite. Our C/NOFS comparisons confirm the earlier CHAMP and GRACE results. But the ionosonde measurements of the F-peak plasma frequency (foF2) show generally good agreement throughout the whole solar cycle. At mid-latitude stations yearly averages of the data-model difference are within 10% and at low-latitude stations within 20%. The 60-70% differences found at 400-500 km altitude are not seen at the F peak. We will discuss how these seemingly contradicting results from the ionosonde and in situ data-model comparisons can be explained and which parameters need to be corrected in the IRI model.

  14. NTCP modelling of lung toxicity after SBRT comparing the universal survival curve and the linear quadratic model for fractionation correction

    International Nuclear Information System (INIS)

    Wennberg, Berit M.; Baumann, Pia; Gagliardi, Giovanna

    2011-01-01

    Background. In SBRT of lung tumours no established relationship between dose-volume parameters and the incidence of lung toxicity is found. The aim of this study is to compare the LQ model and the universal survival curve (USC) for calculating biologically equivalent doses in SBRT, to see if this will improve knowledge of this relationship. Material and methods. Toxicity data on radiation pneumonitis grade 2 or more (RP2+) from 57 patients were used; 10.5% were diagnosed with RP2+. The lung DVHs were corrected for fractionation (LQ and USC) and analysed with the Lyman-Kutcher-Burman (LKB) model. In the LQ correction α/β = 3 Gy was used, and the USC parameters used were: α/β = 3 Gy, D0 = 1.0 Gy, n = 10, α = 0.206 Gy⁻¹ and dT = 5.8 Gy. In order to understand the relative contribution of different dose levels to the calculated NTCP, the concept of fractional NTCP was used. This might give an insight into the question of whether 'high doses to small volumes' or 'low doses to large volumes' are most important for lung toxicity. Results and Discussion. NTCP analysis with the LKB model using parameters m = 0.4, D50 = 30 Gy resulted, for the volume-dependence parameter n, in n = 0.87 with LQ correction and n = 0.71 with USC correction. Using parameters m = 0.3, D50 = 20 Gy, the results were n = 0.93 with LQ correction and n = 0.83 with USC correction. In SBRT of lung tumours, NTCP modelling of lung toxicity comparing models (LQ, USC) for fractionation correction shows that low doses contribute less and high doses more to the NTCP when using the USC model. Comparing NTCP modelling of SBRT data and data from breast cancer, lung cancer and whole-lung irradiation implies that the response of the lung is treatment specific. More data are however needed in order to have a more reliable modelling
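
    For reference, the LKB machinery used here reduces to a generalized-EUD computation followed by a probit transform, with the DVH doses first converted to equivalent doses in 2-Gy fractions via the LQ model. A minimal sketch (Python; the DVH handling is simplified and the default parameters simply echo one of the fits quoted above):

        import numpy as np
        from scipy.stats import norm

        def eqd2(total_dose, n_fractions, alpha_beta=3.0):
            # LQ fractionation correction to 2-Gy-fraction equivalent dose
            d = total_dose / n_fractions
            return total_dose * (d + alpha_beta) / (2.0 + alpha_beta)

        def lkb_ntcp(dvh_doses, dvh_volumes, n=0.87, m=0.4, d50=30.0):
            # Lyman-Kutcher-Burman NTCP from a differential DVH
            v = dvh_volumes / dvh_volumes.sum()
            geud = np.sum(v * dvh_doses ** (1.0 / n)) ** n     # generalized EUD
            return norm.cdf((geud - d50) / (m * d50))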

  15. Tijeras Arroyo Groundwater Current Conceptual Model and Corrective Measures Evaluation Report - December 2016.

    Energy Technology Data Exchange (ETDEWEB)

    Copland, John R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    This Tijeras Arroyo Groundwater Current Conceptual Model and Corrective Measures Evaluation Report (CCM/CME Report) has been prepared by the U.S. Department of Energy (DOE) and Sandia Corporation (Sandia) to meet requirements under the Sandia National Laboratories-New Mexico (SNL/NM) Compliance Order on Consent (the Consent Order). The Consent Order, entered into by the New Mexico Environment Department (NMED), DOE, and Sandia, became effective on April 29, 2004. The Consent Order identified the Tijeras Arroyo Groundwater (TAG) Area of Concern (AOC) as an area of groundwater contamination requiring further characterization and corrective action. This report presents an updated Conceptual Site Model (CSM) of the TAG AOC that describes the contaminant release sites, the geological and hydrogeological setting, and the distribution and migration of contaminants in the subsurface. The dataset used for this report includes the analytical results from groundwater samples collected through December 2015.

  16. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...

  17. Exports and economic growth in Indonesia's fishery sub-sector: Cointegration and error-correction models

    OpenAIRE

    Sjarif, Indra Nurcahyo; 小谷, 浩示; Lin, Ching-Yang

    2011-01-01

    This paper investigates the causal relationship between fishery exports and economic growth in Indonesia by utilizing cointegration and error-correction models. Using annual data from 1969 to 2005, we find evidence that there exists a long-run relationship as well as bi-directional causality between exports and economic growth in Indonesia's fishery sub-sector. To the best of our knowledge, this is the first research that examines this issue focusing on a natural resource based indu...

  18. BANK CAPITAL AND MACROECONOMIC SHOCKS: A PRINCIPAL COMPONENTS ANALYSIS AND VECTOR ERROR CORRECTION MODEL

    Directory of Open Access Journals (Sweden)

    Christian NZENGUE PEGNET

    2011-07-01

    Full Text Available The recent financial turmoil has clearly highlighted the potential role of financial factors in the amplification of macroeconomic developments and stressed the importance of analyzing the relationship between banks’ balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of shocks in Europe on the basis of banks' balance sheet data. The empirical analysis is carried out through a Principal Component Analysis and a Vector Error Correction Model.

  19. Applying volumetric weather radar data for rainfall runoff modeling: The importance of error correction.

    Science.gov (United States)

    Hazenberg, P.; Leijnse, H.; Uijlenhoet, R.; Delobbe, L.; Weerts, A.; Reggiani, P.

    2009-04-01

    In the current study half a year of volumetric radar data, for the period October 1, 2002 until March 31, 2003, is analyzed; it was sampled at 5-minute intervals by a C-band Doppler radar situated at an elevation of 600 m in the southern Ardennes region, Belgium. During this winter half year most of the rainfall has a stratiform character. Though radar and raingauge will never sample the same amount of rainfall due to differences in sampling strategies, for these stratiform situations differences between the two measuring devices become even larger due to the occurrence of a bright band (the layer where ice particles start to melt, intensifying the radar reflectivity measurement). Under these circumstances the radar overestimates the amount of precipitation, and because in the Ardennes bright bands occur within 1000 meters of the surface, their detrimental effects on the performance of the radar can already be observed at relatively close range (e.g. within 50 km). Although the radar is situated at one of the highest points in the region, clutter is a serious problem very close to the radar. As a result, both nearby and farther away, using uncorrected radar data results in serious errors when estimating the amount of precipitation. This study shows the effect of carefully correcting for these radar errors using volumetric radar data, taking into account the vertical reflectivity profile of the atmosphere and the effects of attenuation, and trying to limit the amount of clutter. After applying these correction algorithms, the overall differences between radar and raingauge are much smaller, which emphasizes the importance of carefully correcting radar rainfall measurements. The next step is to assess the effect of using uncorrected and corrected radar measurements on rainfall-runoff modeling. The 1597 km2 Ourthe catchment lies within 60 km of the radar. Using a lumped hydrological model, serious improvement in simulating observed discharges is found when using corrected radar
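
    As background to the correction steps above, the conversion from radar reflectivity to rain rate relies on a Z-R power law; the crude sketch below (Python) uses the conventional Marshall-Palmer coefficients and adds a simplistic reflectivity cap inside an assumed melting layer, which is a placeholder far simpler than the vertical-profile and attenuation corrections applied in the study:

        import numpy as np

        def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
            # Marshall-Palmer style Z = a * R**b, inverted to rain rate R in mm/h
            z = 10.0 ** (dbz / 10.0)                  # reflectivity factor in mm^6 m^-3
            return (z / a) ** (1.0 / b)

        def cap_bright_band(dbz, beam_height_m, bb_bottom=800.0, bb_top=1200.0, max_dbz=35.0):
            # crude safeguard: limit reflectivity sampled inside an assumed melting layer
            in_bb = (beam_height_m >= bb_bottom) & (beam_height_m <= bb_top)
            return np.where(in_bb, np.minimum(dbz, max_dbz), dbz)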

  20. Standard model treatment of the radiative corrections to the neutron β-decay

    International Nuclear Information System (INIS)

    Bunatyan, G.G.

    2003-01-01

    Starting with the basic Lagrangian of the Standard Model, the radiative corrections to neutron β-decay are obtained. The electroweak interactions are consistently taken into account according to the Weinberg-Salam theory. The effect of the strong quark-quark interactions on neutron β-decay is parametrized by introducing the nucleon electromagnetic form factors and the weak nucleon transition current specified by the form factors g_V, g_A, ... The radiative corrections to the total decay probability W and to the asymmetry coefficient of the momentum distribution A are found to be δW ∼ 8.7% and δA ∼ -2%. The contribution to the radiative corrections due to allowance for the nucleon form factors and the nucleon excited states amounts to a few per cent of the whole value of the radiative corrections. The ambiguity in the description of nucleon compositeness causes uncertainties of ∼0.1% in the evaluation of the neutron β-decay characteristics. For now, this puts bounds on the precision attainable in obtaining the CKM matrix element V_ud and the values g_V, g_A, ... from experimental data processing

  1. Beam-hardening correction in CT based on basis image and TV model

    International Nuclear Information System (INIS)

    Li Qingliang; Yan Bin; Li Lei; Sun Hongsheng; Zhang Feng

    2012-01-01

    In X-ray computed tomography, beam hardening leads to artifacts and reduces image quality. This paper analyzes how beam hardening influences the original projections and, accordingly, puts forward a new beam-hardening correction method based on basis images and a TV model. Firstly, according to the physical characteristics of beam hardening, a preliminary correction model with adjustable parameters is set up. Secondly, the original projections are processed by the correction model using different parameters. Thirdly, the projections are reconstructed to obtain a series of basis images. Finally, the linear combination of the basis images is the final reconstructed image. With the total variation of the final reconstructed image as the cost function, the linear combination coefficients for the basis images are determined iteratively. To verify the effectiveness of the proposed method, experiments were carried out on a real phantom and an industrial part. The results show that the algorithm significantly suppresses cupping and streak artifacts in the CT image. (authors)
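
    A toy sketch of the final combination step described above: the weights of the basis images, constrained here to sum to one, are chosen to minimize the total variation of the weighted sum. The basis images are random placeholders and scipy's Nelder-Mead optimizer merely stands in for whatever iterative scheme the authors used:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
basis_images = rng.normal(size=(3, 64, 64))   # placeholder basis reconstructions

def total_variation(image):
    # anisotropic TV: sum of absolute finite differences in both directions
    return np.abs(np.diff(image, axis=0)).sum() + np.abs(np.diff(image, axis=1)).sum()

def cost(free_weights):
    # last weight is fixed by the sum-to-one constraint
    w = np.append(free_weights, 1.0 - free_weights.sum())
    combined = np.tensordot(w, basis_images, axes=1)
    return total_variation(combined)

res = minimize(cost, np.array([1 / 3, 1 / 3]), method="Nelder-Mead")
weights = np.append(res.x, 1.0 - res.x.sum())
final_image = np.tensordot(weights, basis_images, axes=1)
print("combination weights:", np.round(weights, 3))
```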

  2. Ensemble Kalman filter assimilation of temperature and altimeter data with bias correction and application to seasonal prediction

    Directory of Open Access Journals (Sweden)

    C. L. Keppenne

    2005-01-01

    Full Text Available To compensate for a poorly known geoid, satellite altimeter data is usually analyzed in terms of anomalies from the time mean record. When such anomalies are assimilated into an ocean model, the bias between the climatologies of the model and data is problematic. An ensemble Kalman filter (EnKF) is modified to account for the presence of a forecast-model bias and applied to the assimilation of TOPEX/Poseidon (T/P) altimeter data. The online bias correction (OBC) algorithm uses the same ensemble of model state vectors to estimate biased-error and unbiased-error covariance matrices. Covariance localization is used, but the bias covariances have different localization scales from the unbiased-error covariances, thereby accounting for the fact that the bias in a global ocean model could have much larger spatial scales than the random error. The method is applied to a 27-layer version of the Poseidon global ocean general circulation model with about 30 million state variables. Experiments in which T/P altimeter anomalies are assimilated show that the OBC reduces the RMS observation minus forecast difference for sea-surface height (SSH) over a similar EnKF run in which OBC is not used. Independent in situ temperature observations show that the temperature field is also improved. When the T/P data and in situ temperature data are assimilated in the same run and the configuration of the ensemble at the end of the run is used to initialize the ocean component of the GMAO coupled forecast model, seasonal SSH hindcasts made with the coupled model are generally better than those initialized with optimal interpolation of temperature observations without altimeter data. The analysis of the corresponding sea-surface temperature hindcasts is not as conclusive.
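
    A toy sketch of an ensemble Kalman filter analysis step in which the forecast is first corrected by an online bias estimate, loosely in the spirit of the OBC scheme described above; the dimensions, observation pattern, and absence of localization are all simplifications, and nothing here reproduces the Poseidon configuration:

```python
import numpy as np

rng = np.random.default_rng(3)
n_state, n_ens = 50, 20
obs_idx = np.arange(0, n_state, 5)            # observe every 5th state variable
obs_err = 0.2

X = rng.normal(size=(n_state, n_ens))         # forecast ensemble
bias = 0.1 * rng.normal(size=(n_state, 1))    # current estimate of the forecast bias
y = rng.normal(size=obs_idx.size)             # synthetic observations

def enkf_update(X, y, obs_idx, obs_err, rng):
    """Stochastic EnKF analysis step with perturbed observations."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)
    HX = X[obs_idx, :]
    HA = HX - HX.mean(axis=1, keepdims=True)
    P_yy = HA @ HA.T / (n_ens - 1) + obs_err**2 * np.eye(obs_idx.size)
    P_xy = A @ HA.T / (n_ens - 1)
    K = P_xy @ np.linalg.inv(P_yy)
    Y = y[:, None] + obs_err * rng.normal(size=HX.shape)   # perturbed observations
    return X + K @ (Y - HX)

# bias-aware update: innovations are computed against the bias-corrected forecast;
# a full OBC scheme would also update the bias estimate with its own covariances
X_analysis = enkf_update(X - bias, y, obs_idx, obs_err, rng)
print("analysis ensemble mean (first 5):", np.round(X_analysis.mean(axis=1)[:5], 2))
```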

  3. Forward and Inverse Predictive Model for the Trajectory Tracking Control of a Lower Limb Exoskeleton for Gait Rehabilitation: Simulation modelling analysis

    Science.gov (United States)

    Zakaria, M. A.; Majeed, A. P. P. A.; Taha, Z.; Alim, M. M.; Baarath, K.

    2018-03-01

    The movement of a lower limb exoskeleton requires a reasonably accurate control method to allow for an effective gait therapy session to transpire. Trajectory tracking is a nontrivial passive rehabilitation technique for correcting the motion of the patient's impaired limb. This paper proposes an inverse predictive model that is coupled with the forward kinematics of the exoskeleton to estimate the behaviour of the system. A conventional PID controller is used to drive the joints to the required angles based on the desired input from the inverse predictive model. It was demonstrated through the present study that the inverse predictive model is capable of meeting the trajectory demand with acceptable error tolerance. The findings further suggest the ability of the predictive model of the exoskeleton to predict a correct joint angle command to the system.
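
    As a schematic of the control layer mentioned above, here is a discrete PID loop tracking a desired joint-angle trajectory on a crude first-order joint model; the gains, dynamics, and trajectory are invented for illustration and are unrelated to the exoskeleton in the paper:

```python
import numpy as np

dt, steps = 0.01, 500
kp, ki, kd = 40.0, 5.0, 2.0

t = np.arange(steps) * dt
desired = 0.5 * np.sin(2 * np.pi * 0.5 * t)   # desired joint angle (rad)

angle, velocity = 0.0, 0.0
integral, prev_error = 0.0, 0.0
history = []

for k in range(steps):
    error = desired[k] - angle
    integral += error * dt
    derivative = (error - prev_error) / dt
    torque = kp * error + ki * integral + kd * derivative
    prev_error = error

    # toy joint dynamics: inertia of 0.2 plus viscous damping
    accel = (torque - 0.5 * velocity) / 0.2
    velocity += accel * dt
    angle += velocity * dt
    history.append(angle)

print(f"final tracking error: {abs(desired[-1] - history[-1]):.4f} rad")
```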

  4. Anthropometry-corrected exposure modeling as a method to improve trunk posture assessment with a single inclinometer.

    Science.gov (United States)

    Van Driel, Robin; Trask, Catherine; Johnson, Peter W; Callaghan, Jack P; Koehoorn, Mieke; Teschke, Kay

    2013-01-01

    Measuring trunk posture in the workplace commonly involves subjective observation or self-report methods or the use of costly and time-consuming motion analysis systems (current gold standard). This work compared trunk inclination measurements using a simple data-logging inclinometer with trunk flexion measurements using a motion analysis system, and evaluated adding measures of subject anthropometry to exposure prediction models to improve the agreement between the two methods. Simulated lifting tasks (n=36) were performed by eight participants, and trunk postures were simultaneously measured with each method. There were significant differences between the two methods, with the inclinometer initially explaining 47% of the variance in the motion analysis measurements. However, adding one key anthropometric parameter (lower arm length) to the inclinometer-based trunk flexion prediction model reduced the differences between the two systems and accounted for 79% of the motion analysis method's variance. Although caution must be applied when generalizing lower-arm length as a correction factor, the overall strategy of anthropometric modeling is a novel contribution. In this lifting-based study, by accounting for subject anthropometry, a single, simple data-logging inclinometer shows promise for trunk posture measurement and may have utility in larger-scale field studies where similar types of tasks are performed.
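
    A schematic of the anthropometry-corrected modeling idea: regress the gold-standard measurement on the inclinometer reading alone and then with an added anthropometric covariate, and compare explained variance. The data below are synthetic and the coefficients have no relation to the study's:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
n = 200
inclinometer = rng.uniform(0, 60, n)        # trunk inclination reading (degrees)
lower_arm = rng.normal(28, 2, n)            # anthropometric covariate (cm)
# synthetic "motion analysis" trunk flexion influenced by both variables
flexion = 0.8 * inclinometer + 1.5 * (lower_arm - 28) + rng.normal(0, 3, n)

X1 = inclinometer[:, None]
X2 = np.column_stack([inclinometer, lower_arm])

r2_plain = r2_score(flexion, LinearRegression().fit(X1, flexion).predict(X1))
r2_anthro = r2_score(flexion, LinearRegression().fit(X2, flexion).predict(X2))
print(f"R2 inclinometer only: {r2_plain:.2f}, with anthropometry: {r2_anthro:.2f}")
```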

  5. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present low-level vertical shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions, and thus they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by the co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  6. A theoretical model for predicting the Peak Cutting Force of conical picks

    Directory of Open Access Journals (Sweden)

    Gao Kuidong

    2014-01-01

    Full Text Available In order to predict the PCF (Peak Cutting Force) of a conical pick in the rock cutting process, a theoretical model is established based on elastic fracture mechanics theory. The vertical fracture model of the rock cutting fragment is also established based on the maximum tensile criterion. The relation between the vertical fracture angle and the associated parameters (cutting parameter and the ratio B of rock compressive strength to tensile strength) is obtained by numerical analysis and polynomial regression, and the correctness of the rock vertical fracture model is verified through experiments. The linear regression coefficient between the predicted and experimental PCF is 0.81, and a significance level of less than 0.05 shows that the model for predicting the PCF is correct and reliable. A comparative analysis between the PCF obtained from this model and the Evans model reveals that the result of this prediction model is more reliable and accurate. The results of this work could provide some guidance for studying the rock cutting theory of the conical pick and designing the cutting mechanism.

  7. Comprehensive and critical review of the predictive properties of the various mass models

    International Nuclear Information System (INIS)

    Haustein, P.E.

    1984-01-01

    Since the publication of the 1975 Mass Predictions approximately 300 new atomic masses have been reported. These data come from a variety of experimental studies using diverse techniques and they span a mass range from the lightest isotopes to the very heaviest. It is instructive to compare these data with the 1975 predictions and several others (Moeller and Nix, Monahan, Serduke, Uno and Yamada) which appeared later. Extensive numerical and graphical analyses have been performed to examine the quality of the mass predictions from the various models and to identify features in these models that require correction. In general, there is only a rough correlation between the ability of a particular model to reproduce the measured mass surface which had been used to refine its adjustable parameters and that model's ability to predict correctly the new masses. For some models distinct systematic features appear when the new mass data are plotted as functions of relevant physical variables. Global intercomparisons of all the models are made first, followed by several examples of types of analysis performed with individual mass models

  8. Comparison of methods for transfer of calibration models in near-infrared spectroscopy: a case study based on correcting path length differences using fiber-optic transmittance probes in in-line near-infrared spectroscopy.

    Science.gov (United States)

    Sahni, Narinder Singh; Isaksson, Tomas; Naes, Tormod

    2005-04-01

    This article addresses problems related to transfer of calibration models due to variations in distance between the transmittance fiber-optic probes. The data have been generated using a mixture design and measured at five different probe distances. A number of techniques reported in the literature have been compared. These include multiplicative scatter correction (MSC), path length correction (PLC), finite impulse response (FIR), orthogonal signal correction (OSC), piecewise direct standardization (PDS), and robust calibration. The quality of the predictions was expressed in terms of root mean square error of prediction (RMSEP). Robust calibration gave good calibration transfer results, while the other methods did not give acceptable results.
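
    A small generic sketch of two of the ingredients named above, multiplicative scatter correction (MSC) and the RMSEP metric, applied to synthetic spectra; this is textbook chemometrics code, not the authors' pipeline or data:

```python
import numpy as np

rng = np.random.default_rng(5)
# 20 synthetic spectra of 100 channels, each with a multiplicative scatter factor
spectra = rng.normal(1.0, 0.05, size=(20, 100)) * np.linspace(0.5, 1.5, 20)[:, None]

def msc(spectra, reference=None):
    """Multiplicative scatter correction against a reference (mean) spectrum."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, spec in enumerate(spectra):
        slope, intercept = np.polyfit(ref, spec, deg=1)
        corrected[i] = (spec - intercept) / slope
    return corrected

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

spectra_msc = msc(spectra)
print("RMSEP example:", rmsep([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
```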

  9. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
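
    For illustration, a plain particle swarm optimization (without the natural-selection and simulated-annealing additions of NAPSO) tuning the C and gamma parameters of a scikit-learn SVR by cross-validation; the dataset, swarm settings, and search ranges are arbitrary:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=150, n_features=5, noise=5.0, random_state=0)

def fitness(params):
    """Negative CV mean squared error for an SVR with given log10(C), log10(gamma)."""
    C, gamma = 10.0 ** params[0], 10.0 ** params[1]
    scores = cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3,
                             scoring="neg_mean_squared_error")
    return scores.mean()

rng = np.random.default_rng(6)
n_particles, n_iter = 10, 15
pos = rng.uniform(-3, 3, size=(n_particles, 2))     # particles in log10 space
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print(f"best [log10(C), log10(gamma)]: {gbest}, CV score: {pbest_val.max():.1f}")
```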

  10. Model Predictive Control for an Industrial SAG Mill

    DEFF Research Database (Denmark)

    Ohan, Valeriu; Steinke, Florian; Metzger, Michael

    2012-01-01

    We discuss Model Predictive Control (MPC) based on ARX models and a simple lower order disturbance model. The advantage of this MPC formulation is that it has few tuning parameters and is based on an ARX prediction model that can readily be identified using standard technologies from system identic...
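
    A minimal sketch of identifying an ARX prediction model of the kind mentioned above by ordinary least squares on synthetic input/output data; the model orders and the "true" system are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
u = rng.normal(size=n)                      # input signal (e.g. feed rate), synthetic
y = np.zeros(n)
for k in range(2, n):                       # "true" system generating the data
    y[k] = 1.2 * y[k - 1] - 0.5 * y[k - 2] + 0.8 * u[k - 1] + 0.05 * rng.normal()

# Build the ARX regressor: y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + e[k]
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated [a1, a2, b1]:", np.round(theta, 3))
```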

  11. Uncertainties in spatially aggregated predictions from a logistic regression model

    NARCIS (Netherlands)

    Horssen, P.W. van; Pebesma, E.J.; Schot, P.P.

    2002-01-01

    This paper presents a method to assess the uncertainty of an ecological spatial prediction model which is based on logistic regression models, using data from the interpolation of explanatory predictor variables. The spatial predictions are presented as approximate 95% prediction intervals. The

  12. Dealing with missing predictor values when applying clinical prediction models.

    NARCIS (Netherlands)

    Janssen, K.J.; Vergouwe, Y.; Donders, A.R.T.; Harrell Jr, F.E.; Chen, Q.; Grobbee, D.E.; Moons, K.G.

    2009-01-01

    BACKGROUND: Prediction models combine patient characteristics and test results to predict the presence of a disease or the occurrence of an event in the future. In the event that test results (predictor) are unavailable, a strategy is needed to help users applying a prediction model to deal with

  13. Advanced Corrections for InSAR Using GPS and Numerical Weather Models

    Science.gov (United States)

    Cossu, F.; Foster, J. H.; Amelung, F.; Varugu, B. K.; Businger, S.; Cherubini, T.

    2017-12-01

    We present results from an investigation into the application of numerical weather models for generating tropospheric correction fields for Interferometric Synthetic Aperture Radar (InSAR). We apply the technique to data acquired from a UAVSAR campaign as well as from the CosmoSkyMed satellites. The complex spatial and temporal changes in the atmospheric propagation delay of the radar signal remain the single biggest factor limiting InSAR's potential for hazard monitoring and mitigation. A new generation of InSAR systems is being built and launched, and optimizing the science and hazard applications of these systems requires advanced methodologies to mitigate tropospheric noise. We use the Weather Research and Forecasting (WRF) model to generate 900 m spatial resolution atmospheric models covering the Big Island of Hawaii and an even higher-resolution, 300 m grid over the Mauna Loa and Kilauea volcanoes. By comparing a range of approaches, from the simplest, using reanalyses based on typically available meteorological observations, through to the "kitchen-sink" approach of assimilating all relevant data sets into our custom analyses, we examine the impact of the additional data sets on the atmospheric models and their effectiveness in correcting InSAR data. We focus particularly on the assimilation of information from the more than 60 GPS sites on the island. We ingest zenith tropospheric delay estimates from these sites directly into the WRF analyses, and also perform double-difference tomography using the phase residuals from the GPS processing to robustly incorporate heterogeneous information from the GPS data into the atmospheric models. We assess our performance through comparisons of our atmospheric models with external observations not ingested into the model, and through the effectiveness of the derived phase screens in reducing InSAR variance. Comparison of the InSAR data, our atmospheric analyses, and assessments of the active local and mesoscale

  14. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering its benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we obtain more accurate precipitation predictions with simpler methods than the complex numerical forecasting models, which occupy large computational resources, are time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  15. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences exhibit a nonhomogeneous index trend, a novel grey forecasting model called the NGM (1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM (1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM (1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
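
    For orientation, a sketch of the classical GM(1,1) grey model that NGM-type models extend; this is the textbook GM(1,1), not the proposed NGM (1,1,k,c), and the settlement series below is invented:

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Classical GM(1,1) grey forecast of a short, positive time series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # mean generating sequence
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # developing / grey input coefficients
    k = np.arange(1, len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))  # de-accumulate
    return np.concatenate([[x0[0]], x0_hat])             # fitted values + forecasts

settlement = [12.1, 13.4, 14.2, 14.9, 15.3, 15.8]        # invented settlement data (mm)
print(np.round(gm11_forecast(settlement, horizon=3), 2))
```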

  16. Predictive capabilities of various constitutive models for arterial tissue.

    Science.gov (United States)

    Schroeder, Florian; Polzer, Stanislav; Slažanský, Martin; Man, Vojtěch; Skácel, Pavel

    2018-02-01

    The aim of this study is to validate some constitutive models by assessing their capabilities in describing and predicting the uniaxial and biaxial behavior of porcine aortic tissue. 14 samples from porcine aortas were used to perform 2 uniaxial and 5 biaxial tensile tests. Transversal strains were furthermore recorded for the uniaxial data. The experimental data were fitted by four constitutive models: the Holzapfel-Gasser-Ogden model (HGO), a model based on the generalized structure tensor (GST), the Four-Fiber-Family model (FFF) and the Microfiber model. Fitting was performed to the uniaxial and biaxial data sets separately and the descriptive capabilities of the models were compared. Their predictive capabilities were assessed in two ways. Firstly, each model was fitted to the biaxial data and its accuracy (in terms of R2 and NRMSE) in predicting both uniaxial responses was evaluated. Then this procedure was performed conversely: each model was fitted to both uniaxial tests and its accuracy in predicting the 5 biaxial responses was observed. The descriptive capabilities of all models were excellent. In predicting the uniaxial response from biaxial data, the Microfiber model was the most accurate, while the other models also showed reasonable accuracy. The Microfiber and FFF models were able to reasonably predict biaxial responses from uniaxial data, while the HGO and GST models failed completely in this task. The HGO and GST models are not capable of predicting biaxial arterial wall behavior, while the FFF model is the most robust of the investigated constitutive models. Knowledge of transversal strains in uniaxial tests improves the robustness of constitutive models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Planning for corrective osteotomy of the femoral bone using 3D-modeling. Part I

    Directory of Open Access Journals (Sweden)

    Alexey G Baindurashvili

    2016-09-01

    Full Text Available Introduction. In standard planning for corrective hip osteotomy, a surgical intervention scheme is created on a uniplanar paper medium on the basis of X-ray images. However, uniplanar skiagrams are unable to render the real spatial configuration of the femoral bone. When combining three-dimensional and uniplanar models of the bone, human errors inevitably occur, causing distortion of the preset parameters, which may lead to glaring errors and, as a result, to repeated operations. Aims. To develop a new three-dimensional method for planning and performing corrective osteotomy of the femoral bone using visualizing computer technologies. Materials and methods. A new method of planning for corrective hip osteotomy in children with various hip joint pathologies was developed. We examined the method using 27 patients (aged 5–18 years; 32 hip joints) with congenital and acquired femoral bone deformities. The efficiency of the proposed method was assessed in comparison with uniplanar planning using roentgenograms. Conclusions. Computerized operation planning using three-dimensional modeling improves treatment results by minimizing the likelihood of human errors and increasing planning and surgical intervention accuracy.

  18. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    Science.gov (United States)

    Pastawski, Fernando; Yoshida, Beni; Harlow, Daniel; Preskill, John

    2015-06-01

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in [1].

  19. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    Energy Technology Data Exchange (ETDEWEB)

    Pastawski, Fernando; Yoshida, Beni [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics, California Institute of Technology, 1200 E. California Blvd., Pasadena CA 91125 (United States); Harlow, Daniel [Princeton Center for Theoretical Science, Princeton University, 400 Jadwin Hall, Princeton NJ 08540 (United States); Preskill, John [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics, California Institute of Technology, 1200 E. California Blvd., Pasadena CA 91125 (United States)

    2015-06-23

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in http://dx.doi.org/10.1007/JHEP04(2015)163.

  20. Comparing National Water Model Inundation Predictions with Hydrodynamic Modeling

    Science.gov (United States)

    Egbert, R. J.; Shastry, A.; Aristizabal, F.; Luo, C.

    2017-12-01

    The National Water Model (NWM) simulates the hydrologic cycle and produces streamflow forecasts, runoff, and other variables for 2.7 million reaches along the National Hydrography Dataset for the continental United States. NWM applies Muskingum-Cunge channel routing, which is based on the continuity equation. However, the momentum equation also needs to be considered to obtain better estimates of streamflow and stage in rivers, especially for applications such as flood inundation mapping. The Simulation Program for River NeTworks (SPRNT) is a fully dynamic model for large scale river networks that solves the full nonlinear Saint-Venant equations for 1D flow and stage height in river channel networks with non-uniform bathymetry. For the current work, the steady-state version of the SPRNT model was leveraged. An evaluation of SPRNT's and NWM's abilities to predict inundation was conducted for the record flood of Hurricane Matthew in October 2016 along the Neuse River in North Carolina. This event was known to have been influenced by backwater effects from the hurricane's storm surge. Retrospective NWM discharge predictions were converted to stage using synthetic rating curves. The stages from both models were used to produce flood inundation maps using the Height Above Nearest Drainage (HAND) method, which uses local relative heights to provide a spatial representation of inundation depths. In order to validate the inundation produced by the models, Sentinel-1A synthetic aperture radar data in the VV and VH polarizations along with auxiliary data were used to produce a reference inundation map. A preliminary, binary comparison of the inundation maps to the reference, limited to the five HUC-12 areas of Goldsboro, NC, yielded flood inundation accuracies of 74.68% and 78.37% for NWM and SPRNT, respectively. The differences for all the relevant test statistics including accuracy, true positive rate, true negative rate, and positive predictive value were found

  1. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a "model supplement term" when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapons response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 °C, modeling the decomposition is critical to assessing a weapon's response. In the validation analysis it is indicated that the model tends to "exaggerate" the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model

  2. Reliable software systems via chains of object models with provably correct behavior

    International Nuclear Information System (INIS)

    Yakhnis, A.; Yakhnis, V.

    1996-01-01

    This work addresses the specification and design of reliable safety-critical systems, such as nuclear reactor control systems. Reliability concerns are addressed in complementary fashion by different fields. Reliability engineers build software reliability models, etc. Safety engineers focus on prevention of potential harmful effects of systems on the environment. Software/hardware correctness engineers focus on production of reliable systems on the basis of mathematical proofs. The authors think that correctness may be a crucial guiding issue in the development of reliable safety-critical systems. However, purely formal approaches are not adequate for the task, because they neglect the connection with the informal customer requirements. They alleviate that as follows. First, on the basis of the requirements, they build a model of the system's interactions with the environment, where the system is viewed as a black box. They will provide foundations for automated tools which will (a) demonstrate to the customer that all of the scenarios of system behavior are presented in the model, (b) uncover scenarios not present in the requirements, and (c) uncover inconsistent scenarios. The developers will work with the customer until the black box model no longer possesses scenarios (b) and (c) above. Second, the authors will build a chain of several increasingly detailed models, where the first model is the black box model and the last model serves to automatically generate proven executable code. The behavior of each model will be proved to conform to the behavior of the previous one. They build each model as a cluster of interactive concurrent objects, thus allowing both top-down and bottom-up development

  3. A geometric model of a V-slit Sun sensor correcting for spacecraft wobble

    Science.gov (United States)

    Mcmartin, W. P.; Gambhir, S. S.

    1994-01-01

    A V-Slit sun sensor is body-mounted on a spin-stabilized spacecraft. During injection from a parking or transfer orbit to some final orbit, the spacecraft may not be dynamically balanced. This may result in wobble about the spacecraft spin axis as the spin axis may not be aligned with the spacecraft's axis of symmetry. While the widely used models in Spacecraft Attitude Determination and Control, edited by Wertz, correct for separation, elevation, and azimuthal mounting biases, spacecraft wobble is not taken into consideration. A geometric approach is used to develop a method for measurement of the sun angle which corrects for the magnitude and phase of spacecraft wobble. The algorithm was implemented using a set of standard mathematical routines for spherical geometry on a unit sphere.

  4. Predictive models for moving contact line flows

    Science.gov (United States)

    Rame, Enrique; Garoff, Stephen

    2003-01-01

    Modeling flows with moving contact lines poses the formidable challenge that the usual assumptions of a Newtonian fluid and no-slip condition give rise to a well-known singularity. This singularity prevents one from satisfying the contact angle condition to compute the shape of the fluid-fluid interface, a crucial calculation without which design parameters such as the pressure drop needed to move an immiscible 2-fluid system through a solid matrix cannot be evaluated. Some progress has been made for low Capillary number spreading flows. Combining experimental measurements of fluid-fluid interfaces very near the moving contact line with an analytical expression for the interface shape, we can determine a parameter that forms a boundary condition for the macroscopic interface shape when Ca is much less than 1. This parameter, which plays the role of an "apparent" or macroscopic dynamic contact angle, is shown by the theory to depend on the system geometry through the macroscopic length scale. This theoretically established dependence on geometry allows this parameter to be "transferable" from the geometry of the measurement to any other geometry involving the same material system. Unfortunately this prediction of the theory cannot be tested on Earth.

  5. Developmental prediction model for early alcohol initiation in Dutch adolescents

    NARCIS (Netherlands)

    Geels, L.M.; Vink, J.M.; Beijsterveldt, C.E.M. van; Bartels, M.; Boomsma, D.I.

    2013-01-01

    Objective: Multiple factors predict early alcohol initiation in teenagers. Among these are genetic risk factors, childhood behavioral problems, life events, lifestyle, and family environment. We constructed a developmental prediction model for alcohol initiation below the Dutch legal drinking age

  6. Research of combination model for prediction of the trend of outbreak of hepatitis B

    Directory of Open Access Journals (Sweden)

    Yin-ping CHEN

    2014-03-01

    Full Text Available Objective To establish a combination model of the autoregressive integrated moving average model and grey dynamics (ARIMA-GM) for the hepatitis B incidence rate (1/100 000) in order to predict the trend of hepatitis B outbreaks and provide a scientific basis for early discovery of the infectious disease and for countermeasures to control its spread. Methods The monthly incidence of hepatitis B in Qian'an city, Hebei province, was collected from Jan 2004 to Dec 2012, and an ARIMA model was constructed with SPSS software. The GM (1,1) model was used to correct the residual sequence with a threshold value, and a combined forecasting model was constructed. This combination model was used to predict the monthly incidence rate in this city in 2013. Results The model ARIMA(0,1,1)(0,1,1)12 was established successfully and the residual sequence was a white noise sequence. Then the GM (1,1) model with a threshold of 3 was used to correct its residuals and extract the nonlinear information. The forecasting model met the required precision standards (C=0.673, P=0.877), and its fitting accuracy was basically qualified. The results showed that the MAE and MAPE of the ARIMA-GM combined model were smaller than those of a single model, and the combined model could improve the prediction accuracy. Using the combined model to forecast the incidence of hepatitis B from Jan 2013 to Dec 2013, the overall trend was relatively consistent with that of previous years. Conclusion The ARIMA-GM combined model fits the incidence rate of hepatitis B with greater accuracy than the seasonal ARIMA model. The prediction results can provide a reference for the HBV early warning system. DOI: 10.11855/j.issn.0577-7402.2014.01.12

  7. A simple physical model predicts small exon length variations.

    Directory of Open Access Journals (Sweden)

    2006-04-01

    Full Text Available One of the most common splice variations is small exon length variations caused by the use of alternative donor or acceptor splice sites that are in very close proximity on the pre-mRNA. Among these, three-nucleotide variations at so-called NAGNAG tandem acceptor sites have recently attracted considerable attention, and it has been suggested that these variations are regulated and serve to fine-tune protein forms by the addition or removal of a single amino acid. In this paper we first show that in-frame exon length variations are generally overrepresented and that this overrepresentation can be quantitatively explained by the effect of nonsense-mediated decay. Our analysis allows us to estimate that about 50% of frame-shifted coding transcripts are targeted by nonsense-mediated decay. Second, we show that a simple physical model that assumes that the splicing machinery stochastically binds to nearby splice sites in proportion to the affinities of the sites correctly predicts the relative abundances of different small length variations at both boundaries. Finally, using the same simple physical model, we show that for NAGNAG sites, the difference in affinities of the neighboring sites for the splicing machinery accurately predicts whether splicing will occur only at the first site, splicing will occur only at the second site, or three-nucleotide splice variants are likely to occur. Our analysis thus suggests that small exon length variations are the result of stochastic binding of the spliceosome at neighboring splice sites. Small exon length variations occur when there are nearby alternative splice sites that have similar affinity for the splicing machinery.

  8. Development and Validation of a Risk Model for Prediction of Hazardous Alcohol Consumption in General Practice Attendees: The PredictAL Study

    Science.gov (United States)

    King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I.; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin

    2011-01-01

    Background Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. Methods A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. Results 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedge's g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedge's g of 0.68 (95% CI 0.57, 0.78). Conclusions The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse. PMID:21853028

  9. Enabling full-field physics-based optical proximity correction via dynamic model generation

    Science.gov (United States)

    Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas

    2017-07-01

    As extreme ultraviolet lithography becomes closer to reality for high volume production, its peculiar modeling challenges related to both inter and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of edge placement errors. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.

  10. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.

  11. MODELLING OF DYNAMIC SPEED LIMITS USING THE MODEL PREDICTIVE CONTROL

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    Full Text Available The article considers the issues of traffic management using the intelligent "Car-Road" system (IVHS), which consists of interacting intelligent vehicles (IV) and intelligent roadside controllers. Vehicles are organized in convoys with small distances between them. All vehicles are assumed to be fully automated (throttle control, braking, steering). Approaches are proposed for determining speed limits for traffic on the motorway using model predictive control (MPC). The article proposes an approach to dynamic speed limits that minimizes the downtime of vehicles in traffic.

  12. Evaluation of Ocean Tide Models Used for Jason-2 Altimetry Corrections

    DEFF Research Database (Denmark)

    Fok, H.S.; Baki Iz, H.; Shum, C. K.

    2010-01-01

    It has been more than a decade since the last comprehensive accuracy assessment of global ocean tide models. Here, we conduct an evaluation of the barotropic ocean tide corrections, which were computed using FES2004 and GOT00.2, and other models on the Jason-2 altimetry Geophysical Data Record (GDR...), with a focus on selected coastal regions with energetic ocean dynamics. We compared nine historical and contemporary ocean tide models with pelagic tidal constants and with multiple satellite altimetry mission (T/P, ERS-1/-2, Envisat, GFO, Jason-1/-2) sea level anomalies using variance reduction studies... All accuracy assessment methods show consistent results. We conclude that all the contemporary ocean tide models evaluated have similar performance in the selected coastal regions. However, their accuracies are region-dependent and overall are significantly worse than those in the deep-ocean, which are at the 2...

  13. Enhancing BEM simulations of a stalled wind turbine using a 3D correction model

    Science.gov (United States)

    Bangga, Galih; Hutomo, Go; Syawitri, Taurista; Kusumadewi, Tri; Oktavia, Winda; Sabila, Ahmad; Setiadi, Herlambang; Faisal, Muhamad; Hendranata, Yongki; Lastomo, Dwi; Putra, Louis; Kristiadi, Stefanus; Bumi, Ilmi

    2018-03-01

    Nowadays wind turbine rotors are usually equipped with pitch control mechanisms to avoid deep stall conditions. Despite that, wind turbines often operate under pitch fault situations, causing massive flow separation to occur. Pure Blade Element Momentum (BEM) approaches are not designed for this situation and inaccurate load predictions are already expected. In the present study, BEM predictions are improved through the inclusion of a stall delay model for a wind turbine rotor operating under a pitch fault of -2.3° towards stall. The accuracy of the stall delay model is assessed by comparing the results with available Computational Fluid Dynamics (CFD) simulation data.

  14. Predictability in models of the atmospheric circulation

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error

  15. Supine Lateral Bending Radiographs Predict the Initial In-brace Correction of the Providence Brace in Patients With Adolescent Idiopathic Scoliosis

    DEFF Research Database (Denmark)

    Ohrt-Nissen, Søren; Hallager, Dennis Winge; Gehrchen, Poul Martin

    2016-01-01

    are used to assess curve flexibility in patients undergoing surgical treatment for adolescent idiopathic scoliosis (AIS). A low rate of in-brace correction (IBC) has been associated with a higher risk of curve progression, but to what extent SLBR can be used to predict IBC before initiating bracing...

  16. A model-based scatter artifacts correction for cone beam CT

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Wei; Zhu, Jun; Wang, Luyao [Department of Biomedical Engineering, Huazhong University of Science and Technology, Hubei 430074 (China); Vernekohl, Don; Xing, Lei, E-mail: lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States)

    2016-04-15

    Purpose: Due to the increased axial coverage of multislice computed tomography (CT) and the introduction of flat detectors, the size of x-ray illumination fields has grown dramatically, causing an increase in scatter radiation. For CT imaging, scatter is a significant issue that introduces shading artifacts and streaks, as well as reduced contrast and Hounsfield unit (HU) accuracy. The purpose of this work is to provide a fast and accurate scatter artifact correction algorithm for cone beam CT (CBCT) imaging. Methods: The method starts with an estimation of coarse scatter profiles for a set of CBCT data in either the image domain or the projection domain. A denoising algorithm designed specifically for Poisson signals is then applied to derive the final scatter distribution. Qualitative and quantitative evaluations using thorax and abdomen phantoms with Monte Carlo (MC) simulations, experimental Catphan phantom data, and in vivo human data acquired for clinical image-guided radiation therapy were performed. Scatter correction in both the projection domain and the image domain was conducted, and the influences of the segmentation method, mismatched attenuation coefficients, and spectrum model as well as parameter selection were also investigated. Results: Results show that the proposed algorithm can significantly reduce scatter artifacts and recover the correct HU in either the projection domain or the image domain. For the MC thorax phantom study, four-component segmentation yields the best results, while the results of three-component segmentation are still acceptable. The parameters (iteration number K and weight β) affect the accuracy of the scatter correction, and the results improve as K and β increase. It was found that variations in attenuation coefficient accuracies only slightly impact the performance of the proposed processing. For the Catphan phantom data, the mean value over all pixels in the residual image is reduced from −21.8 to −0.2 HU and 0.7 HU for projection

  17. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    Directory of Open Access Journals (Sweden)

    Mindy M Syfert

    Full Text Available Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and essentialness of sampling bias correction within MaxEnt.

  18. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    Science.gov (United States)

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and essentialness of sampling bias correction within MaxEnt.
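
    One simple way to build the kind of geographical sampling-bias grid described above is to rasterize the density of all collection records as a proxy for sampling effort; the coordinates below are invented, and MaxEnt itself would read such a grid from a raster file, which is not shown here:

```python
import numpy as np

rng = np.random.default_rng(8)
# all georeferenced vascular-plant records (invented lon/lat points), used as a
# proxy for collection effort
lon = rng.normal(172.5, 1.0, size=5000)
lat = rng.normal(-41.0, 1.5, size=5000)

# rasterize record density on a coarse grid covering the study area
lon_edges = np.linspace(166, 179, 131)
lat_edges = np.linspace(-47, -34, 131)
counts, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])

# add a small floor so no cell has zero weight, then normalize;
# higher values indicate more heavily sampled areas
bias_grid = (counts + 1.0) / (counts + 1.0).max()
print("bias grid shape:", bias_grid.shape, "max:", bias_grid.max())
```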

  19. Bias correction for selecting the minimal-error classifier from many machine learning models.

    Science.gov (United States)

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate-size real datasets and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction with smaller variance, and has the additional advantage of extrapolating error estimates to larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier's accuracy. An R package 'MLbias' and all source files are publicly available. tsenglab.biostat.pitt.edu/software.htm. ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
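
    For contrast with the selection bias described above, a sketch of the nested cross-validation baseline using scikit-learn; the data and hyper-parameter grid are illustrative only:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# moderate sample size, many features: the setting where selection bias bites
X, y = make_classification(n_samples=60, n_features=50, n_informative=5,
                           random_state=0)

# inner loop: select the "best" model / hyper-parameters
inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}, cv=3)

# outer loop: estimate the error of the *whole selection procedure*,
# which avoids the optimism of reporting the smallest inner CV error
outer_scores = cross_val_score(inner, X, y, cv=5)
print(f"nested CV accuracy: {outer_scores.mean():.2f} +/- {outer_scores.std():.2f}")
```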

  20. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  1. CORRECTION OF FAULTY LINES IN MUSCLE MODEL, TO BE USED IN 3D BUILDING NETWORK CONSTRUCTION

    Directory of Open Access Journals (Sweden)

    I. R. Karas

    2012-07-01

    Full Text Available This paper describes the usage of the MUSCLE (Multidirectional Scanning for Line Extraction) model for automatic generation of 3D networks in CityGML format from raster floor plans. MUSCLE is a conversion method that was developed to vectorize straight lines from raster images, including floor plans, maps for GIS, architectural drawings, and machine plans. The model allows the user to define specific criteria which are crucial for the vectorization process. Unlike a traditional vectorization process, this model generates straight lines based on a line thinning algorithm, without performing line following-chain coding and vector reduction stages. In this method the nearly vertical lines were obtained by scanning the images horizontally, while the nearly horizontal lines were obtained by scanning the images vertically. In cases where two or more consecutive lines are nearly horizontal or nearly vertical, the raster data become unmanageable and the process generates wrongly vectorized lines. In this situation, to obtain the precise lines, the image with the wrongly vectorized lines is diagonally scanned. By using the MUSCLE model, the network models are topologically structured in CityGML format. After the generation process, it is possible to perform 3D network analysis based on these models. Then, by using software that was designed based on the generated models, a geodatabase of the models can be established. This paper presents the correction application in MUSCLE and explains 3D network construction in detail.

  2. Reconstructing interacting entropy-corrected holographic scalar field models of dark energy in the non-flat universe

    Energy Technology Data Exchange (ETDEWEB)

    Karami, K; Khaledian, M S [Department of Physics, University of Kurdistan, Pasdaran Street, Sanandaj (Iran, Islamic Republic of); Jamil, Mubasher, E-mail: KKarami@uok.ac.ir, E-mail: MS.Khaledian@uok.ac.ir, E-mail: mjamil@camp.nust.edu.pk [Center for Advanced Mathematics and Physics (CAMP), National University of Sciences and Technology (NUST), Islamabad (Pakistan)

    2011-02-15

    Here we consider the entropy-corrected version of the holographic dark energy (DE) model in the non-flat universe. We obtain the equation of state parameter in the presence of interaction between DE and dark matter. Moreover, we reconstruct the potential and the dynamics of the quintessence, tachyon, K-essence and dilaton scalar field models according to the evolutionary behavior of the interacting entropy-corrected holographic DE model.

  3. Precipitation by a regional climate model and bias correction in Europe and South Asia

    Energy Technology Data Exchange (ETDEWEB)

    Dobler, A.; Ahrens, B. [Inst. for Atmospheric and Environmental Sciences, Goethe-Univ., Frankfurt am Main (Germany)

    2008-08-15

    Because coarse-grid global circulation models do not allow for regional estimates of the water balance or trends of extreme precipitation, downscaling of global simulations is necessary to generate regional precipitation. For downscaling, this paper applies the regional climate model CLM as a dynamical downscaling method (DDM) and two statistical downscaling methods (SDMs). Because the SDMs neglect information available to the DDM, and vice versa, a combination of the dynamical and statistical approaches is proposed here. In this combined approach, a simple statistical step is carried out to correct for the regional model biases in the dynamically downscaled simulations. To test the proposed methods, coarse-grid global re-analysis data (ERA40 with ∼1.125° grid spacing) is downscaled in two regions with different climatology and orography: one in South Asia and the other in Europe. All of the methods are tested on daily precipitation with 0.5° grid spacing. The SDMs are generally successful: the standardized root mean square error of rain day intensity is reduced from ERA40's 0.16 to 0.10 in a test area to the west of the European Alps. The CLM simulations perform less well (with a corresponding error of 0.14), but represent a promising approach if the user requires flexibility and independence from observational data. The proposed bias correction of the CLM simulations performs very well in European test areas (better than or at least comparable with the SDMs; i.e., with a corresponding error of 0.07), but fails in South Asia. An investigation of the observed and simulated precipitation climate in the test areas shows a strong dependence of the bias correction performance on sampling statistics (i.e., rain day frequency) and on the robustness of bias estimation. (orig.)
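    The "simple statistical step" is not spelled out in this record; a minimal sketch, under the assumption of month-wise multiplicative scaling of simulated precipitation against observations over a calibration period, could look like this (names and settings are illustrative only):

```python
import numpy as np

def monthly_scaling_factors(obs, sim, months):
    """Multiplicative correction factor per calendar month.

    obs, sim: daily precipitation over a common calibration period;
    months: calendar month (1-12) of each day.
    """
    factors = {}
    for m in range(1, 13):
        sel = months == m
        sim_mean = sim[sel].mean()
        factors[m] = obs[sel].mean() / sim_mean if sim_mean > 0 else 1.0
    return factors

def apply_scaling(sim, months, factors):
    """Apply the month-specific factors to a simulated precipitation series."""
    return np.array([p * factors[m] for p, m in zip(sim, months)])
```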

  4. Estimating Route Choice Models from Stochastically Generated Choice Sets on Large-Scale Networks Correcting for Unequal Sampling Probability

    DEFF Research Database (Denmark)

    Vacca, Alessandro; Prato, Carlo Giacomo; Meloni, Italo

    2015-01-01

    is the dependency of the parameter estimates from the choice set generation technique. Bias introduced in model estimation has been corrected only for the random walk algorithm, which has problematic applicability to large-scale networks. This study proposes a correction term for the sampling probability of routes...

  5. Predicting erectile dysfunction following surgical correction of Peyronie's disease without inflatable penile prosthesis placement: vascular assessment and preoperative risk factors.

    Science.gov (United States)

    Taylor, Frederick L; Abern, Michael R; Levine, Laurence A

    2012-01-01

    Surgical therapy remains the gold standard treatment for Peyronie's Disease (PD). Surgical options include plication, grafting, and placement of inflatable penile prosthesis (IPP). Postoperative erectile dysfunction (ED) is a potential complication for PD surgery without IPP. We present our large series follow-up to evaluate preoperative risk factors for postoperative ED. The aim of this study is to evaluate preoperative risk factors for the development of ED following surgical correction of PD taking into account the degree of curvature, graft size, surgical approach, hypertension, hyperlipidemia, diabetes, smoking history, preoperative use of phosphodiesterase 5 inhibitors (PDE5), and preoperative duplex ultrasound findings including peak systolic and end diastolic velocities and resistive index. We identified 218 men undergoing either tunica albuginea plication (TAP) or partial plaque excision with pericardial grafting for PD following a previously published algorithm between November 1992 and April 2007. Preoperative and postoperative erectile function, curvature characteristics, presence of vascular risk factors, and duplex ultrasound findings were available on 109 patients. Our primary outcome measure is the development of ED after surgery for PD. Ten percent of TAP and 21% of plaque excision with grafting patients developed postoperative ED. Neither curve direction (P = 0.76), graft area (P = 0.78), surgical approach (P = 0.12), chronic hypertension (P = 0.51), hyperlipidemia (P = 0.87), diabetes (P = 0.69), nor smoking history (P = 0.99) were significant predictors of postoperative ED. No combination of risk factors was found to be predictive of postoperative ED. Preoperative use of PDE5 was not a significant predictor of postoperative ED (P = 0.33). Neither peak systolic, end diastolic, nor resistive index were significant predictors of ED (P = 0.28, 0.28, and 0.25, respectively). This long-term follow-up of a large published series suggests that neither

  6. Comments and corrections on 3D modeling studies of locomotor muscle moment arms in archosaurs

    Directory of Open Access Journals (Sweden)

    Karl Bates

    2015-10-01

    Full Text Available In a number of recent studies we used computer modeling to investigate the evolution of muscle leverage (moment arms and function in extant and extinct archosaur lineages (crocodilians, dinosaurs including birds and pterosaurs. These studies sought to quantify the level of disparity and convergence in muscle moment arms during the evolution of bipedal and quadrupedal posture in various independent archosaur lineages, and in doing so further our understanding of changes in anatomy, locomotion and ecology during the group’s >250 million year evolutionary history. Subsequent work by others has led us to re-evaluate our models, which revealed a methodological error that impacted on the results obtained from the abduction–adduction and long-axis rotation moment arms in our published studies. In this paper we present corrected abduction–adduction and long axis rotation moment arms for all our models, and evaluate the impact of this new data on the conclusions of our previous studies. We find that, in general, our newly corrected data differed only slightly from that previously published, with very few qualitative changes in muscle moments (e.g., muscles originally identified as abductors remained abductors. As a result the majority of our previous conclusions regarding the functional evolution of key muscles in these archosaur groups are upheld.

  7. Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data

    Science.gov (United States)

    McNeish, Daniel; Harring, Jeffrey R.

    2017-01-01

    To date, small sample problems with latent growth models (LGMs) have not received the amount of attention in the literature as related mixed-effect models (MEMs). Although many models can be interchangeably framed as a LGM or a MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…

  8. Bias correction of regional climate model simulations for hydrological climate-change impact studies: Review and evaluation of different methods

    Science.gov (United States)

    Teutschbein, Claudia; Seibert, Jan

    2012-08-01

    Despite the increasing use of regional climate model (RCM) simulations in hydrological climate-change impact studies, their application is challenging due to the risk of considerable biases. To deal with these biases, several bias correction methods have been developed recently, ranging from simple scaling to rather sophisticated approaches. This paper provides a review of available bias correction methods and demonstrates how they can be used to correct for deviations in an ensemble of 11 different RCM-simulated temperature and precipitation series. The performance of all methods was assessed in several ways: At first, differently corrected RCM data was compared to observed climate data. The second evaluation was based on the combined influence of corrected RCM-simulated temperature and precipitation on hydrological simulations of monthly mean streamflow as well as spring and autumn flood peaks for five catchments in Sweden under current (1961-1990) climate conditions. Finally, the impact on hydrological simulations based on projected future (2021-2050) climate conditions was compared for the different bias correction methods. Improvement of uncorrected RCM climate variables was achieved with all bias correction approaches. While all methods were able to correct the mean values, there were clear differences in their ability to correct other statistical properties such as standard deviation or percentiles. Simulated streamflow characteristics were sensitive to the quality of driving input data: Simulations driven with bias-corrected RCM variables fitted observed values better than simulations forced with uncorrected RCM climate variables and had more narrow variability bounds.
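    Among the families of methods such reviews typically cover, empirical quantile mapping is one of the more sophisticated options. The sketch below is a generic illustration of the idea, not the paper's specific implementation: simulated values are mapped onto the observed distribution by matching quantiles estimated over a common reference period.

```python
import numpy as np

def quantile_map(sim_future, sim_ref, obs_ref, n_quantiles=100):
    """Empirical quantile mapping of a simulated series onto observations.

    sim_ref and obs_ref cover the same reference period; sim_future is the
    series to be corrected (can also be the reference period itself).
    """
    q = np.linspace(0.0, 1.0, n_quantiles)
    sim_q = np.quantile(sim_ref, q)
    obs_q = np.quantile(obs_ref, q)
    # Locate each value within the simulated reference distribution and map it
    # to the corresponding observed quantile (linear interpolation in between).
    return np.interp(sim_future, sim_q, obs_q)
```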

  9. Regression models for predicting anthropometric measurements of ...

    African Journals Online (AJOL)

    measure anthropometric dimensions to predict difficult-to-measure dimensions required for ergonomic design of school furniture. A total of 143 students aged between 16 and 18 years from eight public secondary schools in Ogbomoso, Nigeria ...

  10. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    direction (σx) had a maximum value of 375MPa (tensile) and minimum value of ... These results shows that the residual stresses obtained by prediction from the finite element method are in fair agreement with the experimental results.

  11. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, specially under scenarios of financial crisis, is known to be complicated. Although there have been many successful......). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...... visualization to improve our understanding of the different attained performances, effectively compiling all the conducted experiments in a meaningful way. We complete our study with an entropy-based analysis that highlights the uncertainty handling properties provided by the GP, crucial for prediction tasks...

  12. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  13. Anatomical Cystocele Recurrence: Development and Internal Validation of a Prediction Model.

    Science.gov (United States)

    Vergeldt, Tineke F M; van Kuijk, Sander M J; Notten, Kim J B; Kluivers, Kirsten B; Weemhoff, Mirjam

    2016-02-01

    To develop a prediction model that estimates the risk of anatomical cystocele recurrence after surgery. The databases of two multicenter prospective cohort studies were combined, and we performed a retrospective secondary analysis of these data. Women undergoing an anterior colporrhaphy without mesh materials and without previous pelvic organ prolapse (POP) surgery filled in a questionnaire, underwent translabial three-dimensional ultrasonography, and underwent staging of POP preoperatively and postoperatively. We developed a prediction model using multivariable logistic regression and internally validated it using standard bootstrapping techniques. The performance of the prediction model was assessed by computing indices of overall performance, discriminative ability, calibration, and its clinical utility by computing test characteristics. Of 287 included women, 149 (51.9%) had anatomical cystocele recurrence. Factors included in the prediction model were assisted delivery, preoperative cystocele stage, number of compartments involved, major levator ani muscle defects, and levator hiatal area during Valsalva. Potential predictors that were excluded after backward elimination because of high P values were age, body mass index, number of vaginal deliveries, and family history of POP. The shrinkage factor resulting from the bootstrap procedure was 0.91. After correction for optimism, Nagelkerke's R² and the Brier score were 0.15 and 0.22, respectively. This indicates satisfactory model fit. The area under the receiver operating characteristic curve of the prediction model was 71.6% (95% confidence interval 65.7-77.5). After correction for optimism, the area under the receiver operating characteristic curve was 69.7%. This prediction model, including history of assisted delivery, preoperative stage, number of compartments, levator defects, and levator hiatus, estimates the risk of anatomical cystocele recurrence.
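    Internal validation by "standard bootstrapping techniques", as referred to above, can be sketched as an optimism correction of the apparent AUC. The code below is a generic scikit-learn illustration under that assumption, not the authors' procedure; X and y are assumed to be NumPy arrays.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def optimism_corrected_auc(X, y, n_boot=200, seed=0):
    """Apparent AUC minus the mean bootstrap optimism (a bootstrap-refitted
    model's AUC on its own bootstrap sample minus its AUC on the original)."""
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:      # skip degenerate resamples
            continue
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)
    return apparent - float(np.mean(optimism))
```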

  14. From Predictive Models to Instructional Policies

    Science.gov (United States)

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…

  15. Prevalence dependent calibration of a predictive model for nasal carriage of methicillin-resistant Staphylococcus aureus.

    Science.gov (United States)

    Elias, Johannes; Heuschmann, Peter U; Schmitt, Corinna; Eckhardt, Frithjof; Boehm, Hartmut; Maier, Sebastian; Kolb-Mäurer, Annette; Riedmiller, Hubertus; Müllges, Wolfgang; Weisser, Christoph; Wunder, Christian; Frosch, Matthias; Vogel, Ulrich

    2013-02-28

    Published models predicting nasal colonization with Methicillin-resistant Staphylococcus aureus among hospital admissions predominantly focus on separation of carriers from non-carriers and are frequently evaluated using measures of discrimination. In contrast, accurate estimation of carriage probability, which may inform decisions regarding treatment and infection control, is rarely assessed. Furthermore, no published models adjust for MRSA prevalence. Using logistic regression, a scoring system (values from 0 to 200) predicting nasal carriage of MRSA was created using a derivation cohort of 3091 individuals admitted to a European tertiary referral center between July 2007 and March 2008. The expected positive predictive value of a rapid diagnostic test (GeneOhm, Becton & Dickinson Co.) was modeled using non-linear regression according to score. Models were validated on a second cohort from the same hospital consisting of 2043 patients admitted between August 2008 and January 2012. Our suggested correction score for prevalence was proportional to the log-transformed odds ratio between cohorts. Calibration before and after correction, i.e. accurate classification into arbitrary strata, was assessed with the Hosmer-Lemeshow test. Treating culture as reference, the rapid diagnostic test had positive predictive values of 64.8% and 54.0% in derivation and internal validation cohorts with prevalences of 2.3% and 1.7%, respectively. In addition to low prevalence, low positive predictive values were due to the high proportion (> 66%) of mecA-negative Staphylococcus aureus among false positive results. Age, nursing home residence, admission through the medical emergency department, and ICD-10-GM admission diagnoses starting with "A" or "J" were associated with MRSA carriage and were thus included in the scoring system, which showed good calibration in predicting probability of carriage and the rapid diagnostic test's expected positive predictive value. Calibration for both
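    A prevalence correction "proportional to the log-transformed odds ratio between cohorts" can be illustrated as a log-odds offset applied to predicted probabilities. The helper below is a hedged sketch of that general idea, not the published scoring rule; the prevalence values in the comment are the ones quoted in the abstract.

```python
import numpy as np

def adjust_for_prevalence(p_pred, prev_model, prev_target):
    """Shift predicted carriage probabilities by the difference in log-odds
    between the prevalence the model was derived at and the target prevalence."""
    logit = lambda p: np.log(p / (1.0 - p))
    offset = logit(prev_target) - logit(prev_model)
    adjusted_logit = logit(np.asarray(p_pred)) + offset
    return 1.0 / (1.0 + np.exp(-adjusted_logit))

# e.g. probabilities derived at 2.3% prevalence applied where prevalence is 1.7%:
# p_new = adjust_for_prevalence(p_old, 0.023, 0.017)
```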

  16. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are frequently faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on an interstate highway with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.

  17. Case study of atmospheric correction on CCD data of HJ-1 satellite based on 6S model

    International Nuclear Information System (INIS)

    Xue, Xiaoiuan; Meng, Oingyan; Xie, Yong; Sun, Zhangli; Wang, Chang; Zhao, Hang

    2014-01-01

    In this study, the atmospheric radiative transfer model 6S was used to simulate the radiative transfer process along the surface-atmosphere-sensor path. An algorithm based on a look-up table (LUT) built with the 6S model was used to correct the HJ-1 CCD image pixel by pixel. Then, the effect of atmospheric correction on CCD data of the HJ-1 satellite was analyzed in terms of the spectral curves and evaluated against the measured reflectance acquired during the HJ-1B satellite overpass; finally, the normalized difference vegetation index (NDVI) before and after atmospheric correction was compared. The results showed: (1) Atmospheric correction of HJ-1 CCD data can reduce the "increase" effect of the atmosphere. (2) Apparent reflectances are higher than the 6S-corrected surface reflectances in bands 1-3, but lower in the near-infrared band; the corrected surface reflectance values agree well with the measured reflectance values. (3) The NDVI increases significantly after atmospheric correction, which indicates that the atmospheric correction can highlight the vegetation information.
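    A LUT-based per-pixel correction of this kind typically stores, for each viewing/illumination geometry and aerosol condition, a small set of coefficients from which surface reflectance is recovered. Assuming the usual 6S-style output coefficients (often labelled xa, xb, xc, interpolated from the LUT for each pixel's conditions), the per-pixel step might look like the sketch below; this is an illustration of the general scheme, not the study's exact algorithm.

```python
def surface_reflectance(radiance, xa, xb, xc):
    """Recover surface reflectance from at-sensor radiance using 6S-style
    correction coefficients (assumed to be interpolated from a look-up table
    for the pixel's geometry and aerosol loading)."""
    y = xa * radiance - xb
    return y / (1.0 + xc * y)
```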

  18. Non-stationary Bias Correction of Monthly CMIP5 Temperature Projections over China using a Residual-based Bagging Tree Model

    Science.gov (United States)

    Yang, T.; Lee, C.

    2017-12-01

    The biases in the Global Circulation Models (GCMs) are crucial for understanding future climate changes. Currently, most bias correction methodologies suffer from the assumption that model bias is stationary. This paper provides a non-stationary bias correction model, termed Residual-based Bagging Tree (RBT) model, to reduce simulation biases and to quantify the contributions of single models. Specifically, the proposed model estimates the residuals between individual models and observations, and takes the differences between observations and the ensemble mean into consideration during the model training process. A case study is conducted for 10 major river basins in Mainland China during different seasons. Results show that the proposed model is capable of providing accurate and stable predictions while including the non-stationarities into the modeling framework. Significant reductions in both bias and root mean squared error are achieved with the proposed RBT model, especially for the central and western parts of China. The proposed RBT model has consistently better performance in reducing biases when compared to the raw ensemble mean, the ensemble mean with simple additive bias correction, and the single best model for different seasons. Furthermore, the contribution of each single GCM in reducing the overall bias is quantified. The single model importance varies between 3.1% and 7.2%. For different future scenarios (RCP 2.6, RCP 4.5, and RCP 8.5), the results from RBT model suggest temperature increases of 1.44 ºC, 2.59 ºC, and 4.71 ºC by the end of the century, respectively, when compared to the average temperature during 1970 - 1999.
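    The residual-learning idea described here can be sketched with an off-the-shelf bagged-tree regressor: the training target is the difference between observations and the multi-model ensemble mean, and the corrected projection adds the predicted residual back. Variable names, shapes and hyperparameters below are assumptions for illustration, not the authors' RBT configuration.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

def fit_residual_bagging(X_models, obs, n_estimators=200, seed=0):
    """Learn the residual between observations and the ensemble mean from the
    individual model outputs (one column of X_models per GCM)."""
    ensemble_mean = X_models.mean(axis=1)
    residual = obs - ensemble_mean
    rbt = BaggingRegressor(n_estimators=n_estimators, random_state=seed)
    rbt.fit(X_models, residual)
    return rbt

def corrected_projection(rbt, X_models_future):
    """Ensemble mean plus the predicted residual for a future period."""
    return X_models_future.mean(axis=1) + rbt.predict(X_models_future)
```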

  19. Visual Predictive Check in Models with Time-Varying Input Function.

    Science.gov (United States)

    Largajolli, Anna; Bertoldo, Alessandra; Campioni, Marco; Cobelli, Claudio

    2015-11-01

    Nonlinear mixed effects models are commonly used modeling techniques in pharmaceutical research, as they enable the characterization of individual profiles together with the population to which the individuals belong. To ensure their correct use, it is fundamental to provide powerful diagnostic tools that can evaluate the predictive performance of the models. The visual predictive check (VPC) is a commonly used tool that helps the user check by visual inspection whether the model is able to reproduce the variability and the main trend of the observed data. However, simulation from the model is not always trivial, for example when using models with a time-varying input function (IF). In this class of models, there is a potential mismatch between each set of simulated parameters and the associated individual IF, which can cause an incorrect profile simulation. We introduce a refinement of the VPC by taking into consideration a correlation term (the Mahalanobis or normalized Euclidean distance) that helps associate the correct IF with the individual set of simulated parameters. We investigate and compare its performance with the standard VPC in models of the glucose and insulin system applied to real and simulated data and in a simulated pharmacokinetic/pharmacodynamic (PK/PD) example. The newly proposed VPC appears to perform better than the standard VPC, especially for models with large variability in the IF, where the probability of simulating incorrect profiles is higher.
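    The matching step described here, associating each simulated parameter set with the most compatible individual input function via a normalized Euclidean distance, might be sketched as follows; array names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def match_if_to_parameters(sim_params, indiv_params):
    """For each simulated parameter vector, return the index of the individual
    whose estimated parameters are closest in normalized Euclidean distance;
    that individual's measured input function is then used for the simulation.

    sim_params:   (n_simulations, n_params) array
    indiv_params: (n_individuals, n_params) array
    """
    scale = indiv_params.std(axis=0)
    scale[scale == 0] = 1.0                  # guard against constant parameters
    z_sim = sim_params / scale
    z_ind = indiv_params / scale
    # pairwise squared distances, shape (n_simulations, n_individuals)
    d2 = ((z_sim[:, None, :] - z_ind[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```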

  20. Off-the-job training for VATS employing anatomically correct lung models.

    Science.gov (United States)

    Obuchi, Toshiro; Imakiire, Takayuki; Miyahara, Sou; Nakashima, Hiroyasu; Hamanaka, Wakako; Yanagisawa, Jun; Hamatake, Daisuke; Shiraishi, Takeshi; Moriyama, Shigeharu; Iwasaki, Akinori

    2012-02-01

    We evaluated our simulated major lung resection employing anatomically correct lung models as "off-the-job training" for video-assisted thoracic surgery trainees. A total of 76 surgeons voluntarily participated in our study. They performed video-assisted thoracic surgical lobectomy employing anatomically correct lung models, which are made of sponges so that vessels and bronchi can be cut using usual surgical techniques with typical forceps. After the simulation surgery, participants answered questionnaires on a visual analogue scale, in terms of their level of interest and the reality of our training method as off-the-job training for trainees. We considered that the closer a score was to 10, the more useful our method would be for training new surgeons. Regarding the appeal or level of interest in this simulation surgery, the mean score was 8.3 of 10, and regarding reality, it was 7.0. The participants could feel some of the real sensations of the surgery and seemed to be satisfied to perform the simulation lobectomy. Our training method is considered to be suitable as an appropriate type of surgical off-the-job training.

  1. Theoretical Model of God: The Key to Correct Exploration of the Universe

    Science.gov (United States)

    Kalanov, Temur Z.

    2007-04-01

    The problem of the correct approach to exploration of the Universe cannot be solved if there is no solution of the problem of existence of God (Creator, Ruler) in science. In this connection, theoretical proof of existence of God is proposed. The theoretical model of God -- as scientific proof of existence of God -- is the consequence of the system of the formulated axioms. The system of the axioms contains, in particular, the following premises: (1) all objects formed (synthesized) by man are characterized by the essential property: namely, divisibility into aspects; (2) objects which can be mentally divided into aspects are objects formed (synthesized); (3) the system ``Universe'' is mentally divided into aspects. Consequently, the Universe represents the system formed (synthesized); (4) the theorem of existence of God (i.e. Absolute, Creator, Ruler) follows from the principle of logical completeness of system of concepts: if the formed (synthesized) system ``Universe'' exists, then God exists as the Absolute, the Creator, the Ruler of essence (i.e. information) and phenomenon (i.e. material objects). Thus, the principle of existence of God -- the content of the theoretical model of God -- must be a starting-point and basis of correct gnosiology and science of 21 century.

  2. A model to predict the beginning of the pollen season

    DEFF Research Database (Denmark)

    Toldam-Andersen, Torben Bo

    1991-01-01

    In order to predict the beginning of the pollen season, a model comprising the Utah phenoclimatography Chill Unit (CU) and ASYMCUR-Growing Degree Hour (GDH) submodels was used to predict the first bloom in Alnus, Ulmus and Betula. The model relates environmental temperatures to rest completion...... and bud development. As the phenological parameter, 14 years of pollen counts were used. The observed dates for the beginning of the pollen seasons were defined from the pollen counts and compared with the model predictions. The CU and GDH submodels were used as: 1. A fixed day model, using only the GDH model...... for fruit trees are generally applicable, and give a reasonable description of the growth processes of other trees. This type of model can therefore be of value in predicting the start of the pollen season. The predicted dates were generally within 3-5 days of the observed. Finally the possibility of frost...
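    A growing-degree-hour accumulation of the kind used in a GDH submodel can be sketched as below. The base and optimum temperatures and the GDH requirement are placeholder values for illustration, not the calibrated parameters of the cited model, and chilling (rest completion) is assumed to have already occurred.

```python
import numpy as np

def growing_degree_hours(hourly_temp, t_base=4.0, t_opt=25.0):
    """Accumulate growing degree hours over one day, capping each hour's
    contribution at the optimum temperature (a simplification of ASYMCUR-GDH)."""
    return np.clip(np.asarray(hourly_temp) - t_base, 0.0, t_opt - t_base).sum()

def predict_first_bloom(dates, daily_hourly_temps, gdh_requirement=6000.0):
    """Return the first date on which the cumulative GDH requirement is met."""
    total = 0.0
    for date, temps in zip(dates, daily_hourly_temps):
        total += growing_degree_hours(temps)
        if total >= gdh_requirement:
            return date
    return None
```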

  3. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches and the development and validation process of risk prediction models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, at present only limited published literature discusses which approach is more accurate for risk prediction model development.

  4. Monthly to seasonal low flow prediction: statistical versus dynamical models

    Science.gov (United States)

    Ionita-Scholz, Monica; Klein, Bastian; Meissner, Dennis; Rademacher, Silke

    2016-04-01

    While the societal and economic impacts of floods are well documented and assessable, the impacts of low flows are less studied and sometimes overlooked. For example, over the western part of Europe, due to intense inland waterway transportation, the economic losses due to low flows are often comparable to those due to floods. In general, the low flow aspect tends to be underestimated by the scientific community. One of the best examples in this respect is the fact that at the European level most countries have an (early) flood alert system, but in many cases no real information regarding the development, evolution and impacts of droughts. Low flows, occurring during dry periods, may result in several types of problems for society and the economy: e.g. lack of water for drinking, irrigation, industrial use and power production, deterioration of water quality, inland waterway transport, agriculture, tourism, issuing and renewing waste disposal permits, and for assessing the impact of prolonged drought on aquatic ecosystems. As such, the ever-increasing demand on water resources calls for a better management, understanding and prediction of the water deficit situation and for more reliable and extended studies regarding the evolution of low flow situations. In order to find an optimized monthly to seasonal forecast procedure for the German waterways, the Federal Institute of Hydrology (BfG) is currently exploring multiple approaches. On the one hand, based on the operational short- to medium-range forecasting chain, existing hydrological models are forced with two different hydro-meteorological inputs: (i) resampled historical meteorology generated by the Ensemble Streamflow Prediction approach and (ii) ensemble (re-) forecasts of ECMWF's global coupled ocean-atmosphere general circulation model, which have to be downscaled and bias corrected before feeding the hydrological models. As a second approach BfG evaluates in cooperation with

  5. Tensor-Based Quality Prediction for Building Model Reconstruction from LIDAR Data and Topographic Map

    Science.gov (United States)

    Lin, B. C.; You, R. J.

    2012-08-01

    A quality prediction method is proposed to evaluate the quality of the automatic reconstruction of building models. In this study, LiDAR data and topographic maps are integrated for building model reconstruction. Hence, data registration is a critical step for data fusion. To improve the efficiency of the data fusion, a robust least squares method is applied to register boundary points extracted from LiDAR data and building outlines obtained from topographic maps. After registration, a quality indicator based on the tensor analysis of residuals is derived in order to evaluate the correctness of the automatic building model reconstruction. Finally, an actual dataset demonstrates the quality of the predictions for automatic model reconstruction. The results show that our method can achieve reliable results and save both time and expense on model reconstruction.

  6. Evaluation of the US Army fallout prediction model

    International Nuclear Information System (INIS)

    Pernick, A.; Levanon, I.

    1987-01-01

    The US Army fallout prediction method was evaluated against an advanced fallout prediction model--SIMFIC (Simplified Fallout Interpretive Code). The danger zone areas of the US Army method were found to be significantly greater (up to a factor of 8) than the areas of corresponding radiation hazard as predicted by SIMFIC. Nonetheless, because the US Army's method predicts danger zone lengths that are commonly shorter than the corresponding hot line distances of SIMFIC, the US Army's method is not reliably conservative

  7. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yieldÐweather data. The models tested were Hanks Model (first and second versions). Stewart Model (first and second versions) and HallÐButcher Model. Three sets of ...

  8. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yieldÐweather data. The models tested were Hanks Model (first and second versions). Stewart Model (first and second versions) and HallÐButcher Model. Three sets of cowpea yield-water use and weather data were collected.

  9. A variable age of onset segregation model for linkage analysis, with correction for ascertainment, applied to glioma

    DEFF Research Database (Denmark)

    Sun, Xiangqing; Vengoechea, Jaime; Elston, Robert

    2012-01-01

    We propose a 2-step model-based approach, with correction for ascertainment, to linkage analysis of a binary trait with variable age of onset and apply it to a set of multiplex pedigrees segregating for adult glioma....

  10. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII) are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...

  11. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of

  12. Ocean wave prediction using numerical and neural network models

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    This paper presents an overview of the development of the numerical wave prediction models and recently used neural networks for ocean wave hindcasting and forecasting. The numerical wave models express the physical concepts of the phenomena...

  13. A Prediction Model of the Capillary Pressure J-Function.

    Directory of Open Access Journals (Sweden)

    W S Xu

    Full Text Available The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model is presented based on a capillary pressure model, and the J-function prediction model is a power function instead of an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative.
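    For reference, the dimensionless J-function is conventionally written in the Leverett form below; the second expression is the power-law dependence on Sw that the abstract proposes, with a and b as fitted constants (notation assumed for illustration).

```latex
J(S_w) \;=\; \frac{P_c(S_w)}{\sigma \cos\theta}\,\sqrt{\frac{k}{\phi}},
\qquad
J(S_w) \;\approx\; a\, S_w^{-b}
```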

  14. Simple Decision-Analytic Functions of the AUC for Ruling Out a Risk Prediction Model and an Added Predictor.

    Science.gov (United States)

    Baker, Stuart G

    2018-02-01

    When using risk prediction models, an important consideration is weighing performance against the cost (monetary and harms) of ascertaining predictors. The minimum test tradeoff (MTT) for ruling out a model is the minimum number of all-predictor ascertainments per correct prediction to yield a positive overall expected utility. The MTT for ruling out an added predictor is the minimum number of added-predictor ascertainments per correct prediction to yield a positive overall expected utility. An approximation to the MTT for ruling out a model is 1 / [P × H(AUC_Model)], where H(AUC) = AUC − {(1 − AUC)/2}^(1/2), AUC is the area under the receiver operating characteristic (ROC) curve, and P is the probability of the predicted event in the target population. An approximation to the MTT for ruling out an added predictor is 1 / [P × {H(AUC_Model 2) − H(AUC_Model 1)}], where Model 2 includes an added predictor relative to Model 1. The latter approximation requires the Tangent Condition that the true positive rate at the point on the ROC curve with a slope of 1 is larger for Model 2 than for Model 1. These approximations are suitable for back-of-the-envelope calculations. For example, in a study predicting the risk of invasive breast cancer, Model 2 adds to the predictors in Model 1 a set of 7 single nucleotide polymorphisms (SNPs). Based on the AUCs and the Tangent Condition, an MTT of 7200 was computed, which indicates that 7200 sets of SNPs are needed for every correct prediction of breast cancer to yield a positive overall expected utility. If ascertaining the SNPs costs $500, this MTT suggests that SNP ascertainment is not likely worthwhile for this risk prediction.
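    The two approximations, as written above, are straightforward to evaluate; the helper below implements them directly, and the usage line uses illustrative inputs rather than the breast cancer study's actual AUCs and prevalence.

```python
def h(auc):
    """H(AUC) = AUC - sqrt((1 - AUC) / 2), as used in the approximations above."""
    return auc - ((1.0 - auc) / 2.0) ** 0.5

def mtt_rule_out_model(auc_model, prevalence):
    """Minimum test tradeoff for ruling out a model."""
    return 1.0 / (prevalence * h(auc_model))

def mtt_rule_out_added_predictor(auc_model2, auc_model1, prevalence):
    """Minimum test tradeoff for ruling out an added predictor.
    Requires the Tangent Condition and AUC_Model2 > AUC_Model1."""
    return 1.0 / (prevalence * (h(auc_model2) - h(auc_model1)))

# Illustrative numbers only (not the breast-cancer study's inputs):
print(mtt_rule_out_added_predictor(0.64, 0.61, 0.01))
```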

  15. Fatigue Modeling via Mammalian Auditory System for Prediction of Noise Induced Hearing Loss.

    Science.gov (United States)

    Sun, Pengfei; Qin, Jun; Campbell, Kathleen

    2015-01-01

    Noise induced hearing loss (NIHL) remains a severe health problem worldwide. Existing noise metrics and models for evaluation of NIHL are limited in predicting gradually developing NIHL (GDHL) caused by high-level occupational noise. In this study, we proposed two auditory fatigue based models, including equal velocity level (EVL) and complex velocity level (CVL), which combine the high-cycle fatigue theory with the mammalian auditory model, to predict GDHL. The mammalian auditory model is introduced by combining the transfer function of the external-middle ear and the triple-path nonlinear (TRNL) filter to obtain velocities of the basilar membrane (BM) in the cochlea. The high-cycle fatigue theory is based on the assumption that GDHL can be considered as a process of long-cycle mechanical fatigue failure of the organ of Corti. Furthermore, a series of chinchilla experimental data are used to validate the effectiveness of the proposed fatigue models. The regression analysis results show that both proposed fatigue models have high correlations with four hearing loss indices. It indicates that the proposed models can accurately predict hearing loss in chinchilla. Results suggest that the CVL model is more accurate compared to the EVL model on prediction of the auditory risk of exposure to hazardous occupational noise.

  16. Fatigue Modeling via Mammalian Auditory System for Prediction of Noise Induced Hearing Loss

    Directory of Open Access Journals (Sweden)

    Pengfei Sun

    2015-01-01

    Full Text Available Noise induced hearing loss (NIHL) remains a severe health problem worldwide. Existing noise metrics and models for evaluation of NIHL are limited in predicting gradually developing NIHL (GDHL) caused by high-level occupational noise. In this study, we proposed two auditory fatigue based models, including equal velocity level (EVL) and complex velocity level (CVL), which combine the high-cycle fatigue theory with the mammalian auditory model, to predict GDHL. The mammalian auditory model is introduced by combining the transfer function of the external-middle ear and the triple-path nonlinear (TRNL) filter to obtain velocities of the basilar membrane (BM) in the cochlea. The high-cycle fatigue theory is based on the assumption that GDHL can be considered as a process of long-cycle mechanical fatigue failure of the organ of Corti. Furthermore, a series of chinchilla experimental data are used to validate the effectiveness of the proposed fatigue models. The regression analysis results show that both proposed fatigue models have high correlations with four hearing loss indices. It indicates that the proposed models can accurately predict hearing loss in chinchilla. Results suggest that the CVL model is more accurate compared to the EVL model on prediction of the auditory risk of exposure to hazardous occupational noise.

  17. A multivariate-based conflict prediction model for a Brazilian freeway.

    Science.gov (United States)

    Caleffi, Felipe; Anzanello, Michel José; Cybis, Helena Beatriz Bettella

    2017-01-01

    Real-time collision risk prediction models relying on traffic data can be useful in dynamic management systems seeking to improve traffic safety. Models have been proposed to predict crash occurrence and collision risk in order to proactively improve safety. This paper presents a multivariate-based framework for selecting variables for a conflict prediction model on the Brazilian BR-290/RS freeway. The Bhattacharyya Distance (BD) and Principal Component Analysis (PCA) are applied to a dataset comprised of variables that potentially help to explain the occurrence of traffic conflicts; the parameters yielded by such multivariate techniques give rise to a variable importance index that guides variable removal for later selection. Next, the selected variables are inserted into a Linear Discriminant Analysis (LDA) model to estimate conflict occurrence. A matched case-control technique is applied using traffic data processed from surveillance cameras at a segment of a Brazilian freeway. Results indicate that the variables that significantly impacted the model are associated with total flow, the difference between the standard deviations of lanes' occupancy, and the speed's coefficient of variation. The model made it possible to assess a characteristic behavior of major Brazilian freeways, identifying the heterogeneity of traffic patterns among lanes that is typical of Brazil and that leads to aggressive maneuvers. Results also indicate that the developed LDA-PCA model outperforms the LDA-BD model. The LDA-PCA model yields average 76% classification accuracy, and average 87% sensitivity (which measures the rate of conflicts correctly predicted). Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Numerical estimate of the finite-size corrections to the free energy of the Sherrington-Kirkpatrick model using Guerra-Toninelli interpolation

    Science.gov (United States)

    Billoire, Alain

    2006-04-01

    I use an interpolation formula, introduced recently by Guerra and Toninelli, in order to prove the existence of the free energy of the Sherrington-Kirkpatrick spin glass model in the infinite volume limit, to investigate numerically the finite-size corrections to the free energy of this model. The results are compatible with a (1/12N)ln(N/N0) behavior at Tc , as predicted by Parisi, Ritort, and Slanina, and a 1/N2/3 behavior below Tc .

  19. Statistical model based gender prediction for targeted NGS clinical panels

    Directory of Open Access Journals (Sweden)

    Palani Kannan Kandavel

    2017-12-01

    A reference test dataset was used to test the model. The sensitivity of gender prediction was increased relative to the current “genotype composition in ChrX” based approach. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score towards the respective gender indicates higher quality of the sequenced data.

  20. comparative analysis of two mathematical models for prediction

    African Journals Online (AJOL)

    Mathematical modeling for the prediction of the compressive strength of sandcrete blocks was performed using statistical analysis of sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of ...

  1. Comparison of predictive models for the early diagnosis of diabetes

    NARCIS (Netherlands)

    M. Jahani (Meysam); M. Mahdavi (Mahdi)

    2016-01-01

    Objectives: This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. Methods: We used memetic algorithms to update weights and to improve

  2. Testing and analysis of internal hardwood log defect prediction models

    Science.gov (United States)

    R. Edward. Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  3. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992–2012. There are six ...

  4. Bayesian variable order Markov models: Towards Bayesian predictive state representations

    NARCIS (Netherlands)

    Dimitrakakis, C.

    2009-01-01

    We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more

  5. Demonstrating the improvement of predictive maturity of a computational model

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M [Los Alamos National Laboratory; Unal, Cetin [Los Alamos National Laboratory; Atamturktur, Huriye S [CLEMSON UNIV.

    2010-01-01

    We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model to predict multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as basis to define a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to Beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.

  6. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of

  7. Refining the committee approach and uncertainty prediction in hydrological modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of

  8. Wind turbine control and model predictive control for uncertain systems

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz

    as disturbance models for controller design. The theoretical study deals with Model Predictive Control (MPC). MPC is an optimal control method which is characterized by the use of a receding prediction horizon. MPC has risen in popularity due to its inherent ability to systematically account for time...

  9. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992–2012. There are six ...

  10. Model predictive control of a 3-DOF helicopter system using ...

    African Journals Online (AJOL)

    ... by simulation, and its performance is compared with that achieved by linear model predictive control (LMPC). Keywords: nonlinear systems, helicopter dynamics, MIMO systems, model predictive control, successive linearization. International Journal of Engineering, Science and Technology, Vol. 2, No. 10, 2010, pp. 9-19 ...

  11. Models for predicting fuel consumption in sagebrush-dominated ecosystems

    Science.gov (United States)

    Clinton S. Wright

    2013-01-01

    Fuel consumption predictions are necessary to accurately estimate or model fire effects, including pollutant emissions during wildland fires. Fuel and environmental measurements on a series of operational prescribed fires were used to develop empirical models for predicting fuel consumption in big sagebrush (Artemisia tridentate Nutt.) ecosystems....

  12. Comparative Analysis of Two Mathematical Models for Prediction of ...

    African Journals Online (AJOL)

    A mathematical modeling for prediction of compressive strength of sandcrete blocks was performed using statistical analysis for the sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of sandcrete ...

  13. LHC phenomenology and higher order electroweak corrections in supersymmetric models with and without R-parity

    International Nuclear Information System (INIS)

    Liebler, Stefan Rainer

    2011-09-01

    The standard model of particle physics suffers from several shortcomings, from the experimental as well as the theoretical point of view: There is no established mechanism for the generation of masses of the fundamental particles, in particular not for the light but massive neutrinos. In addition the standard model does not provide an explanation for the observation of dark matter in the universe. Moreover the gauge couplings of the three forces in the standard model do not unify, implying that a fundamental theory combining all forces cannot be formulated. Within this thesis we address supersymmetric models as answers to these various questions, but instead of focusing on the most simple supersymmetrization of the standard model, we consider basic extensions, namely the next-to-minimal supersymmetric standard model (NMSSM), which contains an additional singlet field, and R-parity violating models. Using lepton number violating terms in the context of bilinear R-parity violation and the μνSSM we are able to explain neutrino physics in an intrinsically supersymmetric way, since those terms induce a mixing between the neutralinos and the neutrinos. This thesis works out the phenomenology of the supersymmetric models under consideration and points out differences from the well-known features of the simplest supersymmetric realization of the standard model. In the case of the R-parity violating models the decays of the light neutralinos can result in displaced vertices. In combination with a light singlet state these displaced vertices might offer a rich phenomenology, such as non-standard Higgs decays into a pair of singlinos decaying with displaced vertices. Within this thesis we present calculations at the next order of perturbation theory, since one-loop corrections provide possibly large contributions to the tree-level masses and decay widths. We are using an on-shell renormalization scheme to calculate the masses of neutralinos and charginos including the neutrinos and leptons in

  14. LHC phenomenology and higher order electroweak corrections in supersymmetric models with and without R-parity

    Energy Technology Data Exchange (ETDEWEB)

    Liebler, Stefan Rainer

    2011-09-15

    The standard model of particle physics suffers from several shortcomings, from the experimental as well as the theoretical point of view: There is no established mechanism for the generation of masses of the fundamental particles, in particular not for the light but massive neutrinos. In addition the standard model does not provide an explanation for the observation of dark matter in the universe. Moreover the gauge couplings of the three forces in the standard model do not unify, implying that a fundamental theory combining all forces cannot be formulated. Within this thesis we address supersymmetric models as answers to these various questions, but instead of focusing on the most simple supersymmetrization of the standard model, we consider basic extensions, namely the next-to-minimal supersymmetric standard model (NMSSM), which contains an additional singlet field, and R-parity violating models. Using lepton number violating terms in the context of bilinear R-parity violation and the μνSSM we are able to explain neutrino physics in an intrinsically supersymmetric way, since those terms induce a mixing between the neutralinos and the neutrinos. This thesis works out the phenomenology of the supersymmetric models under consideration and points out differences from the well-known features of the simplest supersymmetric realization of the standard model. In the case of the R-parity violating models the decays of the light neutralinos can result in displaced vertices. In combination with a light singlet state these displaced vertices might offer a rich phenomenology, such as non-standard Higgs decays into a pair of singlinos decaying with displaced vertices. Within this thesis we present calculations at the next order of perturbation theory, since one-loop corrections provide possibly large contributions to the tree-level masses and decay widths. We are using an on-shell renormalization scheme to calculate the masses of neutralinos and charginos including the neutrinos and

  15. A mathematical model for predicting earthquake occurrence ...

    African Journals Online (AJOL)

    We consider the continental crust under damage. We use observed microseism results from many seismic stations around the world to study the time series of the activity of the continental crust, with a view to predicting the possible time of occurrence of an earthquake. We consider microseism time series ...

  16. Model for predicting the injury severity score.

    Science.gov (United States)

    Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi

    2015-07-01

    The aim was to determine a formula that predicts the injury severity score from parameters obtained in the emergency department on arrival. We reviewed the medical records of trauma patients who were transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products were examined in those patients on arrival. To determine the formula that predicts the injury severity score, multiple linear regression analysis was carried out, with the injury severity score as the dependent variable and the other parameters as candidate explanatory variables. IBM SPSS Statistics 20 was used for the statistical analysis. Statistical significance was set at P < … The Durbin-Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was developed from readily available parameters such as fibrin degradation products and mean blood pressure. This formula is useful because the injury severity score can be estimated easily in the emergency department.
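
    The record above describes a standard multiple linear regression with the injury severity score as the dependent variable. A minimal sketch of that kind of fit follows; the predictor set, the synthetic data and the resulting coefficients are illustrative assumptions, not the study's actual formula.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Illustrative synthetic data: admission parameters vs. injury severity score (ISS).
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "fdp": rng.gamma(2.0, 20.0, n),          # fibrin degradation products (toy units)
    "mean_bp": rng.normal(85.0, 15.0, n),    # mean blood pressure (mmHg)
    "gcs": rng.integers(3, 16, n),           # Glasgow coma scale
})
# Hypothetical linear relationship plus noise, for demonstration only.
df["iss"] = 5 + 0.15 * df["fdp"] - 0.05 * df["mean_bp"] - 0.4 * df["gcs"] + rng.normal(0, 3, n)

X, y = df[["fdp", "mean_bp", "gcs"]], df["iss"]
model = LinearRegression().fit(X, y)
print("coefficients:", dict(zip(X.columns, model.coef_.round(3))))
print("intercept:", round(model.intercept_, 3))
print("R^2:", round(model.score(X, y), 3))
```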

  17. Predicting motor development in very preterm infants at 12 months' corrected age: the role of qualitative magnetic resonance imaging and general movements assessments.

    Science.gov (United States)

    Spittle, Alicia J; Boyd, Roslyn N; Inder, Terrie E; Doyle, Lex W

    2009-02-01

    The objective of this study was to compare the predictive value of qualitative MRI of brain structure at term and general movements assessments at 1 and 3 months' corrected age for motor outcome at 1 year's corrected age in very preterm infants. Eighty-six very preterm infants (…) were included. Motor outcome at 1 year's corrected age was evaluated with the Alberta Infant Motor Scale, the Neuro-Sensory Motor Development Assessment, and the diagnosis of cerebral palsy by the child's pediatrician. At 1 year of age, the Alberta Infant Motor Scale categorized 30 (35%) infants as suspicious/abnormal; the Neuro-Sensory Motor Development Assessment categorized 16 (18%) infants with mild-to-severe motor dysfunction, and 5 (6%) infants were classified with cerebral palsy. White matter abnormality at term and general movements at 1 and 3 months correlated significantly with Alberta Infant Motor Scale and Neuro-Sensory Motor Development Assessment scores at 1 year. White matter abnormality and general movements at 3 months were the only assessments that correlated with cerebral palsy, and all assessments had 100% sensitivity in predicting cerebral palsy. White matter abnormality demonstrated the greatest accuracy in predicting combined motor outcomes, with excellent specificity (>90%); however, its sensitivity was low. General movements assessments at 1 month, on the other hand, had the highest sensitivity (>80%), but their overall accuracy was relatively low. Neuroimaging (MRI) and functional (general movements) examinations have important complementary roles in predicting the motor development of very preterm infants.

  18. Real-time GPS Satellite Clock Error Prediction Based on Non-stationary Time Series Model

    Science.gov (United States)

    Wang, Q.; Xu, G.; Wang, F.

    2009-04-01

    Analysis Centers of the IGS provide precise satellite ephemerides for GPS data post-processing. The accuracy of the orbit products is better than 5 cm, and that of the satellite clock errors (SCE) approaches 0.1 ns (igscb.jpl.nasa.gov), which meets the requirements of precise point positioning (PPP). Due to the 13-day latency of the IGS final products, only the broadcast ephemeris and the IGS ultra-rapid (predicted) products are applicable for real-time PPP (RT-PPP). Therefore, developing an approach to estimate high-precision GPS SCE in real time is of particular importance for RT-PPP. Many studies have been carried out on forecasting the corrections using models such as the Linear Model (LM), Quadratic Polynomial Model (QPM), Quadratic Polynomial Model with Cyclic corrected Terms (QPM+CT), Grey Model (GM) and Kalman Filter Model (KFM). However, the precision of these models is generally at the nanosecond level. The purpose of this study is to develop a method with which SCE forecasting for RT-PPP can reach sub-nanosecond precision. Analysis of the last 8 years of IGS SCE data shows that the prediction precision depends on the stability of the individual satellite clock. The clocks of the most recent GPS satellites (BLOCK IIR and BLOCK IIR-M) are more stable than those of the earlier GPS satellites (BLOCK IIA). For a stable satellite clock, the SCE over the next 6 hours can easily be predicted with the LM. The residuals of unstable satellite clocks are periodic with noise components. Dominant periods of the residuals are found using the Fourier transform and spectrum analysis. For the remaining part of the residuals, an auto-regression model is used to determine their systematic trends. Summarizing this study, a non-stationary time series model is proposed to predict GPS SCE in real time. This prediction model includes a linear term, cyclic correction terms and an auto-regression term, which represent the SCE trend, the cyclic parts and the rest of the errors, respectively.
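
    The three-part structure described above (linear trend, cyclic terms at dominant spectral periods, auto-regression on the remainder) can be sketched as follows. The sampling interval, the synthetic clock series and the AR order are illustrative assumptions, not values from the study.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Illustrative synthetic clock-error series: drift + one periodic term + noise,
# sampled every 30 s for 24 h (magnitudes and periods are toy numbers).
rng = np.random.default_rng(1)
t = np.arange(2880) * 30.0
y = 2e-14 * t + 2e-9 * np.sin(2 * np.pi * t / 43082.0) + rng.normal(0, 2e-10, t.size)

# 1) Linear term: fit and remove the clock drift.
trend = np.polynomial.Polynomial.fit(t, y, 1)
resid = y - trend(t)

# 2) Cyclic terms: take the strongest period from the residual spectrum, fit sinusoids.
freqs = np.fft.rfftfreq(t.size, d=30.0)
spec = np.abs(np.fft.rfft(resid))
f0 = freqs[1:][np.argmax(spec[1:])]
A = np.column_stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
resid2 = resid - A @ coef

# 3) Auto-regression on the remaining errors.
ar = AutoReg(resid2, lags=10).fit()

# Predict the next 6 h (720 epochs) by summing the three components.
steps = 720
t_new = t[-1] + 30.0 * np.arange(1, steps + 1)
ar_part = ar.predict(start=len(resid2), end=len(resid2) + steps - 1)
A_new = np.column_stack([np.sin(2 * np.pi * f0 * t_new), np.cos(2 * np.pi * f0 * t_new)])
forecast = trend(t_new) + A_new @ coef + ar_part
print(forecast[:3])
```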

  19. Econometric models for predicting confusion crop ratios

    Science.gov (United States)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly for winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed the individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for CD/CRD data introduced measurement error into the CD/CRD models.

  20. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript, which shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or a fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, although its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by the fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  1. Correction of the angular dependence of satellite retrieved LST at global scale using parametric models

    Science.gov (United States)

    Ermida, S. L.; Trigo, I. F.; DaCamara, C.; Ghent, D.

    2017-12-01

    Land surface temperature (LST) values retrieved from satellite measurements in the thermal infrared (TIR) may be strongly affected by spatial anisotropy. This effect introduces significant discrepancies among LST estimates from different sensors overlapping in space and time that are not related to uncertainties in the methodologies or input data used. Furthermore, these directional effects deviate LST products from an ideally defined LST, which should represent the ensemble of directional radiometric temperatures of all surface elements within the FOV. Angular effects on LST are here estimated by means of a parametric model of the surface thermal emission, which describes the angular dependence of LST as a function of viewing and illumination geometry. Two models are analyzed consistently to evaluate their performance and to assess their respective potential to correct directional effects on LST for a wide range of surface conditions in terms of tree coverage, vegetation density and surface emissivity. We also propose an optimization of the correction of directional effects through a synergistic use of both models. The models are calibrated using LST data provided by two sensors: MODIS on board NASA's TERRA and AQUA, and SEVIRI on board EUMETSAT's MSG. As shown in our previous feasibility studies, the sampling of illumination and view angles has a high impact on the model parameters. This impact may be mitigated when the sampling size is increased by aggregating pixels with similar surface conditions. Here we propose a methodology where the land surface is stratified by means of a cluster analysis using information on land cover type, fraction of vegetation cover and topography. The models are then adjusted to the LST data corresponding to each cluster. It is shown that the quality of the cluster-based models is very close to that of the pixel-based ones. Furthermore, the reduced number of parameters allows improving the model through the incorporation of a
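
    As a rough illustration of the cluster-then-calibrate step described above, the sketch below stratifies pixels with k-means on surface descriptors and fits, per cluster, a simple kernel-style directional model. The kernel form, the descriptors and the data are assumptions for demonstration and are not the parametric models evaluated in the study.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n = 5000
# Toy surface descriptors: land-cover class, fraction of vegetation cover, slope.
desc = np.column_stack([rng.integers(0, 5, n),
                        rng.uniform(0, 1, n),
                        rng.uniform(0, 30, n)])
view_zen = rng.uniform(0, 60, n)     # view zenith angle (deg)
sun_zen = rng.uniform(0, 70, n)      # solar zenith angle (deg)

# Placeholder directional kernels: one for off-nadir viewing, one for illumination.
phi_v = 1 - np.cos(np.radians(view_zen))
phi_s = np.cos(np.radians(sun_zen))

# Synthetic "observed" LST with a small directional signal, for demonstration only.
lst0 = 300 + 5 * desc[:, 1]
lst_obs = lst0 * (1 - 0.02 * phi_v + 0.01 * phi_s) + rng.normal(0, 0.3, n)

# Stratify by surface conditions, then fit the kernel coefficients per cluster.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(desc)
for k in range(4):
    m = labels == k
    A = np.column_stack([np.ones(m.sum()), phi_v[m], phi_s[m]])
    coef, *_ = np.linalg.lstsq(A, lst_obs[m] / lst0[m], rcond=None)
    print(f"cluster {k}: view coeff {coef[1]:+.4f}, illumination coeff {coef[2]:+.4f}")
```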

  2. PEEX Modelling Platform for Seamless Environmental Prediction

    Science.gov (United States)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

    The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic, the boreal Northern Eurasian regions and China. These challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modelling platform and observation systems in the Pan-Eurasian region and presents the future baselines for coherent and coordinated research infrastructures in the PEEX domain. The PEEX Modelling Platform is characterized by a complex, seamless, integrated Earth System Modelling (ESM) approach, in combination with specific models of different processes and elements of the system acting on different temporal and spatial scales. An ensemble approach is taken to the integration of modelling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modelling, and modelling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modelling. The analyses of the anticipated large volumes of data produced by the available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.

  3. A Model of the Smooth Pursuit Eye Movement with Prediction and Learning

    Directory of Open Access Journals (Sweden)

    Davide Zambrano

    2010-01-01

    Full Text Available Smooth pursuit is one of the five main eye movements in humans, consisting of tracking a steadily moving visual target. Smooth pursuit is a good example of a sensory-motor task that is deeply based on prediction: tracking a visual target is not possible by correcting the error between the eye and the target position or velocity with a feedback loop; it is only possible by predicting the trajectory of the target. This paper presents a model of smooth pursuit based on prediction and learning. It starts from a model of the neuro-physiological system proposed by Shibata and Schaal (Shibata et al., Neural Networks, vol. 18, pp. 213-224, 2005). The learning component added here decreases the prediction time in the case of target dynamics already experienced by the system. In the implementation described here, the convergence time after the learning phase is 0.8 s.

  4. Models Predicting Success of Infertility Treatment: A Systematic Review

    Science.gov (United States)

    Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi

    2016-01-01

    Background: Infertile couples are faced with problems that affect their marital life. Infertility treatment is expensive and time consuming and is sometimes simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because prediction of treatment success is a new need for infertile couples, this paper reviewed previous studies to form a general picture of the applicability of the models. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched based on WHO definitions and MeSH keywords. Papers about prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. The papers covered the years after 1986, and the studies were designed both retrospectively and prospectively. IVF prediction models accounted for the largest share of the papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be applied clinically if it can be evaluated statistically and has good validation for treatment success. To achieve better results, the physician's and the couple's estimation of the treatment success rate should be based on the history, examination and clinical tests. Models must be checked for their theoretical approach and appropriate validation. The advantages of applying prediction models are reduced cost and time, avoidance of painful treatments, assessment of the treatment approach for physicians, and support for decision making by health managers. Careful selection of the approach for designing and using these models is essential. PMID:27141461

  5. Density-corrected models for gas diffusivity and air permeability in unsaturated soil

    DEFF Research Database (Denmark)

    Deepagoda Thuduwe Kankanamge Kelum, Chamindu; Møldrup, Per; Schjønning, Per

    2011-01-01

    profile data (total of 150 undisturbed soil samples) were used to investigate soil type and density effects on the gas transport parameters and for model development. The measurements were within a given range of matric potentials (-10 to -500 cm H2O) typically representing natural field conditions...... in subsurface soil. The data were regrouped into four categories based on compaction (total porosity F 0.4 m3 m-3) and soil texture (volume-based content of clay, silt, and organic matter 15%). The results suggested that soil compaction more than soil type was the major control on gas...... diffusivity and to some extent also on air permeability. We developed a density-corrected (D-C) Dp(e)/Do model as a generalized form of a previous model for Dp/Do at -100 cm H2O of matric potential (Dp,100/Do). The D-C model performed well across soil types and density levels compared with existing models...

  6. Range walk error correction and modeling on Pseudo-random photon counting system

    Science.gov (United States)

    Shen, Shanshan; Chen, Qian; He, Weiji

    2017-08-01

    Signal-to-noise ratio and depth accuracy are modeled for a pseudo-random ranging system with two random processes. The theoretical results developed herein capture the effects of code length and signal energy fluctuation and are shown to agree with Monte Carlo simulation measurements. First, the SNR is developed as a function of the code length. Using Geiger-mode avalanche photodiodes (GMAPDs), a longer code length is proven to reduce the noise effect and improve the SNR. Second, the Cramer-Rao lower bound on range accuracy is derived to justify that a longer code length can bring better range accuracy. Combining the SNR model and the CRLB model, it is shown that the range accuracy can be improved by increasing the code length to reduce the noise-induced error. Third, the Cramer-Rao lower bound on range accuracy is shown to converge to previously published theories, and the Gauss range walk model is introduced to describe range accuracy. Experimental tests also converge to the boundary model presented in this paper. It is shown that the depth error caused by the fluctuation of the number of detected photon counts in the laser echo pulse leads to a depth drift of the Time Point Spread Function (TPSF). Finally, a numerical fitting function is used to determine the relationship between the depth error and the photon counting ratio. The depth error due to different echo energies is calibrated so that the corrected depth accuracy is improved to 1 cm.
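
    The final calibration step, fitting a numerical function between depth error and photon counting ratio and then subtracting it from the measurement, can be sketched as below. The logarithmic fitting form and the synthetic data are assumptions made for illustration; the abstract does not specify the function used.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy calibration data: range-walk (depth) error vs. photon counting ratio.
rng = np.random.default_rng(3)
ratio = np.linspace(0.05, 0.95, 40)
depth_err = 0.04 * np.log1p(8 * ratio) + rng.normal(0, 0.002, ratio.size)  # metres

def walk_model(r, a, b, c):
    # Assumed fitting form; any smooth monotone function could play this role.
    return a * np.log1p(b * r) + c

popt, _ = curve_fit(walk_model, ratio, depth_err, p0=(0.05, 5.0, 0.0))

# Apply the fitted correction to a single (toy) measurement.
measured_depth, measured_ratio = 12.310, 0.6
corrected_depth = measured_depth - walk_model(measured_ratio, *popt)
print(f"fitted params: {popt.round(4)}, corrected depth: {corrected_depth:.3f} m")
```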

  7. Towards a generalized energy prediction model for machine tools.

    Science.gov (United States)

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H; Dornfeld, David A; Helu, Moneer; Rachuri, Sudarsan

    2017-04-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss in this paper a data-driven approach to developing an energy prediction model of a machine tool. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model, with a method to assess uncertainty intervals, to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy efficiency of a machining process.
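
    A minimal sketch of the Gaussian Process regression step with an uncertainty interval, using scikit-learn, is shown below. The feature set and the synthetic power data are illustrative assumptions rather than the monitored NVD1500 data used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(4)
n = 300
# Toy process parameters: spindle speed (rpm), feed rate (mm/min), depth of cut (mm).
X = np.column_stack([rng.uniform(500, 3000, n),
                     rng.uniform(50, 400, n),
                     rng.uniform(0.2, 2.0, n)])
# Toy power draw (W) with noise, for demonstration only.
y = 0.15 * X[:, 0] + 0.8 * X[:, 1] + 120 * X[:, 2] + rng.normal(0, 20, n)

kernel = ConstantKernel() * RBF(length_scale=[1000, 100, 1]) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = np.array([[1500, 200, 1.0]])
mean, std = gp.predict(X_new, return_std=True)
print(f"predicted power: {mean[0]:.1f} W (± {1.96 * std[0]:.1f} W, ~95% interval)")
```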

  8. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is addressed here for two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise for the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks component-wise using a Poisson mixture regression model. PMID:27999611
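
    A minimal EM sketch for a two-component Poisson mixture regression with a log link is given below; it covers only the standard mixture case (no concomitant variables, no zero inflation) and runs on synthetic data, so it illustrates the technique rather than reproducing the paper's model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

# Synthetic counts drawn from two Poisson regression components (illustration only).
rng = np.random.default_rng(5)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])          # intercept + covariate
true_beta = [np.array([0.2, 0.5]), np.array([1.8, -0.3])]
z = rng.random(n) < 0.4                                        # latent component labels
beta_per_row = np.where(z[:, None], true_beta[0], true_beta[1])
y = rng.poisson(np.exp(np.sum(X * beta_per_row, axis=1)))

def weighted_poisson_fit(X, y, w, beta0):
    """Weighted Poisson regression with log link: the M-step of the EM algorithm."""
    def nll(beta):
        eta = X @ beta
        return np.sum(w * (np.exp(eta) - y * eta))
    def grad(beta):
        return X.T @ (w * (np.exp(X @ beta) - y))
    return minimize(nll, beta0, jac=grad, method="BFGS").x

pi, betas = 0.5, [np.zeros(2), np.ones(2)]
for _ in range(50):
    # E-step: responsibility of component 0 for each observation.
    p0 = pi * poisson.pmf(y, np.exp(X @ betas[0]))
    p1 = (1 - pi) * poisson.pmf(y, np.exp(X @ betas[1]))
    r = p0 / (p0 + p1)
    # M-step: update the mixing weight and both sets of regression coefficients.
    pi = r.mean()
    betas = [weighted_poisson_fit(X, y, r, betas[0]),
             weighted_poisson_fit(X, y, 1 - r, betas[1])]

print("mixing weight:", round(float(pi), 3))
print("component coefficients:", [b.round(2) for b in betas])
```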

  9. Comparison of Predictive Models for the Early Diagnosis of Diabetes.

    Science.gov (United States)

    Jahani, Meysam; Mahdavi, Mahdi

    2016-04-01

    This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. We used memetic algorithms to update the weights and to improve the prediction accuracy of the models. In the first step, the optimum values of neural network parameters such as the momentum rate, transfer function, and error function were obtained through trial and error and based on the results of previous studies. In the second step, these optimum parameters were applied to memetic algorithms in order to improve the accuracy of prediction. This preliminary analysis showed that the accuracy of the neural networks is 88%. In the third step, the accuracy of the neural network models was improved using a memetic algorithm, and the resulting model was compared with a logistic regression model using a confusion matrix and a receiver operating characteristic (ROC) curve. The memetic algorithm improved the accuracy from 88.0% to 93.2%. We also found that the memetic algorithm had higher accuracy than the model from the genetic algorithm and the regression model. Among the models, the regression model had the lowest accuracy. For the memetic algorithm model, the sensitivity, specificity, positive predictive value, negative predictive value, and area under the ROC curve are 96.2%, 95.3%, 93.8%, 92.4%, and 0.958, respectively. The results of this study provide a basis for designing a decision support system for risk management and planning of care for individuals at risk of diabetes.
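
    A hedged sketch of the memetic idea (a genetic algorithm combined with a simple local search) applied to the weights of a small neural network is shown below. The data, the network size and the operators are illustrative assumptions, not the study's configuration or data set.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, h = 400, 4, 6                                  # samples, features, hidden units
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 - X[:, 2] > 0).astype(float)   # toy binary outcome
n_w = d * h + h + h + 1                              # total number of network weights

def accuracy(w):
    """Classification accuracy of a tiny one-hidden-layer network with weights w."""
    W1 = w[:d * h].reshape(d, h)
    b1 = w[d * h:d * h + h]
    W2 = w[d * h + h:d * h + 2 * h]
    b2 = w[-1]
    p = 1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2)))
    return np.mean((p > 0.5) == y)

def local_search(w, steps=20, sigma=0.05):
    """Simple hill climbing around one individual: the 'memetic' refinement."""
    best, best_fit = w, accuracy(w)
    for _ in range(steps):
        cand = best + rng.normal(0, sigma, n_w)
        fit = accuracy(cand)
        if fit > best_fit:
            best, best_fit = cand, fit
    return best

pop = rng.normal(0, 0.5, size=(30, n_w))
for _ in range(40):
    fits = np.array([accuracy(ind) for ind in pop])
    order = np.argsort(fits)[::-1]
    parents = pop[order[:10]]                        # truncation selection
    children = []
    while len(children) < len(pop) - 1:
        a, b = parents[rng.integers(0, 10, 2)]
        mask = rng.random(n_w) < 0.5                 # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.02, n_w))  # + mutation
    elite = local_search(pop[order[0]])              # refine the current best
    pop = np.vstack([elite] + children)

print("best training accuracy:", round(max(accuracy(ind) for ind in pop), 3))
```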

  10. Prediction of Monthly Summer Monsoon Rainfall Using Global Climate Models Through Artificial Neural Network Technique

    Science.gov (United States)

    Nair, Archana; Singh, Gurjeet; Mohanty, U. C.

    2018-01-01

    The monthly prediction of summer monsoon rainfall is very challenging because of its complex and chaotic nature. In this study, a non-linear technique known as the Artificial Neural Network (ANN) has been employed on the outputs of Global Climate Models (GCMs) to address the vagaries inherent in monthly rainfall prediction. The GCMs considered in the study are from the International Research Institute (IRI) (2-tier CCM3v6) and the National Centre for Environmental Prediction (Coupled CFSv2). The ANN technique is applied to different ensemble members of the individual GCMs to obtain monthly-scale predictions over India as a whole and over its spatial grid points. In the present study, a double cross-validation and simple randomization technique was used to avoid over-fitting during the training process of the ANN model. The performance of the ANN-predicted rainfall from the GCMs is judged by analysing the absolute error, box plots, percentiles and the difference in linear error in probability space. Results suggest that there is significant improvement in the prediction skill of these GCMs after applying the ANN technique. The performance analysis reveals that the ANN model is able to capture the year-to-year variations in the monsoon months with fairly good accuracy, in extreme years as well. The ANN model is also able to simulate the correct signs of rainfall anomalies over different spatial points of the Indian domain.
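
    The double cross-validation step can be sketched as a nested cross-validation loop, as below; the ensemble-member predictors, the grid of ANN settings and the synthetic data are illustrative assumptions, not the study's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

# Toy stand-in for GCM ensemble members (columns) predicting observed rainfall.
rng = np.random.default_rng(9)
n_years, n_members = 60, 8
X = rng.normal(size=(n_years, n_members))
y = X.mean(axis=1) + rng.normal(0, 0.3, n_years)

inner = KFold(n_splits=4, shuffle=True, random_state=0)   # selects hyper-parameters
outer = KFold(n_splits=5, shuffle=True, random_state=1)   # scores the selected model
search = GridSearchCV(
    MLPRegressor(max_iter=5000, random_state=0),
    param_grid={"hidden_layer_sizes": [(4,), (8,)], "alpha": [1e-3, 1e-1]},
    cv=inner,
)
scores = cross_val_score(search, X, y, cv=outer, scoring="neg_root_mean_squared_error")
print("outer-fold RMSE:", np.round(-scores, 3))
```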

  11. Hyperopic refractive correction by LASIK, SMILE or lenticule reimplantation in a non-human primate model.

    Science.gov (United States)

    Williams, Geraint P; Wu, Benjamin; Liu, Yu Chi; Teo, Ericia; Nyein, Chan L; Peh, Gary; Tan, Donald T; Mehta, Jodhbir S

    2018-01-01

    Hyperopia is a common refractive error, apparent in 25% of Europeans. Treatments include spectacles, contact lenses, laser interventions and surgery, including implantable contact lenses and lens extraction. Laser treatment offers an expedient and reliable means of correcting ametropia. LASIK is well established; however, SMILE (small-incision lenticule extraction) and lenticule implantation (derived from myopic laser correction) are newer options. In this study we compared the outcomes of hyperopic LASIK, SMILE and lenticule re-implantation in a primate model at +2 D and +4 D treatments. While re-implantation showed the greatest regression, broadly comparable refractive results were seen at 3 months with SMILE and LASIK (<1.4 D of intended), with a greater tendency to regression for +2 D lenticule re-implantation. Central corneal thickness showed greater variation at the +2 D treatment, but central thickening was seen with lenticule re-implantation at the +4 D treatment (-17 ± 27 μm LASIK, -45 ± 18 μm SMILE and 28 ± 17 μm re-implantation; p < 0.01), with expected paracentral thinning following SMILE. Although in vivo confocal microscopy appeared to show higher reflectivity in all +4 D treatment groups, there were minimal and inconsistent changes in inflammatory responses between modalities. SMILE and lenticule re-implantation may represent a safe and viable method for treating hyperopia, but further optimization for lower hyperopic treatments is warranted.

  12. Revised Tijeras Arroyo Groundwater Current Conceptual Model and Corrective Measures Evaluation Report - February 2018.

    Energy Technology Data Exchange (ETDEWEB)

    Copland, John R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-02-01

    The U.S. Department of Energy (DOE) and the management and operating (M&O) contractor for Sandia National Laboratories beginning on May 1, 2017, National Technology & Engineering Solutions of Sandia, LLC (NTESS), hereinafter collectively referred to as DOE/NTESS, prepared this Revised Tijeras Arroyo Groundwater Current Conceptual Model (CCM) and Corrective Measures Evaluation (CME) Report, referred to as the Revised CCM/CME Report, to meet requirements under the Sandia National Laboratories-New Mexico (SNL/NM) Compliance Order on Consent (Consent Order). The Consent Order became effective on April 29, 2004. The Consent Order identifies the Tijeras Arroyo Groundwater (TAG) Area of Concern (AOC) as an area of groundwater contamination requiring further characterization and corrective action. In November 2004, the New Mexico Environment Department (NMED) approved the July 2004 CME Work Plan. In April 2005, DOE and the SNL M&O contractor at the time, Sandia Corporation (Sandia), hereinafter collectively referred to as DOE/Sandia, submitted a CME Report, but NMED did not finalize its review of that document. In December 2016, DOE/Sandia submitted a combined and updated CCM/CME Report. NMED issued a disapproval letter in May 2017 that included comments on the December 2016 CCM/CME Report. In August 2017, NMED and DOE/NTESS staff held a meeting to discuss and clarify outstanding issues. This Revised CCM/CME Report addresses (1) the issues presented in the NMED May 2017 disapproval letter and (2) findings from the August 2017 meeting.

  13. Applications of modeling in polymer-property prediction

    Science.gov (United States)

    Case, F. H.

    1996-08-01

    A number of molecular modeling techniques have been applied to the prediction of polymer properties and behavior. Five examples illustrate the range of methodologies used. A simple atomistic simulation of small polymer fragments is used to estimate drug compatibility with a polymer matrix. The analysis of molecular dynamics results from a more complex model of a swollen hydrogel system is used to study gas diffusion in contact lenses. Statistical mechanics is used to predict conformation-dependent properties; an example is the prediction of liquid-crystal formation. The effect of the molecular weight distribution on phase separation in polyalkanes is predicted using thermodynamic models. In some cases, the properties of interest cannot be predicted directly using simulation methods or polymer theory. Correlation methods may then be used to bridge the gap between molecular structure and macroscopic properties. The final example shows how connectivity-index-based quantitative structure-property relationships were used to predict properties of candidate polyimides in an electronics application.

  14. Artificial Neural Network Model for Predicting Compressive

    OpenAIRE

    Salim T. Yousif; Salwa M. Abdullah

    2013-01-01

    Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, estimating the strength of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum...

  15. A hybrid downscaling using statistical correction and high resolution regional climate model information

    Science.gov (United States)

    Wakazuki, Y.

    2017-12-01

    The author presented the outline of a statistical downscaling method using high-resolution regional climate model simulation results, called hybrid downscaling, at the AGU Fall Meeting 2016; this presentation is an extension of that work. Statistical downscaling can be computed at low cost for the various patterns of future climate states that are needed to estimate the uncertainty of regional climate change. However, its estimation accuracy is low in regions where the density of observations is low. On the other hand, dynamical downscaling using a regional climate model (RCM) incurs huge computational costs, although climatological features are well reproduced even in regions where the density of observations is low. A method is proposed to compensate for the disadvantages of the statistical and dynamical downscaling methods in the hybrid downscaling. The downscaling process is divided into horizontal interpolation (HI) and bias correction (BC). In HI, middle-resolution multi-RCM simulation results are interpolated to high-resolution data with grid sizes of 1-2 km. The HI model for climatological variables such as mean precipitation and temperature is learned using the high-resolution dynamical downscaling result. In BC, the correction ratio/difference of the high-resolution data is estimated by a generalized linear model with predictors based on geographical elements. In this method, the spatial distribution is largely influenced by the high-resolution RCM result. The hybrid downscaling model has been applied to regional climate model simulations for the target region around Japan. Multiple future climate simulations had been performed to cover the uncertainty with 24 and 6 km grid sizes; however, only two climate simulations had been calculated with a 2 km grid size because of the huge computational costs. To estimate 2 km grid information, two kinds of hybrid downscaling, in which the 24 and 6 km RCM results were used as middle-resolution RCMs, were performed. The
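
    The two steps described above (HI followed by BC with geographic predictors) can be sketched on synthetic fields as below; the grids, the predictors and the use of an ordinary least-squares fit in place of a full GLM are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
# Coarse (middle-resolution) temperature field on a 25 x 25 toy grid.
xc = np.linspace(0, 100, 25)
coarse_t = 15 + 0.05 * xc[:, None] - 0.03 * xc[None, :] + rng.normal(0, 0.2, (25, 25))

# HI: interpolate the coarse field to a 100 x 100 fine grid.
xf = np.linspace(0, 100, 100)
Xf, Yf = np.meshgrid(xf, xf, indexing="ij")
hi = RegularGridInterpolator((xc, xc), coarse_t)(np.stack([Xf, Yf], axis=-1))

# Toy geographic predictors and a toy "truth" from high-resolution dynamical downscaling:
# the coarse model misses an elevation-dependent cooling.
elevation = 200 + 8 * Yf + rng.normal(0, 30, Xf.shape)
coast_dist = Xf
truth = hi - 0.0065 * (elevation - 200) + rng.normal(0, 0.1, Xf.shape)

# BC: learn the correction (difference) from geographic predictors, then apply it.
features = np.column_stack([elevation.ravel(), coast_dist.ravel()])
bc = LinearRegression().fit(features, (truth - hi).ravel())
downscaled = hi + bc.predict(features).reshape(hi.shape)
print("RMSE before BC:", round(float(np.sqrt(np.mean((hi - truth) ** 2))), 3))
print("RMSE after  BC:", round(float(np.sqrt(np.mean((downscaled - truth) ** 2))), 3))
```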

  16. Short-Term Wind Speed Hybrid Forecasting Model Based on Bias Correcting Study and Its Application

    OpenAIRE

    Mingfei Niu; Shaolong Sun; Jie Wu; Yuanlei Zhang

    2015-01-01

    The accuracy of wind speed forecasting is becoming increasingly important to improve and optimize renewable wind power generation. In particular, reliable short-term wind speed forecasting can enable model predictive control of wind turbines and real-time optimization of wind farm operation. However, due to the strong stochastic nature and dynamic uncertainty of wind speed, the forecasting of wind speed data using different patterns is difficult. This paper proposes a novel combination bias c...

  17. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and selection of the model for the current time provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model prediction framework for the prediction of solar radiation is proposed. The framework starts with the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying patterns, the solar radiation series is first segmented into smaller subsequences, and the subsequences are then grouped into different clusters. For each cluster, an appropriate prediction model is trained. A procedure for pattern identification is developed to identify the pattern that fits the current period. Based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction results of the proposed framework are then compared to other techniques. It is shown that the proposed framework provides superior performance as compared to the others.
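
    A compact sketch of the cluster-then-predict framework follows; the window length, the number of clusters and the per-cluster ridge regressors are illustrative choices, and the synthetic series stands in for the Singapore weather-station data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

# Toy hourly "solar radiation" series: a daily shape modulated by random cloudiness.
rng = np.random.default_rng(8)
hours = np.arange(24 * 200)
series = np.clip(np.sin(np.pi * (hours % 24) / 24) * (0.6 + 0.4 * rng.random(hours.size)), 0, None)

# Segment into windows; each window predicts the value that follows it.
win = 6
Xw = np.array([series[i:i + win] for i in range(len(series) - win)])
yw = series[win:]

# Cluster the windows, then train one predictor per cluster.
clusterer = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Xw)
models = {}
for k in range(4):
    m = clusterer.labels_ == k
    models[k] = Ridge().fit(Xw[m], yw[m])

# Prediction: identify the pattern of the most recent window, then apply its model.
current = series[-win:].reshape(1, -1)
k = int(clusterer.predict(current)[0])
print("cluster:", k, "next-hour prediction:", round(float(models[k].predict(current)[0]), 3))
```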

  18. Validation of water sorption-based clay prediction models for calcareous soils

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Razzaghi, Fatemeh; Moosavi, Ali

    2017-01-01

    The low organic carbon content of the soils and the low fraction of low-activity clay minerals like kaolinite suggested that the clay content under-predictions were due to large CaCO3 contents. Thus, for such water sorption-based models to work accurately for calcareous soils, a correction factor......Soil particle size distribution (PSD), particularly the active clay fraction, mediates soil engineering, agronomic and environmental functions. The tedious and costly nature of traditional methods of determining PSD prompted the development of water sorption-based models for determining the clay...... fraction. The applicability of such models to semi-arid soils with significant amounts of calcium carbonate and/or gypsum is unknown. The objective of this study was to validate three water sorption-based clay prediction models for 30 calcareous soils from Iran and identify the effect of CaCO3

  19. The use of model-test data for predicting full-scale ACV resistance

    Science.gov (United States)

    Forstell, B. G.; Harry, C. W.

    The paper summarizes the analysis of test data obtained with a 1/12-scale model of the Amphibious Assault Landing Craft (AALC) JEFF(B). The analysis was conducted with the objective of improving the accuracy of drag predictions for a JEFF(B)-type air-cushion vehicle (ACV). Model test results, scaled to full scale, are compared with the full-scale drag obtained in various sea states during JEFF(B) trials. From this comparison it is found that the Froude-scaled model rough-water drag data are consistently greater than the full-scale derived drag, with the difference being a function of both wave height and craft forward speed. Results are also presented indicating that Froude scaling of model data obtained in calm water likewise over-predicts the full-scale calm-water drag. An empirical correction developed for use on a JEFF(B)-type craft is discussed.

  20. On the Standard Model prediction for BR(B{s,d} to mu+ mu-)

    CERN Document Server

    Buras, Andrzej J.; Guadagnoli, Diego; Isidori, Gino

    2012-01-01

    The decay Bs to mu+ mu- is one of the milestones of the flavor program at the LHC. We reappraise its Standard Model prediction. First, by analyzing the theoretical rate in the light of its main parametric dependence, we highlight the importance of a complete evaluation of higher-order electroweak corrections, which are at present known only in the large-mt limit and leave a sizable dependence on the definition of the electroweak parameters. Using insights from a complete calculation of such corrections for K to pi bar{nu} nu decays, we find a scheme in which the NLO electroweak corrections are likely to be negligible. Second, we address the issue of the correspondence between the initial and final states detected by the experiments and those used in the theoretical prediction. Particular attention is devoted to the effect of soft radiation, which has not been discussed for this mode in the previous literature and which can lead to O(10%) corrections to the decay rate. The "non-radiative" branching ratio (that is equiva...