WorldWideScience

Sample records for models predicting 12-step

  1. Genomic prediction in a nuclear population of layers using single-step models.

    Science.gov (United States)

    Yan, Yiyuan; Wu, Guiqin; Liu, Aiqiao; Sun, Congjiao; Han, Wenpeng; Li, Guangqi; Yang, Ning

    2018-02-01

    The single-step genomic prediction method has been proposed to improve the accuracy of genomic prediction by incorporating information from both genotyped and ungenotyped animals. The objective of this study was to compare the prediction performance of the single-step model with two-step models and pedigree-based models in a nuclear population of layers. A total of 1,344 chickens across 4 generations were genotyped with a 600 K SNP chip. Four traits were analyzed, i.e., body weight at 28 wk (BW28), egg weight at 28 wk (EW28), laying rate at 38 wk (LR38), and Haugh unit at 36 wk (HU36). In predicting offspring, individuals from generations 1 to 3 were used as training data and females from generation 4 were used as the validation set. The accuracies of breeding values predicted by pedigree BLUP (PBLUP), genomic BLUP (GBLUP), SSGBLUP and single-step blending (SSBlending) were compared for both genotyped and ungenotyped individuals. For genotyped females, GBLUP performed no better than PBLUP because of the small size of the training data, while the 2 single-step models predicted more accurately than the PBLUP model. The average predictive abilities of SSGBLUP and SSBlending were 16.0% and 10.8% higher than the PBLUP model across traits, respectively. Furthermore, the predictive abilities for ungenotyped individuals were also enhanced. The average improvements in predictive ability were 5.9% and 1.5% for the SSGBLUP and SSBlending models, respectively. It was concluded that single-step models, especially the SSGBLUP model, can yield more accurate predictions of genetic merit and are preferable for practical implementation of genomic selection in layers. © 2017 Poultry Science Association Inc.
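
    The single-step (SSGBLUP) idea above hinges on one combined relationship matrix H that replaces the genotyped block of the pedigree matrix A with the genomic matrix G and propagates that change to ungenotyped animals. A minimal numeric sketch of the standard construction (toy 3-animal matrices, not the study's data):

```python
import numpy as np

# Toy pedigree relationship matrix A for 3 animals
# (animals 1 and 2 ungenotyped, animal 3 genotyped).
A = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [0.5, 0.5, 1.0]])

# Toy genomic relationship "matrix" for the genotyped subset (animal 3 only).
G = np.array([[1.05]])

# Partition A: indices 0-1 are ungenotyped (u), index 2 is genotyped (g).
u, g = [0, 1], [2]
A11 = A[np.ix_(u, u)]; A12 = A[np.ix_(u, g)]
A21 = A[np.ix_(g, u)]; A22 = A[np.ix_(g, g)]

# Single-step H matrix: substitute G for A22 and let the pedigree carry
# the genomic information over to the ungenotyped animals.
A22inv = np.linalg.inv(A22)
H11 = A11 + A12 @ A22inv @ (G - A22) @ A22inv @ A21
H12 = A12 @ A22inv @ G
H = np.block([[H11, H12],
              [H12.T, G]])
```

    Because G exceeds A22 here, the ungenotyped animals' relationships are nudged upward as well, which is exactly how ungenotyped individuals gain accuracy in a single-step analysis.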

  2. Data-Based Predictive Control with Multirate Prediction Step

    Science.gov (United States)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes the current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data are corrupted by periodic disturbances, the designed controller will also have a built-in ability to reject these disturbances without needing to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is that computational requirements increase with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with a multirate prediction step. One result is a reduced influence of prediction horizon length and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
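
    The multirate idea can be illustrated with the prediction-window indices alone: sample the near-term part of the horizon densely and the far part sparsely, so far fewer predicted points enter the cost function than a uniformly sampled horizon would require. A sketch (window sizes are illustrative, not the paper's):

```python
import numpy as np

def multirate_indices(n_fine, fine_step, n_coarse, coarse_step):
    # Multirate prediction window: dense sampling early in the horizon,
    # sparse sampling later, reducing the number of predicted points
    # that the receding-horizon cost function must evaluate.
    fine = np.arange(1, n_fine + 1) * fine_step
    coarse = fine[-1] + np.arange(1, n_coarse + 1) * coarse_step
    return np.concatenate([fine, coarse])

idx = multirate_indices(n_fine=5, fine_step=1, n_coarse=3, coarse_step=10)
# A 35-step horizon is covered by 8 predicted points instead of 35,
# with the emphasis on the densely sampled early portion.
```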

  3. Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.

    Science.gov (United States)

    Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna

    2017-11-01

    A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant correlation between the P-loss predictions of the two models; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations with both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  4. Medium- and Long-term Prediction of LOD Change with the Leap-step Autoregressive Model

    Science.gov (United States)

    Liu, Q. B.; Wang, Q. J.; Lei, M. F.

    2015-09-01

    It is known that the accuracies of medium- and long-term prediction of changes in the length of day (LOD) based on the combined least-squares and autoregressive (LS+AR) model decrease gradually. The leap-step autoregressive (LSAR) model is more accurate and stable in medium- and long-term prediction, and it is therefore used to forecast LOD changes in this work. The LOD series from EOP 08 C04, provided by the IERS (International Earth Rotation and Reference Systems Service), is then used to compare the effectiveness of the LSAR and traditional AR methods. The predicted series resulting from the two models show that the prediction accuracy of the LSAR model is better than that of the AR model in medium- and long-term prediction.

  5. Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model

    Science.gov (United States)

    Wang, Qijie

    2015-08-01

    The accuracy of medium- and long-term prediction of length-of-day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence and, in particular, greatly improves the resolution of the signal's low-frequency components; it can therefore improve prediction efficiency. In this work, LSAR is used to forecast LOD change. The LOD series from EOP 08 C04 provided by the IERS is modeled by both the LSAR and AR models, and the results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves markedly, with a maximum gain of around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
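
    One way to realize the leap-step idea is to fit an ordinary AR model to the subsampled (every leap-th) series, which emphasizes the low-frequency structure, and then forecast recursively at leap-day spacing. A minimal sketch on a synthetic LOD-like series (the series, lag order, and leap length are illustrative assumptions, not the papers' settings):

```python
import numpy as np

def fit_ar(x, p):
    # Ordinary least-squares fit of AR(p): x[t+p] ~ sum_i coef[i] * x[t+i].
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def ar_forecast(x, coef, steps):
    # Recursive multi-step forecast with the fitted AR coefficients.
    p = len(coef)
    hist = list(x[-p:])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coef, hist[-p:]))
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

def leap_step_forecast(x, leap, p, steps):
    # LSAR sketch: fit the AR model on the leap-step (subsampled) series,
    # which emphasises the signal's low-frequency components.
    sub = x[::-1][::leap][::-1]   # every leap-th point, ending at the last sample
    coef = fit_ar(sub, p)
    return ar_forecast(sub, coef, steps)  # forecasts at leap-day spacing

# Toy LOD-like series: a slow oscillation plus small noise.
rng = np.random.default_rng(0)
t = np.arange(400)
x = np.sin(2 * np.pi * t / 180) + 0.01 * rng.standard_normal(400)

pred = leap_step_forecast(x, leap=5, p=10, steps=6)
```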

  6. The importance of age composition of 12-step meetings as a moderating factor in the relation between young adults' 12-step participation and abstinence.

    Science.gov (United States)

    Labbe, Allison K; Greene, Claire; Bergman, Brandon G; Hoeppner, Bettina; Kelly, John F

    2013-12-01

    Participation in 12-step mutual-help organizations (MHOs) is a common continuing care recommendation for adults; however, little is known about the effects of MHO participation among young adults (i.e., ages 18-25 years), for whom the typically older age composition at meetings may serve as a barrier to engagement and benefit. This study examined whether the age composition of 12-step meetings moderated the recovery benefits derived from attending MHOs. Young adults (n=302; 18-24 years; 26% female; 94% White) enrolled in a naturalistic study of residential treatment effectiveness were assessed at intake, and 3, 6, and 12 months later, on 12-step attendance, the age composition of attended 12-step groups, and treatment outcome (Percent Days Abstinent [PDA]). Hierarchical linear models (HLM) tested the moderating effect of age composition on PDA concurrently and in lagged models controlling for confounds. A significant three-way interaction between attendance, age composition, and time was detected in the concurrent (p=0.002), but not the lagged (b=0.38, p=0.46), model. Specifically, a similar age composition was helpful early post-treatment among low 12-step attendees, but became detrimental over time. Treatment and other referral agencies might enhance the likelihood of successful remission and recovery among young adults by locating and initially linking such individuals to age-appropriate groups. Once engaged, however, it may be prudent to encourage gradual integration into the broader mixed-age range of 12-step meetings, wherein older members may provide the depth and length of sober experience needed to carry young adults forward into long-term recovery. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. Prediction of Optimal Daily Step Count Achievement from Segmented School Physical Activity

    Directory of Open Access Journals (Sweden)

    Ryan D. Burns

    2015-01-01

    Full Text Available Optimizing physical activity in childhood is needed for prevention of disease and for healthy social and psychological development. There is limited research examining how segmented school physical activity patterns relate to a child achieving optimal physical activity levels. The purpose of this study was to examine the predictive relationship between step counts during specific school segments and achieving optimal school (6,000 steps/day) and daily (12,000 steps/day) step counts in children. Participants included 1,714 school-aged children (mean age = 9.7 ± 1.0 years) recruited across six elementary schools. Physical activity was monitored for one week using pedometers. Generalized linear mixed-effects models were used to determine the adjusted odds ratios (ORs) of achieving both school and daily step count standards for every 1,000 steps taken during each school segment. The school segment that related most strongly to a student achieving 6,000 steps during school hours was afternoon recess (OR = 40.03; P<0.001), and the segment that related most strongly to achieving 12,000 steps for the entire day was lunch recess (OR = 5.03; P<0.001). School segments including lunch and afternoon recess play an important role in optimizing daily physical activity in children.
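
    The reported odds ratios are per 1,000 steps, so a fitted per-step log-odds coefficient β converts as OR = exp(1000·β). For example, a hypothetical coefficient of about 0.00369 per step corresponds to an OR of roughly 40, the order of the afternoon-recess estimate above:

```python
import numpy as np

# A logistic-model coefficient beta is a per-step log-odds increment; the
# abstract reports odds ratios per 1,000 steps, i.e. OR = exp(1000 * beta).
beta = 0.00369                      # hypothetical per-step coefficient
or_per_1000 = np.exp(1000 * beta)   # roughly 40
```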

  8. Bag-of-steps : Predicting lower-limb fracture rehabilitation length

    NARCIS (Netherlands)

    Pla, Albert; López, Beatriz; Nogueira, Cristofor; Mordvaniuk, Natalia; Blokhuis, Taco J.; Holtslag, Herman R.

    2016-01-01

    This paper presents bag-of-steps, a new methodology to predict the rehabilitation length of a patient by monitoring the weight borne on the injured leg and using a predictive model based on the bag-of-words technique. A force sensor is used to monitor and characterize the patient's gait,

  9. Multi-step prediction for influenza outbreak by an adjusted long short-term memory.

    Science.gov (United States)

    Zhang, J; Nawata, K

    2018-05-01

    Influenza results in approximately 3-5 million annual cases of severe illness and 250 000-500 000 deaths. An accurate multi-step-ahead time-series forecasting model is urgently needed to help hospitals dynamically assign beds to influenza patients over the annually varying influenza season, and to help pharmaceutical companies formulate flexible manufacturing plans for a vaccine that differs from year to year. In this study, we evaluated four different multi-step prediction strategies implemented with long short-term memory (LSTM) networks. The results showed that implementing multiple single-output predictions in a six-layer LSTM structure achieved the best accuracy, with low mean absolute percentage errors from two- to 13-step-ahead prediction of US influenza-like illness rates. The adjusted LSTM has thus been applied and refined to perform multi-step-ahead prediction for influenza outbreaks. Hopefully, this modelling methodology can be applied in other countries and thereby help prevent and control influenza worldwide.
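
    The "multiple single-output" strategy above trains a separate predictor for each lead time instead of recursing a one-step model. A sketch of that direct strategy with a plain linear learner standing in for the LSTM (the learner, lag count, and toy series are assumptions, not the paper's setup):

```python
import numpy as np

def make_supervised(series, n_lags, step_ahead):
    # Input windows of n_lags values; target is the value step_ahead points later.
    X, y = [], []
    for i in range(len(series) - n_lags - step_ahead + 1):
        X.append(series[i:i + n_lags])
        y.append(series[i + n_lags + step_ahead - 1])
    return np.array(X), np.array(y)

def fit_direct_models(series, n_lags, horizon):
    # "Multiple single-output" (direct) strategy: one model per lead time.
    models = []
    for h in range(1, horizon + 1):
        X, y = make_supervised(series, n_lags, h)
        X1 = np.column_stack([X, np.ones(len(X))])  # add intercept column
        w, *_ = np.linalg.lstsq(X1, y, rcond=None)
        models.append(w)
    return models

def forecast(models, last_window):
    # Each fitted model predicts its own step of the horizon from the
    # same final input window, so no recursion error accumulates.
    x1 = np.append(last_window, 1.0)
    return np.array([float(x1 @ w) for w in models])

# Toy weekly ILI-like rate: a clean seasonal cycle.
t = np.arange(300)
series = 2.0 + np.sin(2 * np.pi * t / 52)

models = fit_direct_models(series, n_lags=8, horizon=13)
pred = forecast(models, series[-8:])   # 1- to 13-step-ahead forecasts
```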

  10. Effect of time step size and turbulence model on the open water hydrodynamic performance prediction of contra-rotating propellers

    Science.gov (United States)

    Wang, Zhan-zhi; Xiong, Ying

    2013-04-01

    A growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise performance and low hull vibration. Compared with the single-screw system, open water performance prediction is more difficult because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS with a sliding mesh method, considering the effects of computational time step size and turbulence model. A validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R&D Center. Comparison with the experimental data shows that RANS with the sliding mesh method and the SST k-ω turbulence model has good precision in the open water performance prediction of contra-rotating propellers, and a small time step size can improve the level of accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.

  11. A two step Bayesian approach for genomic prediction of breeding values.

    Science.gov (United States)

    Shariati, Mohammad M; Sørensen, Peter; Janss, Luc

    2012-05-21

    In genomic models that assign an individual variance to each marker, the contribution of one marker to the posterior distribution of the marker variance is only one degree of freedom (df), which introduces many variance parameters with only little information per variance parameter. A better alternative could be to form clusters of markers with similar effects, where markers in a cluster share a common variance; the influence of a marker group of size p on the posterior distribution of the marker variances is then p df. The simulated data from the 15th QTL-MAS workshop were analyzed such that SNP markers were ranked based on their effects and markers with similar estimated effects were grouped together. In step 1, all markers with a minor allele frequency above 0.01 were included in a SNP-BLUP prediction model. In step 2, markers were ranked based on their estimated variance on the trait in step 1, and every 150 markers were assigned to one group with a common variance. In further analyses, subsets of the 1500 and 450 markers with the largest effects in step 2 were kept in the prediction model. Grouping markers outperformed the SNP-BLUP model in terms of accuracy of predicted breeding values. However, the accuracies of predicted breeding values were lower than those of Bayesian methods with marker-specific variances. Grouping markers is less flexible than allowing each marker a specific variance, but grouping increases the power to estimate marker variances. Prior knowledge of the genetic architecture of the trait is necessary for clustering markers and for appropriate prior parameterization.
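
    The step-2 grouping can be sketched directly: rank markers by their squared estimated effects and give each consecutive block of 150 ranked markers one common variance class. The effect estimates below are simulated placeholders standing in for the step-1 SNP-BLUP output:

```python
import numpy as np

rng = np.random.default_rng(1)

n_markers = 1500
# Stand-in for step 1: per-marker effect estimates (simulated here; in the
# paper these come from the SNP-BLUP fit of all markers with MAF > 0.01).
effects = rng.standard_normal(n_markers) * rng.choice(
    [0.05, 1.0], n_markers, p=[0.9, 0.1])

# Step 2: rank markers by estimated variance contribution and assign each
# consecutive block of 150 ranked markers to one common-variance group.
group_size = 150
order = np.argsort(-effects**2)               # largest effects first
group = np.empty(n_markers, dtype=int)
group[order] = np.arange(n_markers) // group_size

# One common variance per group, e.g. the mean squared effect in the group.
group_var = np.array([np.mean(effects[group == g] ** 2)
                      for g in range(n_markers // group_size)])
```

    Each group now contributes 150 df to its shared variance parameter instead of 1 df per marker, which is the gain in estimation power the abstract describes.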

  12. Strengthening the working alliance through a clinician's familiarity with the 12-step approach.

    Science.gov (United States)

    Dennis, Cory B; Roland, Brian D; Loneck, Barry M

    2018-01-01

    The working alliance plays an important role in the substance use disorder treatment process. Many substance use disorder treatment providers incorporate the 12-Step approach to recovery into treatment. With the 12-Step approach known among many clients and clinicians, it may well factor into the therapeutic relationship. We investigated how, from the perspective of clients, a clinician's level of familiarity with and in-session time spent on the 12-Step approach might affect the working alliance between clients and clinicians, including possible differences based on a clinician's recovery status. We conducted a secondary study using data from 180 clients and 31 clinicians. Approximately 81% of client participants were male, and approximately 65% of clinician participants were female. We analyzed data with Stata using a population-averaged model. From the perspective of clients with a substance use disorder, clinicians' familiarity with the 12-Step approach has a positive relationship with the working alliance. The client-estimated amount of in-session time spent on the 12-Step approach did not have a statistically significant effect on ratings of the working alliance. A clinician's recovery status did not moderate the relationship between 12-Step familiarity and the working alliance. These results suggest that clinicians can influence, in part, how their clients perceive the working alliance by being familiar with the 12-Step approach. This might be particularly salient for clinicians who provide substance use disorder treatment at agencies that incorporate, on some level, the 12-Step approach to recovery.

  13. Factors affecting GEBV accuracy with single-step Bayesian models.

    Science.gov (United States)

    Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng

    2018-01-01

    A single-step approach to genomic prediction was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more strongly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed a clear advantage over SSGBLUP in the 5- and 50-QTL scenarios, whereas the SS-BayesB model obtained the lowest accuracy in the 500-QTL scenario. The SS-BayesA model was the most efficient and robust across all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait was controlled by fewer QTL.

  14. Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.

    Science.gov (United States)

    Ouyang, Yicun; Yin, Hujun

    2018-05-01

    Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently one-step models, that is, they predict one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and the accumulation of uncertainty or error. The main existing approaches, iterative and independent, either apply a one-step model recursively or treat each multi-step horizon as an independent modeling task; both generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented into various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in component AR models of various prediction horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
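
    The segmentation step of the VLM approach can be sketched as building one training set per forecasting horizon, with window length equal to the input lags plus that horizon, so each component model sees the dependencies across its whole horizon (lag count and horizons below are illustrative, not the paper's):

```python
import numpy as np

def varied_length_segments(series, horizons, n_lags):
    # One training set per horizon: each window covers the n_lags inputs
    # plus the full horizon, preserving dependencies within the horizon
    # (sketch of the varied-length segmentation idea).
    datasets = {}
    for h in horizons:
        L = n_lags + h
        segs = np.array([series[i:i + L] for i in range(len(series) - L + 1)])
        datasets[h] = (segs[:, :n_lags], segs[:, n_lags:])  # inputs, targets
    return datasets

t = np.arange(120)
series = np.sin(2 * np.pi * t / 30)
data = varied_length_segments(series, horizons=[1, 5, 10], n_lags=6)
```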

  15. Comparison on genomic predictions using GBLUP models and two single-step blending methods with different relationship matrices in the Nordic Holstein population

    DEFF Research Database (Denmark)

    Gao, Hongding; Christensen, Ole Fredslund; Madsen, Per

    2012-01-01

    Background A single-step blending approach allows genomic prediction using information from genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 … 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted …

  16. Internal rotation for predicting conformational population of 1,2-difluoroethane and 1,2-dichloroethane

    Energy Technology Data Exchange (ETDEWEB)

    Venâncio, Mateus F. [Laboratório de Química Computacional e Modelagem Molecular, Departamento de Química, ICEx, Universidade Federal de Minas Gerais, Campus Universitário, 31.270-901 Belo Horizonte, MG (Brazil); Dos Santos, Hélio F. [Núcleo de Estudos em Química Computacional (NEQC), Departamento de Química, ICE, Universidade Federal de Juiz de Fora (UFJF), Campus Universitário, Martelos, Juiz de Fora, MG 36036-330 (Brazil); De Almeida, Wagner B., E-mail: wbdealmeida@gmail.com [Laboratório de Química Computacional (LQC), Departamento de Química Inorgânica, Instituto de Química, Universidade Federal Fluminense, Campus do Valonguinho, Centro, Niterói, RJ CEP: 24020-141 (Brazil)

    2016-06-15

    Highlights: • Contribution of internal rotation to Gibbs free energy estimated using the quantum pendulum model. • Theoretical prediction of the conformational populations of 1,2-difluoroethane and 1,2-dichloroethane. • The predicted populations are in excellent agreement with available experimental gas-phase data. • The QPM accounts for the effect of low-frequency vibrational modes on thermodynamic calculations. • Caution is needed when the RR–HO approach has to be used in conformational analysis studies. - Abstract: The contribution of internal rotation to the thermal correction of the Gibbs free energy (ΔG) is estimated using the quantum pendulum model (QPM) to solve the characteristic Schrödinger equation. The procedure is applied to the theoretical prediction of the conformational populations of 1,2-difluoroethane (1,2-DFE) and 1,2-dichloroethane (1,2-DCE). The predicted populations of the anti form were 37% and 75% for 1,2-DFE and 1,2-DCE, respectively, in excellent agreement with the available experimental gas-phase data, 37 ± 5% and 78 ± 5%. These results provide strong support for using the QPM to account for the effect of low-frequency vibrational modes on the calculation of thermodynamic properties.
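
    As a consistency check on such populations, the anti fraction follows from standard Boltzmann statistics with the doubly degenerate gauche pair competing against the anti form; a hypothetical gauche–anti free-energy gap of about 4.4 kJ/mol reproduces an anti population near the 75% reported for 1,2-DCE (this is textbook statistics, not the QPM calculation itself):

```python
import numpy as np

R = 8.314462618e-3        # gas constant, kJ/(mol*K)
T = 298.15                # temperature, K

def anti_population(dG_gauche_minus_anti):
    # Boltzmann population of the anti conformer against the doubly
    # degenerate gauche pair (standard conformational statistics).
    return 1.0 / (1.0 + 2.0 * np.exp(-dG_gauche_minus_anti / (R * T)))

# Hypothetical free-energy gap chosen to land near a ~75% anti population.
p_anti = anti_population(4.44)
```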

  17. Hyaluronan and N-ERC/mesothelin as key biomarkers in a specific two-step model to predict pleural malignant mesothelioma.

    Science.gov (United States)

    Mundt, Filip; Nilsonne, Gustav; Arslan, Sertaç; Csürös, Karola; Hillerdal, Gunnar; Yildirim, Huseyin; Metintas, Muzaffer; Dobra, Katalin; Hjerpe, Anders

    2013-01-01

    Diagnosis of malignant mesothelioma is challenging. The first available diagnostic material is often an effusion and biochemical analysis of soluble markers may provide additional diagnostic information. This study aimed to establish a predictive model using biomarkers from pleural effusions, to allow early and accurate diagnosis. Effusions were collected prospectively from 190 consecutive patients at a regional referral centre. Hyaluronan, N-ERC/mesothelin, C-ERC/mesothelin, osteopontin, syndecan-1, syndecan-2, and thioredoxin were measured using ELISA and HPLC. A predictive model was generated and validated using a second prospective set of 375 effusions collected consecutively at a different referral centre. Biochemical markers significantly associated with mesothelioma were hyaluronan (odds ratio, 95% CI: 8.82, 4.82-20.39), N-ERC/mesothelin (4.81, 3.19-7.93), C-ERC/mesothelin (3.58, 2.43-5.59) and syndecan-1 (1.34, 1.03-1.77). A two-step model using hyaluronan and N-ERC/mesothelin, and combining a threshold decision rule with logistic regression, yielded good discrimination with an area under the ROC curve of 0.99 (95% CI: 0.97-1.00) in the model generation dataset and 0.83 (0.74-0.91) in the validation dataset, respectively. A two-step model using hyaluronan and N-ERC/mesothelin predicts mesothelioma with high specificity. This method can be performed on the first available effusion and could be a useful adjunct to the morphological diagnosis of mesothelioma.
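
    The two-step decision structure described above (a threshold rule on hyaluronan first, then logistic regression on both markers) can be sketched as follows; the cutoff and the fitted weights are hypothetical placeholders, not the study's fitted values:

```python
import numpy as np

def two_step_predict(hyaluronan, n_erc, w, b, ha_cutoff=30.0):
    # Step 1: threshold decision rule on hyaluronan alone.
    # The cutoff and the weights (w, b) are hypothetical values.
    if hyaluronan >= ha_cutoff:
        return 1.0                       # high hyaluronan -> call mesothelioma
    # Step 2: logistic regression on (log-scaled) marker levels.
    z = b + w[0] * np.log1p(hyaluronan) + w[1] * np.log1p(n_erc)
    return 1.0 / (1.0 + np.exp(-z))

# Below the cutoff, the logistic step returns a graded probability.
p = two_step_predict(hyaluronan=5.0, n_erc=2.0, w=(0.8, 1.1), b=-4.0)
```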

  18. Hyaluronan and N-ERC/mesothelin as key biomarkers in a specific two-step model to predict pleural malignant mesothelioma.

    Directory of Open Access Journals (Sweden)

    Filip Mundt

    Full Text Available PURPOSE: Diagnosis of malignant mesothelioma is challenging. The first available diagnostic material is often an effusion and biochemical analysis of soluble markers may provide additional diagnostic information. This study aimed to establish a predictive model using biomarkers from pleural effusions, to allow early and accurate diagnosis. PATIENTS AND METHODS: Effusions were collected prospectively from 190 consecutive patients at a regional referral centre. Hyaluronan, N-ERC/mesothelin, C-ERC/mesothelin, osteopontin, syndecan-1, syndecan-2, and thioredoxin were measured using ELISA and HPLC. A predictive model was generated and validated using a second prospective set of 375 effusions collected consecutively at a different referral centre. RESULTS: Biochemical markers significantly associated with mesothelioma were hyaluronan (odds ratio, 95% CI: 8.82, 4.82-20.39), N-ERC/mesothelin (4.81, 3.19-7.93), C-ERC/mesothelin (3.58, 2.43-5.59) and syndecan-1 (1.34, 1.03-1.77). A two-step model using hyaluronan and N-ERC/mesothelin, and combining a threshold decision rule with logistic regression, yielded good discrimination with an area under the ROC curve of 0.99 (95% CI: 0.97-1.00) in the model generation dataset and 0.83 (0.74-0.91) in the validation dataset, respectively. CONCLUSIONS: A two-step model using hyaluronan and N-ERC/mesothelin predicts mesothelioma with high specificity. This method can be performed on the first available effusion and could be a useful adjunct to the morphological diagnosis of mesothelioma.

  19. Predicting United States Medical Licensing Examination Step 2 clinical knowledge scores from previous academic indicators

    Directory of Open Access Journals (Sweden)

    Monteiro KA

    2017-06-01

    Full Text Available Kristina A Monteiro, Paul George, Richard Dollase, Luba Dumenco Office of Medical Education, The Warren Alpert Medical School of Brown University, Providence, RI, USA Abstract: The use of multiple academic indicators to identify students at risk of experiencing difficulty completing licensure requirements provides an opportunity to increase support services prior to high-stakes licensure examinations, including the United States Medical Licensing Examination (USMLE) Step 2 clinical knowledge (CK). Step 2 CK is becoming increasingly important in decision-making by residency directors because of increasing undergraduate medical enrollment and limited available residency vacancies. We created and validated a regression equation to predict students’ Step 2 CK scores from previous academic indicators to identify students at risk, with sufficient time to intervene with additional support services as necessary. Data from three cohorts of students (N=218) with preclinical mean course exam scores, National Board of Medical Examiners (NBME) subject examinations, and USMLE Step 1 and Step 2 CK between 2011 and 2013 were used in the analyses. The authors created models capable of predicting Step 2 CK scores from academic indicators to identify at-risk students. In model 1, preclinical mean course exam score and Step 1 score accounted for 56% of the variance in Step 2 CK score. The second series of models included mean preclinical course exam score, Step 1 score, and scores on three NBME subject exams, and accounted for 67%–69% of the variance in Step 2 CK score. The authors validated the findings on the most recent cohort of graduating students (N=89) and predicted Step 2 CK scores within a mean of four points (SD=8). The authors suggest using the first model as a needs assessment to gauge the level of future support required after completion of preclinical course requirements, and rescreening after three of six clerkships to identify students who might benefit from
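
    Model 1 above is an ordinary two-predictor linear regression of Step 2 CK score on the preclinical mean and Step 1 score. A sketch on simulated scores (the coefficients and score distributions are fabricated for illustration, so the R² here will not match the study's 56%):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 218
preclin = rng.normal(80, 6, n)     # mean preclinical course exam score
step1 = rng.normal(230, 18, n)     # USMLE Step 1 score
# Simulated Step 2 CK: a linear signal plus noise (placeholder relation).
step2ck = 40 + 0.5 * preclin + 0.6 * step1 + rng.normal(0, 8, n)

# Fit model 1: Step 2 CK ~ intercept + preclinical mean + Step 1.
X = np.column_stack([np.ones(n), preclin, step1])
beta, *_ = np.linalg.lstsq(X, step2ck, rcond=None)

# R^2: share of Step 2 CK variance explained by the two indicators.
resid = step2ck - X @ beta
r2 = 1 - resid.var() / step2ck.var()
```

    A prediction for a new student is then `beta @ [1, preclin_score, step1_score]`, which is the "regression equation" the abstract proposes as a needs-assessment screen.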

  20. Young adults, social networks, and addiction recovery: post treatment changes in social ties and their role as a mediator of 12-step participation.

    Directory of Open Access Journals (Sweden)

    John F Kelly

    Full Text Available Social factors play a key role in addiction recovery. Research with adults indicates individuals with substance use disorder (SUD) benefit from mutual-help organizations (MHOs), such as Alcoholics Anonymous, via their ability to facilitate adaptive network changes. Given the lower prevalence of sobriety-conducive, and sobriety-supportive, social contexts in the general population during the life-stage of young adulthood, however, 12-step MHOs may play an even more crucial recovery-supportive social role for young adults, but have not been investigated. Greater knowledge could enhance understanding of recovery-related change and inform young adults' continuing care recommendations. Emerging adults (N = 302; 18-24 yrs; 26% female; 95% White) enrolled in a study of residential treatment effectiveness were assessed at intake, 1, 3, 6, and 12 months on 12-step attendance, peer network variables ("high [relapse] risk" and "low [relapse] risk" friends), and treatment outcomes (Percent Days Abstinent; Percent Days Heavy Drinking). Hierarchical linear models tested for change in social risk over time and lagged mediational analyses tested whether 12-step attendance conferred recovery benefits via change in social risk. High-risk friends were common at treatment entry, but decreased during follow-up; low-risk friends increased. Contrary to predictions, while substantial recovery-supportive friend network changes were observed, this was unrelated to 12-step participation and, thus, not found to mediate its positive influence on outcome. Young adult 12-step participation confers recovery benefit; yet, while encouraging social network change, 12-step MHOs may be less able to provide social network change directly for young adults, perhaps because similar-aged peers are less common in MHOs. Findings highlight the importance of both social networks and 12-step MHOs and raise further questions as to how young adults benefit from 12-step MHOs.

  1. Young adults, social networks, and addiction recovery: post treatment changes in social ties and their role as a mediator of 12-step participation.

    Science.gov (United States)

    Kelly, John F; Stout, Robert L; Greene, M Claire; Slaymaker, Valerie

    2014-01-01

    Social factors play a key role in addiction recovery. Research with adults indicates individuals with substance use disorder (SUD) benefit from mutual-help organizations (MHOs), such as Alcoholics Anonymous, via their ability to facilitate adaptive network changes. Given the lower prevalence of sobriety-conducive, and sobriety-supportive, social contexts in the general population during the life-stage of young adulthood, however, 12-step MHOs may play an even more crucial recovery-supportive social role for young adults, but have not been investigated. Greater knowledge could enhance understanding of recovery-related change and inform young adults' continuing care recommendations. Emerging adults (N = 302; 18-24 yrs; 26% female; 95% White) enrolled in a study of residential treatment effectiveness were assessed at intake, 1, 3, 6, and 12 months on 12-step attendance, peer network variables ("high [relapse] risk" and "low [relapse] risk" friends), and treatment outcomes (Percent Days Abstinent; Percent Days Heavy Drinking). Hierarchical linear models tested for change in social risk over time and lagged mediational analyses tested whether 12-step attendance conferred recovery benefits via change in social risk. High-risk friends were common at treatment entry, but decreased during follow-up; low-risk friends increased. Contrary to predictions, while substantial recovery-supportive friend network changes were observed, this was unrelated to 12-step participation and, thus, not found to mediate its positive influence on outcome. Young adult 12-step participation confers recovery benefit; yet, while encouraging social network change, 12-step MHOs may be less able to provide social network change directly for young adults, perhaps because similar-aged peers are less common in MHOs. Findings highlight the importance of both social networks and 12-step MHOs and raise further questions as to how young adults benefit from 12-step MHOs.

  2. Two-step two-stage fission gas release model

    International Nuclear Information System (INIS)

    Kim, Yong-soo; Lee, Chan-bock

    2006-01-01

Based on a recent theoretical model, a two-step two-stage model is developed which incorporates two-stage diffusion processes, grain lattice and grain boundary diffusion, coupled with a two-step burn-up factor for the low and high burn-up regimes. The FRAPCON-3 code and its in-pile data sets have been used for the benchmarking and validation of this model. Results reveal that its predictions are in better agreement with the experimental measurements than those of any model contained in the FRAPCON-3 code, such as ANS 5.4, modified ANS 5.4, and the Forsberg-Massih model, over the whole burn-up range up to 70,000 MWd/MTU. (author)

  3. 12-step programs to reduce illicit drug use

    DEFF Research Database (Denmark)

    Filges, Trine; Nielsen, Sine Kirkegaard; Jørgensen, Anne-Marie Klint

    2014-01-01

    Many treatments are not rigorously evaluated as to their effectiveness, and it is uncertain which types of interventions are more effective than others in reducing illicit drug use. The aim of this paper is to provide a systematic mapping of the research literature of the effectiveness of 12-step...... programs in reducing illicit drug use. A systematic literature search was conducted based on 17 international and Nordic Bibliographic databases. A total of 15993 references were screened, and eleven unique studies were finally included in this mapping. The included studies demonstrated conflicting results...... regarding the effectiveness of the 12-step treatment and TSF in reducing individuals’ drug use. Two studies reported a positive effect of the TSF treatment compared to the comparison conditions in reducing drug use. Six studies reported no differences between 12-step program and the comparison condition...

  4. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the records needed for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their predictions in three steps, with additional information provided before each step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models that were developed for catchments which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. 
For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  5. Does increasing steps per day predict improvement in physical function and pain interference in adults with fibromyalgia?

    Science.gov (United States)

    Kaleth, Anthony S; Slaven, James E; Ang, Dennis C

    2014-12-01

    To examine the concurrent and predictive associations between the number of steps taken per day and clinical outcomes in patients with fibromyalgia (FM). A total of 199 adults with FM (mean age 46.1 years, 95% women) who were enrolled in a randomized clinical trial wore a hip-mounted accelerometer for 1 week and completed self-report measures of physical function (Fibromyalgia Impact Questionnaire-Physical Impairment [FIQ-PI], Short Form 36 [SF-36] health survey physical component score [PCS], pain intensity and interference (Brief Pain Inventory [BPI]), and depressive symptoms (Patient Health Questionnaire-8 [PHQ-8]) as part of their baseline and followup assessments. Associations of steps per day with self-report clinical measures were evaluated from baseline to week 12 using multivariate regression models adjusted for demographic and baseline covariates. Study participants were primarily sedentary, averaging 4,019 ± 1,530 steps per day. Our findings demonstrate a linear relationship between the change in steps per day and improvement in health outcomes for FM. Incremental increases on the order of 1,000 steps per day were significantly associated with (and predictive of) improvements in FIQ-PI, SF-36 PCS, BPI pain interference, and PHQ-8 (all P physical activity. An exercise prescription that includes recommendations to gradually accumulate at least 5,000 additional steps per day may result in clinically significant improvements in outcomes relevant to patients with FM. Future studies are needed to elucidate the dose-response relationship between steps per day and patient outcomes in FM. Copyright © 2014 by the American College of Rheumatology.
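The reported association can be illustrated with an ordinary least-squares regression of outcome change on the change in steps per day, adjusted for a baseline covariate. Everything below is synthetic and illustrative (sample size, effect size, and covariate are assumptions), not the study's data:

```python
import numpy as np

# Hypothetical illustration: regress change in a clinical outcome on the
# change in steps/day, adjusting for a baseline covariate, via OLS.
rng = np.random.default_rng(0)
n = 199
delta_steps = rng.normal(1000, 800, n)          # change in steps/day
baseline = rng.normal(50, 10, n)                # e.g., a baseline score
# assume each extra 1,000 steps/day improves the outcome by ~2 points
delta_outcome = 0.002 * delta_steps - 0.05 * baseline + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), delta_steps, baseline])
beta, *_ = np.linalg.lstsq(X, delta_outcome, rcond=None)
per_1000_steps = beta[1] * 1000                 # effect per 1,000-step increment
```

The coefficient scaled to a 1,000-step increment is the quantity the abstract interprets clinically.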

  6. 12-step programs for reducing illicit drug use

    DEFF Research Database (Denmark)

    Bøg, Martin; Filges, Trine; Brännström, Lars

    2017-01-01

    12-step programs for reducing illicit drug use are neither better nor worse than other interventions Illicit drug abuse has serious and far-reaching implications for the abuser, their family members, friends, and society as a whole. Preferred intervention programs are those that effectively reduce...... illicit drug use and its negative consequences, and are cost-effective as well. Current evidence shows that overall, 12-step programs are just as effective as alternative, psychosocial interventions. The costs of programs are, therefore, an important consideration. However, the strength of the studies...

  7. Predicting falls in older adults using the four square step test.

    Science.gov (United States)

    Cleary, Kimberly; Skornyakov, Elena

    2017-10-01

    The Four Square Step Test (FSST) is a performance-based balance tool involving stepping over four single-point canes placed on the floor in a cross configuration. The purpose of this study was to evaluate properties of the FSST in older adults who lived independently. Forty-five community dwelling older adults provided fall history and completed the FSST, Berg Balance Scale (BBS), Timed Up and Go (TUG), and Tinetti in random order. Future falls were recorded for 12 months following testing. The FSST accurately distinguished between non-fallers and multiple fallers, and the 15-second threshold score accurately distinguished multiple fallers from non-multiple fallers based on fall history. The FSST predicted future falls, and performance on the FSST was significantly correlated with performance on the BBS, TUG, and Tinetti. However, the test is not appropriate for older adults who use walkers. Overall, the FSST is a valid yet underutilized measure of balance performance and fall prediction tool that physical therapists should consider using in ambulatory community dwelling older adults.

  8. Multi-step magnetization of the Ising model on a Shastry-Sutherland lattice: a Monte Carlo simulation

    International Nuclear Information System (INIS)

    Huang, W C; Huo, L; Tian, G; Qian, H R; Gao, X S; Qin, M H; Liu, J-M

    2012-01-01

The magnetization behaviors and spin configurations of the classical Ising model on a Shastry-Sutherland lattice are investigated using Monte Carlo simulations, in order to understand the fascinating magnetization plateaus observed in TmB4 and other rare-earth tetraborides. The simulations reproduce the 1/2 magnetization plateau by taking into account the dipole-dipole interaction. In addition, a narrow 2/3 magnetization step at low temperature is predicted in our simulation. The multi-step magnetization can be understood as the consequence of the competitions among the spin-exchange interaction, the dipole-dipole interaction, and the static magnetic energy.
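A minimal sketch of the simulation technique named here, single-spin-flip Metropolis Monte Carlo for an Ising model in an applied field; the dipole-dipole interaction and the Shastry-Sutherland geometry used in the paper are omitted for brevity, and the lattice size, J, h, and T are illustrative choices:

```python
import numpy as np

# Single-spin-flip Metropolis sketch for a nearest-neighbour Ising model
# in an applied field h (E = -J * sum_<ij> s_i s_j - h * sum_i s_i).
rng = np.random.default_rng(1)
L, J, h, T = 8, 1.0, 3.0, 0.5
spins = rng.choice([-1, 1], size=(L, L))

def local_field(s, i, j):
    # sum of the four nearest neighbours (periodic boundaries)
    return s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]

for sweep in range(300):
    for i in range(L):
        for j in range(L):
            dE = 2 * spins[i, j] * (J * local_field(spins, i, j) + h)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1

m = spins.mean()   # magnetization per spin
```

With a strong field and low temperature the sketch saturates toward m = 1; plateaus at fractional m require the longer-range interactions the paper includes.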

  9. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
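The prediction-error idea can be sketched for a scalar state-space model: the Kalman filter supplies the one-step prediction errors (innovations), and the maximum-likelihood criterion is their Gaussian negative log-likelihood. The model and parameter values below are assumptions for illustration:

```python
import numpy as np

# Illustrative scalar state-space model: x_{k+1} = a x_k + w, y_k = c x_k + v.
rng = np.random.default_rng(2)
a, c, q, r = 0.9, 1.0, 0.1, 0.2
n = 500
x = np.zeros(n); y = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k-1] + rng.normal(0, np.sqrt(q))
for k in range(n):
    y[k] = c * x[k] + rng.normal(0, np.sqrt(r))

def neg_log_likelihood(a_hat):
    # Kalman filter under candidate dynamics a_hat; accumulate the
    # Gaussian negative log-likelihood of the innovations.
    xp, Pp = 0.0, 1.0                     # predicted state and covariance
    nll = 0.0
    for k in range(n):
        e = y[k] - c * xp                 # innovation (one-step prediction error)
        S = c * Pp * c + r                # innovation variance
        nll += 0.5 * (np.log(2 * np.pi * S) + e * e / S)
        K = Pp * c / S                    # Kalman gain
        xf = xp + K * e
        Pf = (1 - K * c) * Pp
        xp, Pp = a_hat * xf, a_hat * Pf * a_hat + q
    return nll
```

The criterion should prefer the true dynamics (a = 0.9) over a badly mismatched candidate, which is the basis of prediction-error estimation.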

  10. Techniques for discrimination-free predictive models (Chapter 12)

    NARCIS (Netherlands)

    Kamiran, F.; Calders, T.G.K.; Pechenizkiy, M.; Custers, B.H.M.; Calders, T.G.K.; Schermer, B.W.; Zarsky, T.Z.

    2013-01-01

    In this chapter, we give an overview of the techniques developed ourselves for constructing discrimination-free classifiers. In discrimination-free classification the goal is to learn a predictive model that classifies future data objects as accurately as possible, yet the predicted labels should be

  11. Finding God through the Spirituality of the 12 Steps of Alcoholics Anonymous

    Directory of Open Access Journals (Sweden)

    Jeff Sandoz

    2014-09-01

    Full Text Available The 12 Step program of Alcoholics Anonymous has provided relief for individuals recovering from alcoholism for over 75 years. The key to the recovery process is a spiritual experience as the result of practicing the daily discipline of the 12 Steps, a process which evokes a psychic change sufficient to recover from this disease. Although a relatively new spiritual discipline, the 12 Step program is built upon a foundation of much older and more traditional paths to God including devotion, understanding, service and meditation. Recent research provides insights into the 12 Step program. Specifically, the path of recovery is highlighted by the reduction of resentment and the promotion of forgiveness which are key factors of recovery.

  12. Stabilization of a three-dimensional limit cycle walking model through step-to-step ankle control.

    Science.gov (United States)

    Kim, Myunghee; Collins, Steven H

    2013-06-01

    Unilateral, below-knee amputation is associated with an increased risk of falls, which may be partially related to a loss of active ankle control. If ankle control can contribute significantly to maintaining balance, even in the presence of active foot placement, this might provide an opportunity to improve balance using robotic ankle-foot prostheses. We investigated ankle- and hip-based walking stabilization methods in a three-dimensional model of human gait that included ankle plantarflexion, ankle inversion-eversion, hip flexion-extension, and hip ad/abduction. We generated discrete feedback control laws (linear quadratic regulators) that altered nominal actuation parameters once per step. We used ankle push-off, lateral ankle stiffness and damping, fore-aft foot placement, lateral foot placement, or all of these as control inputs. We modeled environmental disturbances as random, bounded, unexpected changes in floor height, and defined balance performance as the maximum allowable disturbance value for which the model walked 500 steps without falling. Nominal walking motions were unstable, but were stabilized by all of the step-to-step control laws we tested. Surprisingly, step-by-step modulation of ankle push-off alone led to better balance performance (3.2% leg length) than lateral foot placement (1.2% leg length) for these control laws. These results suggest that appropriate control of robotic ankle-foot prosthesis push-off could make balancing during walking easier for individuals with amputation.
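The discrete feedback design named in this abstract, a linear quadratic regulator applied once per step, can be sketched with a toy two-state step-to-step (Poincaré-map) model; the dynamics, inputs, and cost weights below are illustrative stand-ins, not the paper's walking model:

```python
import numpy as np

# Discrete-time LQR by iterating the Riccati recursion to steady state.
A = np.array([[1.2, 0.3],                  # unstable step-to-step dynamics
              [0.1, 0.9]])
B = np.array([[1.0, 0.0],                  # inputs: e.g. push-off, foot placement
              [0.0, 1.0]])
Q = np.eye(2)                              # state-deviation cost
R = 0.1 * np.eye(2)                        # actuation cost

P = Q.copy()
for _ in range(500):                       # value iteration until P converges
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

closed_loop = A - B @ K                    # once-per-step feedback u = -K x
spectral_radius = max(abs(np.linalg.eigvals(closed_loop)))
```

The open-loop map is unstable (an eigenvalue outside the unit circle), while the once-per-step feedback renders the closed-loop map stable, mirroring how the nominal walking motions were stabilized.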

  13. Stutter-Step Models of Performance in School

    Science.gov (United States)

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  14. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-steps prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
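The adaptive local-model idea can be sketched as delay embedding plus nearest-neighbour averaging: reconstruct the phase space from the scalar series, find the dynamical neighbours of the current state, and average their successors. Embedding dimension, delay, neighbour count, and the surrogate signal are illustrative choices:

```python
import numpy as np

# Local (nearest-neighbour) model in a reconstructed phase space.
def local_model_predict(series, m=3, tau=1, k=5):
    # build delay vectors [x_t, x_{t-tau}, ..., x_{t-(m-1)tau}]
    idx = np.arange((m - 1) * tau, len(series) - 1)
    emb = np.column_stack([series[idx - j * tau] for j in range(m)])
    query = emb[-1]                       # current reconstructed state
    dists = np.linalg.norm(emb[:-1] - query, axis=1)
    nn = np.argsort(dists)[:k]            # dynamical neighbours of that state
    return series[idx[nn] + 1].mean()     # average of their one-step successors

t = np.arange(400)
x = np.sin(0.2 * t)                       # surrogate "surge" signal
pred = local_model_predict(x[:-1])        # one-step-ahead prediction of x[398]
```

Multi-step prediction iterates this one-step rule, feeding each prediction back into the embedding.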

  15. Step Prediction During Perturbed Standing Using Center Of Pressure Measurements

    Directory of Open Access Journals (Sweden)

    Milos R. Popovic

    2007-04-01

Full Text Available The development of a sensor that can measure balance during quiet standing and predict stepping response in the event of perturbation has many clinically relevant applications, including closed-loop control of a neuroprosthesis for standing. This study investigated the feasibility of an algorithm that can predict in real-time when an able-bodied individual who is quietly standing will have to make a step to compensate for an external perturbation. Anterior and posterior perturbations were performed on 16 able-bodied subjects using a pulley system with a dropped weight. A linear relationship was found between the peak center of pressure (COP) velocity and the peak COP displacement caused by the perturbation. This result suggests that one can predict when a person will have to make a step based on COP velocity measurements alone. Another important feature of this finding is that the peak COP velocity occurs considerably before the peak COP displacement. As a result, one can predict whether a subject will have to make a step in response to a perturbation sufficiently ahead of the time when the subject is actually forced to make the step. The proposed instability detection algorithm will be implemented in a sensor system using insole sheets in shoes with miniaturized pressure sensors by which the COP velocity can be continuously measured. The sensor system will be integrated in a closed-loop feedback system with a neuroprosthesis for standing in the near future.
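A minimal sketch of such an instability detector: flag an impending compensatory step as soon as the COP velocity crosses a critical value, which happens well before the displacement peak. The signal, sampling rate, and threshold below are assumptions for the sketch, not the study's values:

```python
import numpy as np

# Threshold detector on centre-of-pressure (COP) velocity.
def predict_step(cop, dt, v_crit):
    v = np.gradient(cop, dt)              # COP velocity (finite differences)
    over = np.flatnonzero(np.abs(v) > v_crit)
    return over[0] if over.size else None # sample index of first warning

dt = 0.01                                 # e.g. a 100 Hz insole pressure sensor
t = np.arange(0, 2, dt)
cop = 0.002 * np.sin(2 * np.pi * t)       # quiet-standing sway (m)
cop[150:] += 0.25 * (t[150:] - t[150])**2 # perturbation onset at t = 1.5 s
warn = predict_step(cop, dt, v_crit=0.05)
```

The warning index falls shortly after perturbation onset and well before the COP displacement peaks, which is what makes velocity-based prediction useful for closed-loop control.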

  16. Template-based and free modeling of I-TASSER and QUARK pipelines using predicted contact maps in CASP12.

    Science.gov (United States)

    Zhang, Chengxin; Mortuza, S M; He, Baoji; Wang, Yanting; Zhang, Yang

    2018-03-01

    We develop two complementary pipelines, "Zhang-Server" and "QUARK", based on I-TASSER and QUARK pipelines for template-based modeling (TBM) and free modeling (FM), and test them in the CASP12 experiment. The combination of I-TASSER and QUARK successfully folds three medium-size FM targets that have more than 150 residues, even though the interplay between the two pipelines still awaits further optimization. Newly developed sequence-based contact prediction by NeBcon plays a critical role to enhance the quality of models, particularly for FM targets, by the new pipelines. The inclusion of NeBcon predicted contacts as restraints in the QUARK simulations results in an average TM-score of 0.41 for the best in top five predicted models, which is 37% higher than that by the QUARK simulations without contacts. In particular, there are seven targets that are converted from non-foldable to foldable (TM-score >0.5) due to the use of contact restraints in the simulations. Another additional feature in the current pipelines is the local structure quality prediction by ResQ, which provides a robust residue-level modeling error estimation. Despite the success, significant challenges still remain in ab initio modeling of multi-domain proteins and folding of β-proteins with complicated topologies bound by long-range strand-strand interactions. Improvements on domain boundary and long-range contact prediction, as well as optimal use of the predicted contacts and multiple threading alignments, are critical to address these issues seen in the CASP12 experiment. © 2017 Wiley Periodicals, Inc.

  17. Applying a health action model to predict and improve healthy behaviors in coal miners.

    Science.gov (United States)

    Vahedian-Shahroodi, Mohammad; Tehrani, Hadi; Mohammadi, Faeze; Gholian-Aval, Mahdi; Peyman, Nooshin

    2018-05-01

    One of the most important ways to prevent work-related diseases in occupations such as mining is to promote healthy behaviors among miners. This study aimed to predict and promote healthy behaviors among coal miners by using a health action model (HAM). The study was conducted on 200 coal miners in Iran in two steps. In the first step, a descriptive study was implemented to determine predictive constructs and effectiveness of HAM on behavioral intention. The second step involved a quasi-experimental study to determine the effect of an HAM-based education intervention. This intervention was implemented by the researcher and the head of the safety unit based on the predictive construct specified in the first step over 12 sessions of 60 min. The data was collected using an HAM questionnaire and a checklist of healthy behavior. The results of the first step of the study showed that attitude, belief, and normative constructs were meaningful predictors of behavioral intention. Also, the results of the second step revealed that the mean score of attitude and behavioral intention increased significantly after conducting the intervention in the experimental group, while the mean score of these constructs decreased significantly in the control group. The findings of this study showed that HAM-based educational intervention could improve the healthy behaviors of mine workers. Therefore, it is recommended to extend the application of this model to other working groups to improve healthy behaviors.

  18. IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.

    Science.gov (United States)

    Huang, Lihan

    2017-12-04

    The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for determination of kinetic parameters in predictive microbiology. The algorithm is incorporated with user-friendly graphical interfaces (GUIs) to develop a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide the users to easily navigate through the data analysis process and properly select the initial parameters for different combinations of mathematical models. The software is developed for one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent model parameters in the package. The software is tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving the inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.

  19. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.

    Science.gov (United States)

    Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu

    2018-01-19

Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. 
This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data using sensing
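The first two stages of the proposed pipeline can be sketched on synthetic data: a Kalman filter denoises the deformation series, then an autoregressive fit (the ARIMA core, with no differencing here) gives a one-step forecast. The GARCH variance stage is omitted for brevity, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 600
true = np.cumsum(rng.normal(0, 0.05, n))      # slowly drifting deformation (mm)
y = true + rng.normal(0, 0.5, n)              # noisy GNSS-like measurements

# random-walk Kalman filter (process var q and measurement var r assumed known)
q, r = 0.05**2, 0.5**2
xf, P = y[0], 1.0
filtered = np.empty(n)
for k in range(n):
    Pp = P + q                                # predict
    K = Pp / (Pp + r)                         # Kalman gain
    xf = xf + K * (y[k] - xf)                 # update with the measurement
    P = (1 - K) * Pp
    filtered[k] = xf

# AR(2) fit on the filtered series by least squares, then a one-step forecast
X = np.column_stack([filtered[1:-1], filtered[:-2]])
phi, *_ = np.linalg.lstsq(X, filtered[2:], rcond=None)
forecast = phi @ filtered[-1:-3:-1]

mae_raw = np.abs(y - true).mean()             # error before filtering
mae_filt = np.abs(filtered - true).mean()     # error after filtering
```

The filtered series tracks the true deformation much more closely than the raw measurements, mirroring finding (1) of the abstract.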

  20. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model

    Directory of Open Access Journals (Sweden)

    Jingzhou Xin

    2018-01-01

Full Text Available Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. 
This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data

  1. A two step Bayesian approach for genomic prediction of breeding values

    DEFF Research Database (Denmark)

    Mahdi Shariati, Mohammad; Sørensen, Peter; Janss, Luc

    2012-01-01

Background: In genomic models that assign an individual variance to each marker, the contribution of one marker to the posterior distribution of the marker variance is only one degree of freedom (df), which introduces many variance parameters with only little information per variance parameter. A better alternative could be to form clusters of markers with similar effects where markers in a cluster have a common variance. Therefore, the influence of each marker group of size p on the posterior distribution of the marker variances will be p df. Methods: The simulated data from the 15th QTL-MAS workshop were analyzed such that SNP markers were ranked based on their effects and markers with similar estimated effects were grouped together. In step 1, all markers with minor allele frequency more than 0.01 were included in a SNP-BLUP prediction model. In step 2, markers were ranked based...
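Step 1's SNP-BLUP model is equivalent to ridge regression of phenotypes on centred marker genotypes with one common shrinkage parameter. A sketch on simulated data (marker counts, effect sizes, and the shrinkage value are assumptions for illustration):

```python
import numpy as np

# SNP-BLUP as ridge regression: all markers share one variance ratio lambda.
rng = np.random.default_rng(4)
n_ind, n_snp = 200, 500
M = rng.binomial(2, 0.3, size=(n_ind, n_snp)).astype(float)  # 0/1/2 genotypes
M -= M.mean(axis=0)                        # centre allele counts
true_effects = np.zeros(n_snp)
true_effects[:10] = rng.normal(0, 0.5, 10) # ten causal markers (assumed)
y = M @ true_effects + rng.normal(0, 1.0, n_ind)

lam = 100.0                                # common shrinkage (assumed)
ghat = np.linalg.solve(M.T @ M + lam * np.eye(n_snp), M.T @ y)  # BLUP effects
gebv = M @ ghat                            # genomic estimated breeding values
accuracy = np.corrcoef(gebv, M @ true_effects)[0, 1]
```

Assigning group-specific variances, as the abstract proposes for step 2, amounts to replacing the single lambda with one shrinkage value per marker cluster.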

  2. 12-Step Interventions and Mutual Support Programs for Substance Use Disorders: An Overview

    Science.gov (United States)

    Donovan, Dennis M.; Ingalsbe, Michelle H.; Benbow, James; Daley, Dennis C.

    2013-01-01

Social workers and other behavioral health professionals are likely to encounter individuals with substance use disorders in a variety of practice settings outside of specialty treatment. 12-Step mutual support programs represent readily available, no-cost community-based resources for such individuals; however, practitioners are often unfamiliar with such programs. The present article provides a brief overview of 12-Step programs, the positive substance use and psychosocial outcomes associated with active 12-Step involvement, and approaches ranging from ones that can be utilized by social workers in any practice setting to those developed for specialty treatment programs to facilitate engagement in 12-Step meetings and recovery activities. The goal is to familiarize social workers with 12-Step approaches so that they are better able to make informed referrals that match clients to mutual support groups that best meet the individual’s needs and maximize the likelihood of engagement and positive outcomes. PMID:23731422

  3. Wind Speed Prediction Using a Univariate ARIMA Model and a Multivariate NARX Model

    Directory of Open Access Journals (Sweden)

    Erasmo Cadenas

    2016-02-01

Full Text Available Two one-step-ahead wind speed forecasting models were compared. A univariate model was developed using a linear autoregressive integrated moving average (ARIMA). This method's performance is well studied for a large number of prediction problems. The other is a multivariate model developed using a nonlinear autoregressive exogenous artificial neural network (NARX). This uses the variables: barometric pressure, air temperature, wind direction and solar radiation or relative humidity, as well as delayed wind speed. Both models were developed from two databases from two sites: an hourly average measurements database from La Mata, Oaxaca, Mexico, and a ten minute average measurements database from Metepec, Hidalgo, Mexico. The main objective was to compare the impact of the various meteorological variables on the performance of the multivariate model of wind speed prediction with respect to the high performance univariate linear model. The NARX model gave better results, with improvements over the ARIMA model of between 5.5% and 10.6% for the hourly database and of between 2.3% and 12.8% for the ten minute database, for mean absolute error and mean squared error, respectively.
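The reported improvement percentages are relative reductions in mean absolute error (MAE) and mean squared error (MSE) of one model's one-step forecasts against the other's. A sketch of that computation, with a synthetic series and simple stand-in forecasters (not the paper's ARIMA or NARX models):

```python
import numpy as np

# Synthetic AR(1) "wind speed" series and two one-step-ahead forecasters.
rng = np.random.default_rng(5)
n = 1000
w = np.empty(n); w[0] = 5.0
for k in range(1, n):
    w[k] = 0.8 * w[k-1] + 1.0 + rng.normal(0, 0.5)

f_base = w[:-1]                            # persistence forecast (baseline)
f_alt = 0.8 * w[:-1] + 1.0                 # forecast using the true dynamics

err_b, err_a = w[1:] - f_base, w[1:] - f_alt
mae_gain = 100 * (np.abs(err_b).mean() - np.abs(err_a).mean()) / np.abs(err_b).mean()
mse_gain = 100 * ((err_b**2).mean() - (err_a**2).mean()) / (err_b**2).mean()
```

`mae_gain` and `mse_gain` are the percentage improvements in the same form as the 5.5%-12.8% figures quoted in the abstract.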

  4. Logistic regression analysis to predict Medical Licensing Examination of Thailand (MLET) Step1 success or failure.

    Science.gov (United States)

    Wanvarie, Samkaew; Sathapatayavongs, Boonmee

    2007-09-01

    The aim of this paper was to assess factors that predict students' performance in the Medical Licensing Examination of Thailand (MLET) Step1 examination. The hypothesis was that demographic factors and academic records would predict the students' performance in the Step1 Licensing Examination. A logistic regression analysis of demographic factors (age, sex and residence) and academic records [high school grade point average (GPA), National University Entrance Examination Score and GPAs of the pre-clinical years] with the MLET Step1 outcome was accomplished using the data of 117 third-year Ramathibodi medical students. Twenty-three (19.7%) students failed the MLET Step1 examination. Stepwise logistic regression analysis showed that the significant predictors of MLET Step1 success/failure were residence background and GPAs of the second and third preclinical years. For students whose sophomore and third-year GPAs increased by an average of 1 point, the odds of passing the MLET Step1 examination increased by a factor of 16.3 and 12.8 respectively. The minimum GPAs for students from urban and rural backgrounds to pass the examination were estimated from the equation (2.35 vs 2.65 from 4.00 scale). Students from rural backgrounds and/or low-grade point averages in their second and third preclinical years of medical school are at risk of failing the MLET Step1 examination. They should be given intensive tutorials during the second and third pre-clinical years.
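In a fitted logistic regression, statements of the form "the odds of passing increased by a factor of 16.3 per GPA point" come from exponentiating the corresponding coefficient. A sketch on synthetic data (the underlying coefficients below are assumptions, so the fitted multiplier will not reproduce the paper's 16.3):

```python
import numpy as np

# Fit pass/fail ~ GPA by gradient ascent on the logistic log-likelihood.
rng = np.random.default_rng(6)
n = 400
gpa = rng.uniform(2.0, 4.0, n)            # synthetic preclinical GPAs
logit = -7.0 + 2.8 * gpa                  # assumed underlying pass model
passed = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), gpa - gpa.mean()])  # centring aids convergence
beta = np.zeros(2)
for _ in range(2000):
    mu = 1 / (1 + np.exp(-X @ beta))      # predicted pass probabilities
    beta += 1.0 * X.T @ (passed - mu) / n # gradient ascent step
odds_ratio = np.exp(beta[1])              # odds multiplier per extra GPA point
```

Centring the predictor changes the intercept but not the slope, so the per-point odds multiplier is unaffected.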

  5. Quantummechanical multi-step direct models for nuclear data applications

    International Nuclear Information System (INIS)

    Koning, A.J.

    1992-10-01

    Various multi-step direct models have been derived and compared on a theoretical level. Subsequently, these models have been implemented in the computer code system KAPSIES, enabling a consistent comparison on the basis of the same set of nuclear parameters and same set of numerical techniques. Continuum cross sections in the energy region between 10 and several hundreds of MeV have successfully been analysed. Both angular distributions and energy spectra can be predicted in an essentially parameter-free manner. It is demonstrated that the quantum-mechanical MSD models (in particular the FKK model) give an improved prediction of pre-equilibrium angular distributions as compared to the experiment-based systematics of Kalbach. This makes KAPSIES a reliable tool for nuclear data applications in the afore-mentioned energy region. (author). 10 refs., 2 figs

  6. One-Step Dynamic Classifier Ensemble Model for Customer Value Segmentation with Missing Values

    Directory of Open Access Journals (Sweden)

    Jin Xiao

    2014-01-01

    Full Text Available Scientific customer value segmentation (CVS) is the basis of efficient customer relationship management; customer credit scoring, fraud detection, and churn prediction all belong to CVS. In real CVS, the customer data usually include many missing values, which may greatly affect the performance of the CVS model. This study proposes a one-step dynamic classifier ensemble model for missing values (ODCEM). On the one hand, ODCEM integrates the preprocessing of missing values and the classification modeling into one step; on the other hand, it utilizes multiple-classifier ensemble technology in constructing the classification models. The empirical results on the credit scoring dataset “German” from UCI and the real customer churn prediction dataset “China churn” show that ODCEM outperforms four commonly used “two-step” models and the ensemble-based model LMF, and can provide better decision support for market managers.
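
The "one-step" idea, handling missing values inside the ensemble rather than as a separate imputation pass, can be sketched with nearest-centroid base learners routed by missingness pattern. The data and learners are assumptions for illustration, not the ODCEM algorithm or its "German"/"China churn" datasets:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic churn-style data; ~30% of feature 1 goes missing after labeling
n = 600
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[rng.uniform(size=n) < 0.3, 1] = np.nan

Xtr, ytr, Xte, yte = X[:400], y[:400], X[400:], y[400:]

def centroid_clf(F, labels):
    # Nearest-centroid classifier on feature matrix F
    c0, c1 = F[labels == 0].mean(0), F[labels == 1].mean(0)
    return lambda Q: (np.linalg.norm(Q - c1, axis=1) <
                      np.linalg.norm(Q - c0, axis=1)).astype(int)

complete = ~np.isnan(Xtr[:, 1])
clf_full = centroid_clf(Xtr[complete], ytr[complete])  # uses both features
clf_f0 = centroid_clf(Xtr[:, :1], ytr)                 # feature 0 only

# One-step prediction: route each test case to the member matching its
# missingness pattern, so no separate imputation stage is needed
miss = np.isnan(Xte[:, 1])
pred = np.empty(len(yte), dtype=int)
pred[~miss] = clf_full(Xte[~miss])
pred[miss] = clf_f0(Xte[miss][:, :1])

acc = float((pred == yte).mean())
print(f"accuracy of the missingness-aware ensemble: {acc:.2f}")
```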

  7. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    Science.gov (United States)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in the industrial Internet of things (IIoT), which can accurately predict machine failure in a real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, and 3) visualization. The binarization step translates item values in a dataset into one or zero; the rule creation step then creates association rules as IF-THEN structures using the Lattice model and the Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
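
The binarization-to-rule-creation flow can be sketched in a few lines of support/confidence mining. The event names, thresholds, and restriction to single-antecedent rules are assumptions for illustration (the paper uses a Lattice model and Apriori over a real IIoT dataset):

```python
from itertools import combinations

# Hypothetical binarized machine-event log: each transaction is the set of
# conditions observed in one maintenance window (assumed, not the paper's data)
transactions = [
    {"overheat", "vibration", "failure:spindle"},
    {"overheat", "failure:spindle"},
    {"vibration", "coolant_low"},
    {"overheat", "vibration", "failure:spindle"},
    {"coolant_low"},
    {"overheat", "failure:spindle"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

# Apriori-style pass: keep frequent single items, then form IF-THEN rules
# whose consequent is a failure type
min_sup, min_conf = 0.3, 0.8
frequent = [i for i in {x for t in transactions for x in t} if support({i}) >= min_sup]
rules = []
for a, b in combinations(frequent, 2):
    for ante, cons in ((a, b), (b, a)):
        if cons.startswith("failure:"):
            conf = support({ante, cons}) / support({ante})
            if conf >= min_conf and support({ante, cons}) >= min_sup:
                rules.append((ante, cons, conf))

for ante, cons, conf in rules:
    print(f"IF {ante} THEN {cons}  (confidence {conf:.2f})")
```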

  8. Age differences in outcomes among patients in the "Stimulant Abuser Groups to Engage in 12-Step" (STAGE-12) intervention.

    Science.gov (United States)

    Garrett, Sharon B; Doyle, Suzanne R; Peavy, K Michelle; Wells, Elizabeth A; Owens, Mandy D; Shores-Wilson, Kathy; DiCenzo, Jessica; Donovan, Dennis M

    2018-01-01

    Emerging adults (roughly 18-29 years) with substance use disorders can benefit from participation in twelve-step mutual-help organizations (TSMHO); however, their attendance and participation in such groups are relatively low. Twelve-step facilitation therapies, such as the Stimulant Abuser Groups to Engage in 12-Step (STAGE-12) intervention, may increase attendance and involvement and lead to decreased substance use. Analyses examined whether age moderated the STAGE-12 effects on substance use and TSMHO meeting attendance and participation. We utilized data from a multisite randomized controlled trial, with assessments at baseline, mid-treatment (week 4), end-of-treatment (week 8), and 3 and 6 months post-randomization. Participants were adults with DSM-IV-diagnosed stimulant abuse or dependence (N=450) enrolling in 10 intensive outpatient substance use treatment programs across the U.S. A zero-inflated negative binomial random-effects regression model was used to examine age-by-treatment interactions on substance use and meeting attendance and involvement. Younger age was associated with larger treatment effects for stimulant use. Specifically, younger age was associated with greater odds of remaining abstinent from stimulants in STAGE-12 versus treatment-as-usual (TAU); however, among those who were not abstinent during treatment, younger age was related to greater rates of stimulant use at follow-up for those in STAGE-12 compared to TAU. There was no main effect of age on stimulant use. Younger age was also related to somewhat greater active involvement in different types of TSMHO activities among those in STAGE-12 versus TAU. There were no age-by-treatment interactions for other types of substance use or for treatment attendance. However, in contrast to stimulant use, younger age was associated with lower odds of abstinence from non-stimulant drugs at follow-up, regardless of treatment condition. These results suggest that STAGE-12 can be beneficial for some emerging adults

  9. The cc-bar and bb-bar spectroscopy in the two-step potential model

    International Nuclear Information System (INIS)

    Kulshreshtha, D.S.; Kaiserslautern Univ.

    1984-07-01

    We investigate the spectroscopy of the charmonium (cc-bar) and bottomonium (bb-bar) bound states in a static, flavour-independent, nonrelativistic quark-antiquark (qq-bar) two-step potential model proposed earlier. Our predictions are in good agreement with experimental data and with other theoretical predictions. (author)

  10. A permeation theory for single-file ion channels: one- and two-step models.

    Science.gov (United States)

    Nelson, Peter Hugo

    2011-04-28

    How many steps are required to model permeation through ion channels? This question is investigated by comparing one- and two-step models of permeation with experiment and MD simulation for the first time. In recent MD simulations, the observed permeation mechanism was identified as resembling a Hodgkin and Keynes knock-on mechanism with one voltage-dependent rate-determining step [Jensen et al., PNAS 107, 5833 (2010)]. These previously published simulation data are fitted to a one-step knock-on model that successfully explains the highly non-Ohmic current-voltage curve observed in the simulation. However, these predictions (and the simulations upon which they are based) are not representative of real channel behavior, which is typically Ohmic at low voltages. A two-step association/dissociation (A/D) model is then compared with experiment for the first time. This two-parameter model is shown to be remarkably consistent with previously published permeation experiments through the MaxiK potassium channel over a wide range of concentrations and positive voltages. The A/D model also provides a first-order explanation of permeation through the Shaker potassium channel, but it does not explain the asymmetry observed experimentally. To address this, a new asymmetric variant of the A/D model is developed using the present theoretical framework. It includes a third parameter that represents the value of the "permeation coordinate" (fractional electric potential energy) corresponding to the triply occupied state n of the channel. This asymmetric A/D model is fitted to published permeation data through the Shaker potassium channel at physiological concentrations, and it successfully predicts qualitative changes in the negative current-voltage data (including a transition to super-Ohmic behavior) based solely on a fit to positive-voltage data (that appear linear). The A/D model appears to be qualitatively consistent with a large group of published MD simulations, but no

  11. Inmate Prerelease Assessment (IPASS) Aftercare Placement Recommendation as a Predictor of Rural Inmate's 12-Step Attendance and Treatment Entry Postrelease

    Science.gov (United States)

    Oser, Carrie B.; Biebel, Elizabeth P.; Havens, Jennifer R.; Staton-Tindall, Michele; Knudsen, Hannah K.; Mooney, Jenny L.; Leukefeld, Carl G.

    2009-01-01

    The purpose of this study is to use the Criminal Justice Drug Abuse Treatment Studies' (CJ-DATS) Inmate Prerelease Assessment (IPASS), which recommends either intensive or nonintensive treatment after release, to predict rural offenders' 12-step attendance and treatment entry within six months of release from prison. IPASS scores indicated that…

  12. Body configuration at first stepping-foot contact predicts backward balance recovery capacity in people with chronic stroke.

    Science.gov (United States)

    de Kam, Digna; Roelofs, Jolanda M B; Geurts, Alexander C H; Weerdesteyn, Vivian

    2018-01-01

    To determine the predictive value of leg and trunk inclination angles at stepping-foot contact for the capacity to recover from a backward balance perturbation with a single step in people after stroke. Twenty-four chronic stroke survivors and 21 healthy controls were included in a cross-sectional study. We studied reactive stepping responses by subjecting participants to multidirectional stance perturbations at different intensities on a translating platform. In this paper we focus on backward perturbations. Participants were instructed to recover from the perturbations with maximally one step. A trial was classified as 'success' if balance was restored according to this instruction. We recorded full-body kinematics and computed: 1) body configuration parameters at first stepping-foot contact (leg and trunk inclination angles) and 2) spatiotemporal step parameters (step onset, step length, step duration and step velocity). We identified predictors of balance recovery capacity using a stepwise logistic regression. Perturbation intensity was also included as a predictor. The model with spatiotemporal parameters (perturbation intensity, step length and step duration) could correctly classify 85% of the trials as success or fail (Nagelkerke R2 = 0.61). In the body configuration model (Nagelkerke R2 = 0.71), perturbation intensity and leg and trunk angles correctly classified the outcome of 86% of the recovery attempts. The goodness of fit was significantly higher for the body configuration model than for the spatiotemporal model. Body configuration at stepping-foot contact is a valid and clinically feasible indicator of backward fall risk in stroke survivors, given its potential to be derived from a single sagittal screenshot.

  13. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    Science.gov (United States)

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are, however, no models to predict cardiac arrest in pediatric intensive care units, where the risk of an arrest is 10 times higher than in standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9
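
Steps 4 through 6 of the list, windowing a measured series and deriving latent time-series features, can be sketched as follows. The heart-rate series, window width, and feature set are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic vital sign: a stable baseline followed by deterioration (downward drift)
hr = np.concatenate([rng.normal(120, 2, 60),
                     120 - 0.8 * np.arange(60) + rng.normal(0, 2, 60)])

def window_features(series, width=15):
    """Split the series into fixed windows and compute latent features per window."""
    feats = []
    t = np.arange(width)
    for start in range(0, len(series) - width + 1, width):
        w = series[start:start + width]
        slope = np.polyfit(t, w, 1)[0]  # within-window trend
        feats.append({"mean": w.mean(), "slope": slope, "sd": w.std()})
    return feats

feats = window_features(hr)
print("last-window slope:", round(feats[-1]["slope"], 2))
```

The per-window slope is the kind of trend feature that distinguishes a deteriorating patient from a stable one, which a snapshot multivariable model cannot capture.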

  14. Addiction Recovery: 12-Step Programs and Cognitive-Behavioral Psychology.

    Science.gov (United States)

    Bristow-Braitman, Ann

    1995-01-01

    Provides helping professionals with an overview of treatment issues referred to as spiritual by those recovering from alcohol and drug addictions through 12-step programs. Reviews conflicts between academically trained helping professionals and researchers, and those advocating spiritually oriented treatment programs. Discusses spiritual…

  15. Development of a three dimensional circulation model based on fractional step method

    Directory of Open Access Journals (Sweden)

    Mazen Abualtayef

    2010-03-01

    Full Text Available A numerical model was developed for simulating three-dimensional multilayer hydrodynamics and thermodynamics in domains with irregular bottom topography. The model was designed for examining the interactions between flow and topography. The model was based on the three-dimensional Navier-Stokes equations and was solved using the fractional step method, which combines the finite difference method in the horizontal plane and the finite element method in the vertical plane. The numerical techniques were described and the model test and application were presented. For the model application to the northern part of Ariake Sea, the hydrodynamic and thermodynamic results were predicted. The numerically predicted amplitudes and phase angles agreed well with the field observations.
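
The splitting idea behind a fractional step scheme can be illustrated in one dimension, with advection and diffusion advanced in separate sub-steps per time step. This is a minimal explicit finite-difference sketch on a periodic domain, an assumption-level stand-in for the paper's 3D finite-difference/finite-element scheme:

```python
import numpy as np

# 1D advection-diffusion solved with a two-stage fractional step scheme:
# stage 1 advects (upwind), stage 2 diffuses (explicit), each time step.
nx, dx, dt = 100, 0.01, 0.004
u, D = 1.0, 0.001                      # advection speed, diffusivity
x = np.arange(nx) * dx
c = np.exp(-((x - 0.2) / 0.05) ** 2)   # initial Gaussian pulse

for _ in range(100):                   # advance to t = 0.4
    c = c - u * dt / dx * (c - np.roll(c, 1))                           # advection step
    c = c + D * dt / dx**2 * (np.roll(c, -1) - 2 * c + np.roll(c, 1))   # diffusion step

peak = x[np.argmax(c)]
print(f"pulse peak moved to x = {peak:.2f}")
```

Both sub-steps satisfy their own stability limits (CFL 0.4 for advection, diffusion number 0.04), which is the practical appeal of splitting the operators.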

  16. Biomechanical influences on balance recovery by stepping.

    Science.gov (United States)

    Hsiao, E T; Robinovitch, S N

    1999-10-01

    Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.
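
The predicted coupling between step length and step execution time can be illustrated with a minimal point-mass inverted pendulum and a capture-point-style success condition. This is an assumption-laden stand-in for the authors' pendulum-spring model, not a reimplementation; the perturbation magnitude and success criterion are invented:

```python
import math

g, L = 9.81, 1.0   # gravity [m/s^2], effective pendulum (leg) length [m]

def com_state(omega0, t_contact, dt=1e-4):
    """Integrate an inverted pendulum from upright until foot contact."""
    th, om = 0.0, omega0
    for _ in range(int(t_contact / dt)):
        om += (g / L) * math.sin(th) * dt
        th += om * dt
    return th, om

def step_succeeds(step_length, t_contact, omega0=0.4):
    th, om = com_state(omega0, t_contact)
    # Success heuristic: the foot must land at or beyond the extrapolated
    # center of mass (a capture-point-style condition, assumed here in place
    # of the paper's pendulum-spring success criterion)
    xcom = L * math.sin(th) + om * L / math.sqrt(g / L)
    return step_length >= xcom

ok_quick = step_succeeds(0.30, t_contact=0.20)  # prompt step of moderate length
ok_slow = step_succeeds(0.30, t_contact=0.60)   # same length, delayed contact
print(f"quick step succeeds: {ok_quick}, delayed step succeeds: {ok_slow}")
```

The same step length that recovers balance when executed quickly fails when contact is delayed, matching the paper's point that declines in one capacity must be offset by the others.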

  17. Critical flux determination by flux-stepping

    DEFF Research Database (Denmark)

    Beier, Søren; Jonsson, Gunnar Eigil

    2010-01-01

    In membrane filtration related scientific literature, step-by-step determined critical fluxes are often reported. Using a dynamic microfiltration device, it is shown that critical fluxes determined from two different flux-stepping methods depend upon operational parameters such as step length, step height, and flux start level. Filtrating 8 kg/m(3) yeast cell suspensions with a vibrating 0.45 x 10(-6) m pore size microfiltration hollow fiber module, critical fluxes from 5.6 x 10(-6) to 1.2 x 10(-5) m/s have been measured using various step lengths from 300 to 1200 seconds. Thus, such values are more or less useless in themselves as critical flux predictors, and constant-flux verification experiments have to be conducted to check whether the determined critical fluxes can predict sustainable flux regimes. However, it is shown that using the step-by-step predicted critical fluxes as start

  18. A simple one-step chemistry model for partially premixed hydrocarbon combustion

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez-Tarrazo, Eduardo [Instituto Nacional de Tecnica Aeroespacial, Madrid (Spain); Sanchez, Antonio L. [Area de Mecanica de Fluidos, Universidad Carlos III de Madrid, Leganes 28911 (Spain); Linan, Amable [ETSI Aeronauticos, Pl. Cardenal Cisneros 3, Madrid 28040 (Spain); Williams, Forman A. [Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA 92093-0411 (United States)

    2006-10-15

    This work explores the applicability of one-step irreversible Arrhenius kinetics with unity reaction order to the numerical description of partially premixed hydrocarbon combustion. Computations of planar premixed flames are used in the selection of the three model parameters: the heat of reaction q, the activation temperature T{sub a}, and the preexponential factor B. It is seen that changes in q with equivalence ratio φ need to be introduced in fuel-rich combustion to describe the effect of partial fuel oxidation on the amount of heat released, leading to a universal linear variation q(φ) for φ>1 for all hydrocarbons. The model also employs a variable activation temperature T{sub a}(φ) to mimic changes in the underlying chemistry in rich and very lean flames. The resulting chemistry description is able to reproduce propagation velocities of diluted and undiluted flames accurately over the whole flammability range. Furthermore, computations of methane-air counterflow diffusion flames are used to test the proposed chemistry under nonpremixed conditions. The model not only predicts the critical strain rate at extinction accurately but also gives near-extinction flames with oxygen leakage, thereby overcoming known predictive limitations of one-step Arrhenius kinetics. (author)
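
The model's ingredients, a unity-reaction-order Arrhenius rate and a heat release that falls linearly on the rich side, can be written down directly. All numerical values below are illustrative assumptions, not the calibrated parameters from the paper:

```python
import numpy as np

# One-step irreversible Arrhenius rate w = B * Y_fuel * exp(-Ta / T), with the
# heat release q decreasing linearly for rich mixtures (phi > 1), as described
# in the abstract. B, Ta, q_st, and the rich-side slope are assumed values.
B, Ta = 5.0e9, 15000.0   # pre-exponential factor [1/s], activation temperature [K]
q_st = 50.0              # stoichiometric heat of reaction [MJ/kg fuel], assumed

def q_of_phi(phi):
    # Rich mixtures release less heat per unit fuel (partial oxidation)
    return q_st if phi <= 1.0 else q_st * (1.0 - 0.35 * (phi - 1.0))

def rate(Y_fuel, T):
    return B * Y_fuel * np.exp(-Ta / T)

print(f"q(0.8) = {q_of_phi(0.8):.1f}, q(1.4) = {q_of_phi(1.4):.1f} MJ/kg")
print(f"rate ratio 1800K/1500K = {rate(0.05, 1800) / rate(0.05, 1500):.1f}")
```

The strong temperature sensitivity of the rate (a factor of about five between 1500 K and 1800 K here) is what makes the choice of T{sub a} the dominant tuning knob of such models.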

  19. Rapid response predicts 12-month post-treatment outcomes in binge-eating disorder: theoretical and clinical implications

    Science.gov (United States)

    Grilo, C. M.; White, M. A.; Wilson, G. T.; Gueorguieva, R.; Masheb, R. M.

    2011-01-01

    Background We examined rapid response in obese patients with binge-eating disorder (BED) in a clinical trial testing cognitive behavioral therapy (CBT) and behavioral weight loss (BWL). Method Altogether, 90 participants were randomly assigned to CBT or BWL. Assessments were performed at baseline, throughout and post-treatment and at 6- and 12-month follow-ups. Rapid response, defined as ≥70% reduction in binge eating by week four, was determined by receiver operating characteristic curves and used to predict outcomes. Results Rapid response characterized 57% of participants (67% of CBT, 47% of BWL) and was unrelated to most baseline variables. Rapid response predicted greater improvements across outcomes but had different prognostic significance and distinct time courses for CBT versus BWL. Patients receiving CBT did comparably well regardless of rapid response in terms of reduced binge eating and eating disorder psychopathology but did not achieve weight loss. Among patients receiving BWL, those without rapid response failed to improve further. However, those with rapid response were significantly more likely to achieve binge-eating remission (62% v. 13%) and greater reductions in binge-eating frequency, eating disorder psychopathology and weight loss. Conclusions Rapid response to treatment in BED has prognostic significance through 12-month follow-up, provides evidence for treatment specificity and has clinical implications for stepped-care treatment models for BED. Rapid responders who receive BWL benefit in terms of both binge eating and short-term weight loss. Collectively, these findings suggest that BWL might be a candidate for initial intervention in stepped-care models with an evaluation of progress after 1 month to identify non-rapid responders who could be advised to consider a switch to a specialized treatment. PMID:21923964

  20. One-Step-Ahead Predictive Control for Hydroturbine Governor

    Directory of Open Access Journals (Sweden)

    Zhihuai Xiao

    2015-01-01

    Full Text Available The hydroturbine generator regulating system can be considered a single system integrating water, machine, and electricity. It is a complex and nonlinear system, and its configuration and parameters are time-dependent. A one-step-ahead predictive control based on on-line trained neural networks (NNs) for a hydroturbine governor with variation in gate position is described in this paper. The proposed control algorithm consists of a one-step-ahead neuropredictor that tracks the dynamic characteristics of the plant and predicts its output, and a neurocontroller that generates the optimal control signal. The weights of the two NNs, initially trained off-line, are updated on-line according to the scalar error. The proposed controller can thus track operating conditions in real time and produce the optimal control signal over a wide operating range. Only the inputs and outputs of the generator are measured, and there is no need to determine the other states of the generator. Simulations have been performed with varying operating conditions and different disturbances to compare the performance of the proposed controller with that of a conventional PID controller and to validate the feasibility of the proposed approach.
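
The on-line neuropredictor idea, one gradient update per sample so the network keeps tracking the plant in real time, can be sketched with a toy nonlinear plant. The plant equation, network size, and learning rate are assumptions, not the hydroturbine model from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy nonlinear plant standing in for the governor dynamics (assumed; the
# abstract does not specify the plant equations)
def plant(y, u):
    return 0.8 * np.tanh(y) + 0.4 * u

# One-hidden-layer neuropredictor; weights get one on-line SGD step per sample
W1, b1 = rng.normal(0, 0.5, (8, 2)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, 8), 0.0
lr = 0.05

y, errs = 0.0, []
for t in range(3000):
    u = np.sin(t / 30)                  # excitation signal (gate command)
    x = np.array([y, u])
    h = np.tanh(W1 @ x + b1)
    y_hat = W2 @ h + b2                 # one-step-ahead prediction
    y_next = plant(y, u)                # measured plant output
    e = y_hat - y_next                  # scalar prediction error
    grad_h = e * W2 * (1 - h**2)        # backprop before touching the weights
    W2 -= lr * e * h
    b2 -= lr * e
    W1 -= lr * np.outer(grad_h, x)
    b1 -= lr * grad_h
    errs.append(abs(float(e)))
    y = y_next

print(f"mean |error|, first 100 samples: {np.mean(errs[:100]):.3f}, "
      f"last 100 samples: {np.mean(errs[-100:]):.4f}")
```

Because each sample triggers exactly one weight update, the predictor adapts continuously, which is the property the paper relies on for time-varying plant parameters.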

  1. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  2. A simplified baseline prediction model for joint damage progression in rheumatoid arthritis: a step toward personalized medicine.

    Science.gov (United States)

    de Punder, Yvonne M R; van Riel, Piet L C M; Fransen, Jaap

    2015-03-01

    To compare the performance of an extended model and a simplified prognostic model for joint damage in rheumatoid arthritis (RA) based on 3 baseline risk factors: anticyclic citrullinated peptide antibodies (anti-CCP), erosions, and acute-phase reaction. Data were used from the Nijmegen early RA cohort. An extended model and a simplified baseline prediction model were developed to predict joint damage progression between 0 and 3 years. Joint damage progression was assessed using the Ratingen score. In the extended model, the prediction factors were positivity for anti-CCP and/or rheumatoid factor, the level of erythrocyte sedimentation rate, and the quantity of erosions. The prediction score was calculated as the sum of the regression coefficients. In the simplified model, the prediction factors were dichotomized and the number of risk factors was counted. The performance of both models was compared using discrimination and calibration. The models were internally validated using bootstrapping. The extended model resulted in a prediction score between 0 and 5.6 with an area under the receiver-operating characteristic (ROC) curve of 0.77 (95% CI 0.72-0.81). The simplified model resulted in a prediction score between 0 and 3. This model had an area under the ROC curve of 0.75 (95% CI 0.70-0.80). In internal validation, the 2 models showed reasonably good agreement between observed and predicted probabilities for joint damage progression (Hosmer-Lemeshow test p > 0.05 and calibration slope near 1.0). A simple prediction model for joint damage progression in early RA, obtained by only counting the number of risk factors, has adequate performance. This facilitates the translation of theoretical prognostic models to daily clinical practice.
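
Counting dichotomized risk factors and checking discrimination with the area under the ROC curve can be sketched as follows. The cohort is simulated (effect directions from the abstract, magnitudes assumed), not the Nijmegen data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated early-RA-style cohort with three dichotomized baseline risk factors
n = 1000
antibodies = rng.integers(0, 2, n)  # anti-CCP and/or rheumatoid factor positive
erosions = rng.integers(0, 2, n)
high_esr = rng.integers(0, 2, n)
score = antibodies + erosions + high_esr  # simplified model: a 0-3 count
p = 1 / (1 + np.exp(-(-2.5 + 1.2 * score)))
progressed = rng.uniform(size=n) < p

def auc(scores, labels):
    # AUC as the probability that a random case outranks a random control
    # (ties count one half), equivalent to the area under the ROC curve
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)

print(f"AUC of the 0-3 risk-factor count: {auc(score, progressed):.2f}")
```

Even a crude 4-level count discriminates reasonably here, which is the abstract's point: the simplified model gives up little AUC relative to a weighted score.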

  3. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (α) and layer thickness (L) on the dimensional performance of FDM parts, using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole α range from 0° to 177° at 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior over a wide range of deposition angles.

  4. Predicting severe injury using vehicle telemetry data.

    Science.gov (United States)

    Ayoung-Chee, Patricia; Mack, Christopher D; Kaufman, Robert; Bulger, Eileen

    2013-01-01

    In 2010, the National Highway Traffic Safety Administration standardized collision data collected by event data recorders, which may help determine appropriate emergency medical service (EMS) response. Previous models (e.g., General Motors) predict severe injury (Injury Severity Score [ISS] > 15) using occupant demographics and collision data. Occupant information is not automatically available, and 12% of calls from advanced automatic collision notification providers are unanswered. To better inform EMS triage, our goal was to create a predictive model using only vehicle collision data. Using the National Automotive Sampling System Crashworthiness Data System data set, we included front-seat occupants in late-model vehicles (2000 and later) in nonrollover and rollover crashes in years 2000 to 2010. Telematic (change in velocity, direction of force, seat belt use, vehicle type and curb weight, as well as multiple impact) and nontelematic variables (maximum intrusion, narrow impact, and passenger ejection) were included. Missing data were multiply imputed. The University of Washington model was tested to predict severe injury before application of guidelines (Step 0) and for occupants who did not meet Steps 1 and 2 criteria (Step 3) of the Centers for Disease Control and Prevention Field Triage Guidelines. A probability threshold of 20% was chosen in accordance with Centers for Disease Control and Prevention recommendations. There were 28,633 crashes, involving 33,956 vehicles and 52,033 occupants, of whom 9.9% had severe injury. At Step 0, the University of Washington model sensitivity was 40.0% and positive predictive value (PPV) was 20.7%. At Step 3, the sensitivity was 32.3% and PPV was 10.1%. Model analysis excluding nontelematic variables decreased sensitivity and PPV. The sensitivity of the re-created General Motors model was 38.5% at Step 0 and 28.1% at Step 3. We designed a model using only vehicle collision data that was predictive of severe injury at

  5. 12-Step participation reduces medical use costs among adolescents with a history of alcohol and other drug treatment.

    Science.gov (United States)

    Mundt, Marlon P; Parthasarathy, Sujaya; Chi, Felicia W; Sterling, Stacy; Campbell, Cynthia I

    2012-11-01

    Adolescents who attend 12-step groups following alcohol and other drug (AOD) treatment are more likely to remain abstinent and to avoid relapse post-treatment. We examined whether 12-step attendance is also associated with a corresponding reduction in health care use and costs. We used difference-in-difference analysis to compare changes in seven-year follow-up health care use and costs by changes in 12-step participation. Four Kaiser Permanente Northern California AOD treatment programs enrolled 403 adolescents, 13-18-years old, into a longitudinal cohort study upon AOD treatment entry. Participants self-reported 12-step meeting attendance at six-month, one-year, three-year, and five-year follow-up. Outcomes included counts of hospital inpatient days, emergency room (ER) visits, primary care visits, psychiatric visits, AOD treatment costs and total medical care costs. Each additional 12-step meeting attended was associated with an incremental medical cost reduction of 4.7% during seven-year follow-up. The medical cost offset was largely due to reductions in hospital inpatient days, psychiatric visits, and AOD treatment costs. We estimate total medical use cost savings at $145 per year (in 2010 U.S. dollars) per additional 12-step meeting attended. The findings suggest that 12-step participation conveys medical cost offsets for youth who undergo AOD treatment. Reduced costs may be related to improved AOD outcomes due to 12-step participation, improved general health due to changes in social network following 12-step participation, or better compliance to both AOD treatment and 12-step meetings. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  6. The 12 Steps of Addiction Recovery Programs as an influence on leadership development: a personal narrative

    Directory of Open Access Journals (Sweden)

    Friedman Mitchell

    2016-12-01

    Full Text Available My participation in a 12-step addiction program based on the principles and traditions of Alcoholics Anonymous (AA) has been critical for my leadership development. As I worked to refrain from addictive behaviors and practiced 12-step principles, I experienced a shift from individualistic, self-centered leadership toward a servant-leader orientation. I thus consider the 12-step recovery process, which commenced in 2001, a leadership formative experience (LFE), as it had the greatest influence on my subsequent development. My experience of thinking about and rethinking my life in reference to leadership and followership lends itself to a personal inquiry. It draws on work on the 12 steps; self-assessments and personal journal entries; and memories of life events. I aim to contribute to the leadership development literature by exploring the influence of participation in a 12-step recovery program and posing it as an LFE, subjects that have received little attention.

  7. OTA-Grapes: A Mechanistic Model to Predict Ochratoxin A Risk in Grapes, a Step beyond the Systems Approach

    Directory of Open Access Journals (Sweden)

    Battilani Paola

    2015-08-01

    Full Text Available Ochratoxin A (OTA) is a fungal metabolite dangerous for human and animal health due to its nephrotoxic, immunotoxic, mutagenic, teratogenic and carcinogenic effects; it is classified by the International Agency for Research on Cancer in Group 2B, possible human carcinogen. This toxin has been reported as a wine contaminant since 1996. The aim of this study was to develop a conceptual model for the dynamic simulation of the A. carbonarius life cycle in grapes along the growing season, including OTA production in berries. Functions describing the role of weather parameters in each step of the infection cycle were developed and organized in a prototype model called OTA-grapes. In modelling the influence of temperature on OTA production, it emerged that fungal strains can be separated into two clusters, based on the dynamics of OTA production and on the optimal temperature. Therefore, two functions were developed, and based on statistical data analysis it was assumed that the two types of strains contribute equally to the population. Model validation was not possible because of the scarcity of OTA contamination data, but relevant differences in OTA-I, the output index of the model, were observed between low- and high-risk areas. To our knowledge, this is the first attempt to assess/model A. carbonarius in order to predict the risk of OTA contamination in grapes.

  8. Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model

    Directory of Open Access Journals (Sweden)

    C. Narendra Babu

    2015-07-01

    Full Text Available Accurate long-term prediction of time series data (TSD) is a useful research challenge in diverse fields. As financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. The two challenges encountered are maintaining high prediction accuracy and preserving the data trend across the forecast horizon. Linear traditional models such as the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) models preserve the data trend to some extent, at the cost of prediction accuracy. Non-linear models such as artificial neural networks (ANNs) maintain prediction accuracy by sacrificing the data trend. In this paper, a linear hybrid model, which maintains prediction accuracy while preserving the data trend, is proposed. A quantitative reasoning analysis justifying the accuracy of the proposed model is also presented. A moving-average (MA) filter based pre-processing step and a partitioning and interpolation (PI) technique are incorporated in the proposed model. Some existing models and the proposed model are applied to selected NSE India stock market data. Performance results show that for multi-step-ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and preservation of the data trend.
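As a minimal illustration of the MA-filter pre-processing the hybrid model builds on (the series and window size are invented; this is not the paper's full partitioning-interpolation pipeline):

```python
# Centered moving-average (MA) filter used to smooth a series before model
# fitting; edge points are left unsmoothed. Illustrative data only.

def moving_average(series, window):
    """Return the series with a centered moving average of odd width applied."""
    half = window // 2
    out = list(series)
    for i in range(half, len(series) - half):
        out[i] = sum(series[i - half:i + half + 1]) / window
    return out

prices = [10, 12, 11, 13, 15, 14, 16, 18]  # hypothetical closing prices
smooth = moving_average(prices, 3)
print(smooth)
```

The smoothed series keeps the underlying trend while damping short-term noise, which is what lets a linear model fitted to it extrapolate the trend over several steps.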

  9. Predictive modeling of Bifidobacterium animalis subsp. lactis Bb-12 growth in cow’s, goat’s and soy milk

    Directory of Open Access Journals (Sweden)

    Vedran Slačanac

    2013-11-01

    Full Text Available The aim of this study was to use a predictive model to analyse the growth of the probiotic strain Bifidobacterium animalis subsp. lactis Bb-12 in cow's, goat's and soy milk. The Gompertz model was used, and the suitability of the model was estimated by the Schnute algorithm. In addition to the analysis of Bifidobacterium animalis subsp. lactis Bb-12 growth, the Gompertz model was also used to analyse pH changes during the fermentation process. Experimental results, as well as the values of the kinetic parameters obtained in this study, showed that the highest growth rate of Bifidobacterium animalis subsp. lactis Bb-12 was obtained in goat's milk, and the lowest in soy milk. In contrast to the growth of Bifidobacterium animalis subsp. lactis Bb-12, pH decreased faster in soy milk than in cow's milk; the highest rate of pH decrease was nevertheless observed in goat's milk, in agreement with various previous studies. The Gompertz model proved to be highly suitable for analysing the course and kinetics of fermentation in these three kinds of milk, and might be used to analyse the growth kinetics of other probiotic and starter cultures in milk.
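The Gompertz curve used here can be sketched as follows, in the common Zwietering parameterization; the parameter values are illustrative, not the study's fitted values.

```python
# Modified Gompertz growth curve (Zwietering parameterization):
#   y(t) = A * exp(-exp(mu*e/A * (lam - t) + 1))
# A = asymptotic level, mu = maximum growth rate, lam = lag time.
import math

def gompertz(t, A, mu, lam):
    return A * math.exp(-math.exp(mu * math.e / A * (lam - t) + 1))

A, mu, lam = 9.0, 0.8, 2.0  # hypothetical values (log CFU/mL, 1/h, h)
for t in (0, 4, 8, 24):
    print(t, round(gompertz(t, A, mu, lam), 3))
```

The same sigmoidal form, applied to a transformed response, can describe the pH decline during fermentation.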

  10. Modelling and Fixed Step Simulation of a Turbo Charged Diesel Engine

    OpenAIRE

    Ritzén, Jesper

    2003-01-01

    Having an engine model that is accurate but not too complicated is desirable when working with on-board diagnosis or engine control. In this thesis a four-state mean-value model is introduced. To make the model usable in an on-line automotive application, it is discrete and simulated with a fixed-step-size solver. Modelling is done with simplicity as the main objective. Some simple static models are also presented. To validate the model, measurements were carried out in a Scania R124LB truck with a 12 lit...

  11. Rotordynamic analysis for stepped-labyrinth gas seals using Moody's friction-factor model

    International Nuclear Information System (INIS)

    Ha, Tae Woong

    2001-01-01

    The governing equations are derived for the analysis of a stepped labyrinth gas seal generally used in high-performance compressors, gas turbines, and steam turbines. Bulk flow is assumed for a single-cavity control volume set up in a stepped labyrinth cavity, and the flow is assumed to be completely turbulent in the circumferential direction. Moody's wall-friction-factor model is used for the calculation of wall shear stresses in the single-cavity control volume. For the reaction force developed by the stepped labyrinth gas seal, linearized zeroth-order and first-order perturbation equations are developed for small motion about a centered position. Integration of the resultant first-order pressure distribution along and around the seal defines the rotordynamic coefficients of the stepped labyrinth gas seal. The resulting leakage and rotordynamic characteristics of the stepped labyrinth gas seal are presented and compared with Scharrer's theoretical analysis, which uses Blasius' wall-friction-factor model. The present analysis shows good qualitative agreement of leakage characteristics with Scharrer's analysis, but underpredicts leakage by about 20%. For the rotordynamic coefficients, the present analysis generally yields smaller predicted values than Scharrer's analysis.

  12. Step-by-Step Model for the Study of the Apriori Algorithm for Predictive Analysis

    Directory of Open Access Journals (Sweden)

    Daniel Grigore ROŞCA

    2015-06-01

    Full Text Available The goal of this paper was to develop an education-oriented application based on the data mining Apriori algorithm which facilitates both the research and the study of data mining by graduate students. The application can be used to discover interesting patterns in a corpus of data and to measure the impact of problem constraints (the values of the support and confidence variables, or the size of the transactional database) on the speed of execution. The paper presents a brief overview of the Apriori algorithm, aspects of the implementation of the algorithm using a step-by-step process, a discussion of the education-oriented user interface, and the process of data mining a test transactional database. The impact of some constraints on the speed of the algorithm is also experimentally measured, without a systematic review of different approaches to increasing execution speed. Possible applications of the implementation, as well as its limits, are briefly reviewed.
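In the same educational spirit, a minimal Apriori frequent-itemset miner fits in a few lines; the transactions and support threshold below are invented, and this is not the application's actual code.

```python
# Minimal Apriori: find all itemsets whose support count meets min_support.
# The subset-based candidate pruning of full Apriori is omitted for brevity.
from itertools import combinations

def apriori(transactions, min_support):
    """Return {frozenset(itemset): support count} for all frequent itemsets."""
    current = {frozenset([item]) for t in transactions for item in t}
    frequent = {}
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Join step: combine frequent k-itemsets that differ in exactly one item
        current = {a | b for a, b in combinations(level, 2)
                   if len(a | b) == len(a) + 1}
    return frequent

tx = [frozenset(t) for t in (["bread", "milk"],
                             ["bread", "beer"],
                             ["bread", "milk", "beer"],
                             ["milk"])]
freq = apriori(tx, min_support=2)
print(sorted((sorted(s), n) for s, n in freq.items()))
```

Raising `min_support` shrinks each level and ends the loop sooner, which is exactly the support/speed trade-off the paper measures.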

  13. Toward the prediction of class I and II mouse major histocompatibility complex-peptide-binding affinity: in silico bioinformatic step-by-step guide using quantitative structure-activity relationships.

    Science.gov (United States)

    Hattotuwagama, Channa K; Doytchinova, Irini A; Flower, Darren R

    2007-01-01

    Quantitative structure-activity relationship (QSAR) analysis is a cornerstone of modern informatics. Predictive computational models of peptide-major histocompatibility complex (MHC)-binding affinity based on QSAR technology have now become important components of modern computational immunovaccinology. Historically, such approaches have been built around semiqualitative, classification methods, but these are now giving way to quantitative regression methods. We review three methods: a 2D-QSAR additive partial least squares (PLS) method and a 3D-QSAR comparative molecular similarity index analysis (CoMSIA) method, which can identify the sequence dependence of peptide-binding specificity for various class I MHC alleles from the reported binding affinities (IC50) of peptide sets; and an iterative self-consistent (ISC) PLS-based additive method, a recently developed extension of the additive method for the affinity prediction of class II peptides. The QSAR methods presented here have established themselves as immunoinformatic techniques complementary to existing methodology, useful in the quantitative prediction of binding affinity: current methods for the in silico identification of T-cell epitopes (which form the basis of many vaccines, diagnostics, and reagents) rely on the accurate computational prediction of peptide-MHC affinity. We have reviewed various human and mouse class I and class II allele models. Studied alleles comprise HLA-A*0101, HLA-A*0201, HLA-A*0202, HLA-A*0203, HLA-A*0206, HLA-A*0301, HLA-A*1101, HLA-A*3101, HLA-A*6801, HLA-A*6802, HLA-B*3501, H2-K(k), H2-K(b), H2-D(b), HLA-DRB1*0101, HLA-DRB1*0401, HLA-DRB1*0701, I-A(b), I-A(d), I-A(k), I-A(S), I-E(d), and I-E(k). In this chapter we give a step-by-step guide to building these models and assessing their reliability; the resulting models represent an advance on existing methods. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen).

  14. Alcohol and drug treatment involvement, 12-step attendance and abstinence: 9-year cross-lagged analysis of adults in an integrated health plan.

    Science.gov (United States)

    Witbrodt, Jane; Ye, Yu; Bond, Jason; Chi, Felicia; Weisner, Constance; Mertens, Jennifer

    2014-04-01

    This study explored causal relationships between post-treatment 12-step attendance and abstinence at multiple data waves and examined indirect paths leading from treatment initiation to abstinence 9 years later. Adults (N = 1945) seeking help for alcohol or drug use disorders from integrated healthcare organization outpatient treatment programs were followed at 1, 5, 7 and 9 years. Path modeling with cross-lagged partial regression coefficients was used to test causal relationships. Cross-lagged paths indicated that greater 12-step attendance during years 1 and 5 was causally related to past-30-day abstinence at years 5 and 7, respectively, suggesting that 12-step attendance leads to abstinence (but not vice versa) well into the post-treatment period. Some gender differences were found in these relationships. Three significant time-lagged, indirect paths emerged linking treatment duration to year-9 abstinence. Conclusions are discussed in the context of other studies using longitudinal designs. For outpatient clients, results reinforce the value of lengthier treatment duration and 12-step attendance in year 1. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Influence of step complexity and presentation style on step performance of computerized emergency operating procedures

    Energy Technology Data Exchange (ETDEWEB)

    Xu Song [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China); Li Zhizhong [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China)], E-mail: zzli@tsinghua.edu.cn; Song Fei; Luo Wei; Zhao Qianyi; Salvendy, Gavriel [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China)

    2009-02-15

    With the development of information technology, computerized emergency operating procedures (EOPs) are taking the place of paper-based ones. However, the ergonomics issues of computerized EOPs have not been studied adequately, since industrial practice is still quite limited. This study examined the influence of the step complexity and presentation style of EOPs on step performance. A simulated computerized EOP system was developed in two presentation styles. Style A: a combination of one- and two-dimensional flowcharts; Style B: a combination of a two-dimensional flowchart and a success logic tree. Step complexity was quantified by a complexity-measure model based on an entropy concept. Forty subjects participated in an experiment of EOP execution using the simulated system. Analysis of the experimental data indicates that step complexity and presentation style significantly influence step performance (both step error rate and operation time). Regression models were also developed. The regression results imply that the operation time of a step can be well predicted by step complexity, whereas the step error rate can only be partly predicted by it. A questionnaire investigation implies that the step error rate was influenced not only by the operation task itself but also by other human factors. These findings may be useful for the design and assessment of computerized EOPs.
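The entropy idea behind such complexity measures can be illustrated with a plain Shannon-entropy sketch; the element categories and counts below are hypothetical, and this is not the paper's exact complexity-measure model.

```python
# Shannon entropy as a step-complexity proxy: a step whose elements are spread
# over more categories carries more information and is "more complex".
# Generic sketch only -- not the paper's exact complexity-measure model.
import math

def shannon_entropy(counts):
    """H = -sum p*log2(p) over the non-zero element-category counts of a step."""
    total = sum(counts)
    return sum(-(c / total) * math.log2(c / total) for c in counts if c)

simple_step = [4, 0, 0]   # four elements, all in one category (hypothetical)
complex_step = [2, 2, 2]  # six elements spread evenly over three categories
print(shannon_entropy(simple_step), shannon_entropy(complex_step))
```

A step with all elements of one kind scores zero, while an even mix over three kinds scores log2(3) ≈ 1.58 bits.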

  16. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    Full Text Available We propose a two-step variable selection procedure for high-dimensional quantile regression, in which the dimension of the covariates, pn, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty; we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to one whose size has the same order as that of the true model, and that the selected model covers the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite-sample performance of the proposed approach.
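The ℓ1 penalty that drives both steps acts coordinate-wise as soft-thresholding, which is what sets weak coefficients exactly to zero and screens out covariates; a minimal sketch with invented numbers:

```python
# Coordinate-wise view of the l1 (LASSO) penalty: each coefficient is
# soft-thresholded, so weak coordinates are set exactly to zero -- the
# mechanism behind the first-step screening. Values are invented.

def soft_threshold(z, lam):
    """Solve argmin_b 0.5*(b - z)**2 + lam*abs(b)."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

raw = [2.5, -0.3, 0.1, -1.8]  # hypothetical unpenalized coordinate values
kept = [soft_threshold(z, lam=0.5) for z in raw]
print(kept)  # weak coordinates collapse to exactly 0.0
```

The adaptive LASSO of the second step differs only in giving each coordinate its own data-driven penalty `lam`, shrinking likely-irrelevant covariates harder.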

  17. Improving stability of prediction models based on correlated omics data by using network approaches.

    Directory of Open Access Journals (Sweden)

    Renaud Tissier

    Full Text Available Building prediction models based on complex omics datasets such as transcriptomics, proteomics and metabolomics data remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection in which the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are (1) network construction, (2) clustering to empirically derive modules or pathways, and (3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally we illustrate the advantages of our approach by applying the methodology to two problems: prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM) and prediction of the response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell lines pharmacogenomics dataset.

  18. Diffraction model of a step-out transition

    Energy Technology Data Exchange (ETDEWEB)

    Chao, A.W.; Zimmermann, F.

    1996-06-01

    The diffraction model of a cavity, suggested by Lawson, Bane and Sands, is generalized to a step-out transition. Using this model, the high-frequency impedance is calculated explicitly for the case in which the transition step is small compared with the beam pipe radius. In the diffraction model for a small step-out transition, the total energy is conserved, but, unlike the cavity case, the diffracted waves in the geometric shadow and the pipe region in general do not carry equal energy. In the limit of small step sizes, the impedance derived from the diffraction model agrees with that found by Balakin and Novokhatsky and also by Kheifets. This impedance can be used to compute the wake field of a round collimator whose half aperture is much larger than the bunch length, such as exists in the SLC final focus.

  19. Predicting algal growth inhibition toxicity: three-step strategy using structural and physicochemical properties.

    Science.gov (United States)

    Furuhama, A; Hasunuma, K; Hayashi, T I; Tatarazako, N

    2016-05-01

    We propose a three-step strategy that uses structural and physicochemical properties of chemicals to predict their 72 h algal growth inhibition toxicities against Pseudokirchneriella subcapitata. In Step 1, using a log D-based criterion and structural alerts, we produced an interspecies QSAR between algal and acute daphnid toxicities for initial screening of chemicals. In Step 2, we categorized chemicals according to the Verhaar scheme for aquatic toxicity, and we developed QSARs for toxicities of Class 1 (non-polar narcotic) and Class 2 (polar narcotic) chemicals by means of simple regression with a hydrophobicity descriptor and multiple regression with a hydrophobicity descriptor and a quantum chemical descriptor. Using the algal toxicities of the Class 1 chemicals, we proposed a baseline QSAR for calculating their excess toxicities. In Step 3, we used structural profiles to predict toxicity either quantitatively or qualitatively and to assign chemicals to the following categories: Pesticide, Reactive, Toxic, Toxic low and Uncategorized. Although this three-step strategy cannot be used to estimate the algal toxicities of all chemicals, it is useful for chemicals within its domain. The strategy is also applicable as a component of Integrated Approaches to Testing and Assessment.

  20. Humility and 12-Step Recovery: A Prolegomenon for the Empirical Investigation of a Cardinal Virtue in Alcoholics Anonymous.

    Science.gov (United States)

    Post, Stephen G; Pagano, Maria E; Lee, Matthew T; Johnson, Byron R

    Alcoholics Anonymous (AA) offers a live stage to study how humility is worn by thousands for another day of sobriety and more freedom from the bondage of self. It has been the coauthors' intent to emphasize the significance of humility as a cardinal virtue across the 12-Step program and as essential to all its key elements. The coauthors have placed this emphasis in the context of a wider theological history of thought as this converged on Bill W. and AA. In addition, the coauthors have offered a constructive developmental interpretation of the 12 Steps that relies on a model of four modulations of humility. Finally, the coauthors have reviewed in brief some approaches to the measurement of humility in this context, and suggest several aims for future research.

  1. Predictive models for PEM-electrolyzer performance using adaptive neuro-fuzzy inference systems

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Steffen [University of Tasmania, Hobart 7001, Tasmania (Australia); Karri, Vishy [Australian College of Kuwait (Kuwait)

    2010-09-15

    Predictive models were built using neural-network-based Adaptive Neuro-Fuzzy Inference Systems for hydrogen flow rate, electrolyzer system efficiency and stack efficiency, respectively. A comprehensive experimental database forms the foundation for the predictive models. It is argued that, due to the high costs associated with hydrogen measuring equipment, these reliable predictive models can be implemented as virtual sensors. These models can also be used on-line for monitoring and safety of hydrogen equipment. The quantitative accuracy of the predictive models is appraised using statistical techniques. These mathematical models are found to be reliable predictive tools with an excellent accuracy of ±3% compared with experimental values. The predictive nature of these models did not show any significant bias towards either over-prediction or under-prediction. These predictive models, built on a sound mathematical and quantitative basis, can be seen as a step towards establishing hydrogen performance prediction models as generic virtual sensors for wider safety and monitoring applications. (author)

  2. The construction of geological model using an iterative approach (Step 1 and Step 2)

    International Nuclear Information System (INIS)

    Matsuoka, Toshiyuki; Kumazaki, Naoki; Saegusa, Hiromitsu; Sasaki, Keiichi; Endo, Yoshinobu; Amano, Kenji

    2005-03-01

    One of the main goals of the Mizunami Underground Research Laboratory (MIU) Project is to establish appropriate methodologies for reliably investigating and assessing the deep subsurface. This report documents the results of the Step 1 and Step 2 geological modeling using the iterative investigation approach at the site scale (several 100 m to several km in area). For the Step 1 model, existing information (e.g. literature) and results from geological mapping and a reflection seismic survey were used. For the Step 2 model, additional information obtained from the geological investigation using an existing borehole and the shallow borehole investigation was incorporated. As a result of this study, the geological elements that should be represented in the model were defined, and several major faults with NNW, EW and NE trends were identified (or inferred) in the vicinity of the MIU site. (author)

  3. Predictive modeling of neuroanatomic structures for brain atrophy detection

    Science.gov (United States)

    Hu, Xintao; Guo, Lei; Nie, Jingxin; Li, Kaiming; Liu, Tianming

    2010-03-01

    In this paper, we present an approach of predictive modeling of neuroanatomic structures for the detection of brain atrophy based on cross-sectional MRI images. The underlying premise of applying predictive modeling for atrophy detection is that brain atrophy is defined as significant deviation of part of the anatomy from what the remaining normal anatomy predicts for that part. The steps of predictive modeling are as follows. The central cortical surface under consideration is reconstructed from the brain tissue map, and regions of interest (ROIs) on it are predicted from other, reliable anatomies. The pair-wise distance between a predicted vertex and the true one within an abnormal region is expected to be larger than that for a vertex in a normal brain region. The change of the white matter/gray matter ratio within a spherical region is used to identify the direction of vertex displacement. In this way, the severity of brain atrophy can be defined quantitatively by the displacements of those vertices. The proposed predictive modeling method has been evaluated by using both simulated atrophies and MRI images of Alzheimer's disease.

  4. The 12 Steps of Addiction Recovery Programs as an Influence on Leadership Development: A Personal Narrative

    Science.gov (United States)

    Friedman, Mitchell

    2016-01-01

    My participation in a 12-step addiction program based on the principles and traditions of Alcoholics Anonymous (AA) has been critical for my leadership development. As I worked to refrain from addictive behaviors and practiced 12-step principles, I experienced a shift from individualistic, self-centered leadership towards a servant leader…

  5. Development process and data management of TurnSTEP, a STEP-compliant CNC system for turning

    NARCIS (Netherlands)

    Choi, I.; Suh, S.-H; Kim, K.; Song, M.S.; Jang, M.; Lee, B.-E.

    2006-01-01

    TurnSTEP is one of the earliest STEP-compliant CNC systems for turning. Based on the STEP-NC data model formalized as ISO 14649-12 and 121, it is designed to support intelligent and autonomous control of NC machines for e-manufacturing. The present paper introduces the development process and data

  6. Three-step approach for prediction of limit cycle pressure oscillations in combustion chambers of gas turbines

    Science.gov (United States)

    Iurashev, Dmytro; Campa, Giovanni; Anisimov, Vyacheslav V.; Cosatto, Ezio

    2017-11-01

    Currently, gas turbine manufacturers frequently face the problem of strong acoustic combustion-driven oscillations inside combustion chambers. These combustion instabilities can cause extensive wear and sometimes even catastrophic damage to combustion hardware. Preventing them requires reliable and fast predictive tools. This work presents a three-step method to find the stability margins within which gas turbines can be operated without going into self-excited pressure oscillations. As a first step, a set of unsteady Reynolds-averaged Navier-Stokes simulations with the Flame Speed Closure (FSC) model implemented in the OpenFOAM® environment is performed to obtain the flame describing function of the combustor set-up. The standard FSC model is extended in this work to take into account the combined effect of strain and heat losses on the flame. As a second step, a linear three-time-lag-distributed model for a perfectly premixed swirl-stabilized flame is extended to the nonlinear regime. The factors causing changes in the model parameters when high-amplitude velocity perturbations are applied are analysed. As a third step, time-domain simulations employing a low-order network model implemented in Simulink® are performed. In this work, the proposed method is applied to a laboratory test rig. The method permits not only the frequencies of unstable acoustic oscillations to be computed, but their amplitudes as well. Knowing the amplitudes of unstable pressure oscillations, it is possible to determine how harmful these oscillations are to the combustor equipment. The proposed method also has a low cost, because it does not require any license for computational fluid dynamics software.
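The three-time-lag-distributed flame model extends the classic single-lag n-tau description; a sketch of that simpler n-tau response is shown below, with illustrative parameter values that are not taken from the test rig.

```python
# Classic n-tau flame response: the normalized heat-release fluctuation follows
# the normalized velocity fluctuation with gain n and time delay tau.
# All parameter values are illustrative.
import math

def heat_release_response(t, n, tau, u_prime):
    """q'(t)/q_mean = n * u'(t - tau)/u_mean."""
    return n * u_prime(t - tau)

n, tau, freq = 1.2, 0.003, 200.0  # gain [-], delay [s], forcing frequency [Hz]
u_prime = lambda t: 0.1 * math.sin(2 * math.pi * freq * t)  # normalized u'

q = heat_release_response(0.01, n, tau, u_prime)
print(round(q, 4))
```

Distributing the delay over several lags, as in the paper's model, smears this response in time and reduces the effective gain at high frequencies.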

  7. Assessment of PDF Micromixing Models Using DNS Data for a Two-Step Reaction

    Science.gov (United States)

    Tsai, Kuochen; Chakrabarti, Mitali; Fox, Rodney O.; Hill, James C.

    1996-11-01

    Although the probability density function (PDF) method is known to treat the chemical reaction terms exactly, its application to turbulent reacting flows has been hampered by the difficulty of modeling the molecular mixing terms satisfactorily. In this study, two PDF molecular mixing models, the linear-mean-square-estimation (LMSE or IEM) model and the generalized interaction-by-exchange-with-the-mean (GIEM) model, are compared with DNS data in decaying turbulence with a two-step parallel-consecutive reaction and two segregated initial conditions: "slabs" and "blobs". Since the molecular mixing model is expected to have a strong effect on the mean values of chemical species under such initial conditions, the model evaluation is intended to answer the following questions: (1) Can the PDF models predict the mean values of chemical species correctly with completely segregated initial conditions? (2) Is a single molecular mixing timescale sufficient for the PDF models to predict the mean values with different initial conditions? (3) Will the chemical reactions change the molecular mixing timescales of the reacting species enough to affect the accuracy of the models' predictions for the mean values of chemical species?

  8. Improving Genetic Evaluation of Litter Size Using a Single-step Model

    DEFF Research Database (Denmark)

    Guo, Xiangyu; Christensen, Ole Fredslund; Ostersen, Tage

    A recently developed single-step method allows genetic evaluation based on information from phenotypes, pedigree and markers simultaneously. This paper compared reliabilities of predicted breeding values obtained from the single-step method and the traditional pedigree-based method for two litter size traits, total number of piglets born (TNB) and litter size at five days after birth (Ls 5), in Danish Landrace and Yorkshire pigs. The results showed that the single-step method combining phenotypic and genotypic information provided more accurate predictions than the pedigree-based method, not only...
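The single-step combination of pedigree and marker information is conventionally expressed through the inverse of the H matrix, where the pedigree inverse A⁻¹ is augmented by G⁻¹ − A22⁻¹ for the genotyped animals; a toy numeric sketch (values invented, not from the Danish pig data):

```python
# Toy single-step H-inverse: pedigree relationship matrix A for two animals,
# of which only animal 2 is genotyped (genomic relationship matrix G).
# All numbers are invented for illustration.
import numpy as np

A = np.array([[1.0, 0.5],
              [0.5, 1.0]])   # pedigree relationships, animals 1-2
G = np.array([[1.05]])       # genomic relationship of the genotyped subset
A22 = A[1:, 1:]              # pedigree block for the genotyped subset

H_inv = np.linalg.inv(A)
H_inv[1:, 1:] += np.linalg.inv(G) - np.linalg.inv(A22)
print(np.round(H_inv, 3))
```

Only the block for the genotyped animal changes, which is how marker information propagates to ungenotyped relatives through the pedigree part of H.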

  9. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  10. The Throw-and-Catch Model of Human Gait: Evidence from Coupling of Pre-Step Postural Activity and Step Location

    Science.gov (United States)

    Bancroft, Matthew J.; Day, Brian L.

    2016-01-01

    Postural activity normally precedes the lift of a foot from the ground when taking a step, but its function is unclear. The throw-and-catch hypothesis of human gait proposes that the pre-step activity is organized to generate momentum for the body to fall ballistically along a specific trajectory during the step. The trajectory is appropriate for the stepping foot to land at its intended location while at the same time being optimally placed to catch the body and regain balance. The hypothesis therefore predicts a strong coupling between the pre-step activity and step location. Here we examine this coupling when stepping to visually-presented targets at different locations. Ten healthy, young subjects were instructed to step as accurately as possible onto targets placed in five locations that required either different step directions or different step lengths. In 75% of trials, the target location remained constant throughout the step. In the remaining 25% of trials, the intended step location was changed by making the target jump to a new location 96 ms ± 43 ms after initiation of the pre-step activity, long before foot lift. As predicted by the throw-and-catch hypothesis, when the target location remained constant, the pre-step activity led to body momentum at foot lift that was coupled to the intended step location. When the target location jumped, the pre-step activity was adjusted (median latency 223 ms) and prolonged (on average by 69 ms), which altered the body’s momentum at foot lift according to where the target had moved. We conclude that whenever possible the coupling between the pre-step activity and the step location is maintained. This provides further support for the throw-and-catch hypothesis of human gait. PMID:28066208

  12. Dynamic optimization and robust explicit model predictive control of hydrogen storage tank

    KAUST Repository

    Panos, C.; Kouramas, K.I.; Georgiadis, M.C.; Pistikopoulos, E.N.

    2010-09-01

    We present a general framework for the optimal design and control of a metal-hydride bed under hydrogen desorption operation. The framework features: (i) a detailed two-dimensional dynamic process model, (ii) a design and operational dynamic optimization step, and (iii) an explicit/multi-parametric model predictive controller design step. For the controller design, a reduced-order approximate model is obtained, based on which nominal and robust multi-parametric controllers are designed. © 2010 Elsevier Ltd.

  14. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  15. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves to report model strength, interpreting odds ratios as effect measures, and evaluating the performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data are presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold: first, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and second, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
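
The modelling steps described above can be sketched in a few lines. The following is an illustrative example only, not the paper's actual model: the two financial-ratio predictors, their coefficients and the data are invented, the logistic regression is fitted by plain Newton-Raphson, and the ROC AUC is computed via the rank-sum identity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# hypothetical predictors: a liquidity ratio and a leverage ratio
X = rng.normal(size=(n, 2))
# assumed true log-odds: growth is more likely with high liquidity, low leverage
logits = 0.8 * X[:, 0] - 0.5 * X[:, 1]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

# fit logistic regression by Newton-Raphson (no regularisation)
Xd = np.column_stack([np.ones(n), X])      # add intercept column
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-Xd @ beta))
    W = p * (1 - p)
    grad = Xd.T @ (y - p)
    H = (Xd * W[:, None]).T @ Xd           # weighted Hessian
    beta += np.linalg.solve(H, grad)

# odds ratio = exp(coefficient): multiplicative change in the odds of growth
# per one-unit change in each financial ratio
odds_ratios = np.exp(beta[1:])

# ROC AUC via the rank-sum (Mann-Whitney) identity
scores = Xd @ beta
order = scores.argsort()
ranks = np.empty(n)
ranks[order] = np.arange(1, n + 1)
n1 = y.sum()
n0 = n - n1
auc = (ranks[y].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)
```

An odds ratio above 1 marks a ratio associated with higher odds of growth; the AUC summarizes how well the fitted scores separate growing from non-growing companies.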

  16. Multi-time-step ahead daily and hourly intermittent reservoir inflow prediction by artificial intelligent techniques using lumped and distributed data

    Science.gov (United States)

    Jothiprakash, V.; Magar, R. B.

    2012-07-01

    In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect and combined models are developed with lumped and distributed input data. Model performance was then evaluated using various performance criteria. From the results, it is found that the LGP models are superior to the ANN and ANFIS models, especially in predicting peak inflows at both daily and hourly time-steps. A detailed comparison of overall performance indicated that the combined input model (combination of rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better, owing to reduced noise in the data, the training approach, and appropriate selection of network architecture, inputs and training-testing ratios of the data set. The slightly poorer performance with distributed data is attributed to its larger variations and smaller number of observed values.
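
A core preprocessing step behind such "combined input" models is turning rainfall and inflow records into lagged input-target pairs for multi-time-step-ahead prediction. A minimal sketch (the lag depth, horizon and data below are invented for illustration):

```python
def make_supervised(rain, inflow, n_lags=3, horizon=2):
    """Build (inputs, target) pairs for `horizon`-step-ahead inflow prediction
    from lagged rainfall and inflow (a 'combined input' layout)."""
    X, y = [], []
    for t in range(n_lags - 1, len(inflow) - horizon):
        # last n_lags rainfall values followed by last n_lags inflow values
        X.append(rain[t - n_lags + 1 : t + 1] + inflow[t - n_lags + 1 : t + 1])
        y.append(inflow[t + horizon])
    return X, y

rain   = [0, 5, 12, 3, 0, 8, 20, 6, 1, 0]
inflow = [1, 1, 4, 6, 3, 2, 9, 11, 5, 2]
X, y = make_supervised(rain, inflow, n_lags=3, horizon=2)
```

Any of the compared techniques (ANN, ANFIS, LGP) can then be trained on these pairs; a cause-effect model would use only the rainfall lags, a time-series model only the inflow lags.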

  17. Stepwise hydrogeological modeling and groundwater flow analysis on site scale (Step 0 and Step 1)

    International Nuclear Information System (INIS)

    Ohyama, Takuya; Saegusa, Hiromitsu; Onoe, Hironori

    2005-05-01

    One of the main goals of the Mizunami Underground Research Laboratory Project is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment. To achieve this goal, a variety of investigations, analyses, and evaluations have been conducted using an iterative approach. In this study, hydrogeological modeling and groundwater flow analyses have been carried out using the data from surface-based investigations at Step 0 and Step 1, in order to synthesize the investigation results, to evaluate the uncertainty of the hydrogeological model, and to specify items for further investigation. The results of this study are summarized as follows: 1) As the investigation progressed from Step 0 to Step 1, the understanding of groundwater flow was enhanced and the hydrogeological model could be revised, 2) The importance of faults as major groundwater flow pathways was demonstrated, 3) Geological and hydrogeological characteristics of faults oriented NNW and NE were shown to be especially significant. The main item specified for further investigation is summarized as follows: the geological and hydrogeological characteristics of NNW- and NE-trending faults are important. (author)

  18. AKTIS Nr. 12: To better understand radioactive aerosol deposit in order to better measure it; Radio-induced lesions: a new step towards healing; Modelling the collapse of an immersed grain column; To better model soot deposit; Towards the prediction of the leakage rate of containment enclosures

    International Nuclear Information System (INIS)

    Benderitter, Marc; Perales, Frederic; Monerie, Yann; Maro, Denis; Boyer, Patrick; Lemaitre, Pascal; Porcheron, Emmanuel; Depuydt, Guillaume; Masson, Olivier; Gensdarmes, Francois

    2013-04-01

    This publication presents the main results of research undertaken by the IRSN in the fields of radiation protection, nuclear safety and security. The topics addressed herein are: radio-induced lesions as a new step towards healing (the injection of mesenchymal stem cells for the treatment of induced severe colorectal lesions); the modelling of the collapse of an immersed grain column (to study nuclear fuel behaviour in an accidental situation through modelling of fluid-grain interactions); a better understanding of radioactive aerosol deposits (to study particle or aerosol deposits after radioactive releases into the atmosphere in case of accident); better modelling of soot deposits (in case of fire); and the prediction of leakage rates of containment enclosures (ageing phenomena of installations, systems and equipment, with the case of cracks due to material ageing resulting in confinement losses which could thus be quantified)

  19. Daily step count predicts acute exacerbations in a US cohort with COPD.

    Science.gov (United States)

    Moy, Marilyn L; Teylan, Merilee; Weston, Nicole A; Gagnon, David R; Garshick, Eric

    2013-01-01

    COPD is characterized by variability in exercise capacity and physical activity (PA), and acute exacerbations (AEs). Little is known about the relationship between daily step count, a direct measure of PA, and the risk of AEs, including hospitalizations. In an observational cohort study of 169 persons with COPD, we directly assessed PA with the StepWatch Activity Monitor, an ankle-worn accelerometer that measures daily step count. We also assessed exercise capacity with the 6-minute walk test (6MWT) and patient-reported PA with the St. George's Respiratory Questionnaire Activity Score (SGRQ-AS). AEs and COPD-related hospitalizations were assessed and validated prospectively over a median of 16 months. Mean daily step count was 5804±3141 steps. Over 209 person-years of observation, there were 263 AEs (incidence rate 1.3±1.6 per person-year) and 116 COPD-related hospitalizations (incidence rate 0.56±1.09 per person-year). Adjusting for FEV1 % predicted and prednisone use for AE in previous year, for each 1000 fewer steps per day walked at baseline, there was an increased rate of AEs (rate ratio 1.07; 95%CI = 1.003-1.15) and COPD-related hospitalizations (rate ratio 1.24; 95%CI = 1.08-1.42). There was a significant linear trend of decreasing daily step count by quartiles and increasing rate ratios for AEs (P = 0.008) and COPD-related hospitalizations (P = 0.003). Each 30-meter decrease in 6MWT distance was associated with an increased rate ratio of 1.07 (95%CI = 1.01-1.14) for AEs and 1.18 (95%CI = 1.07-1.30) for COPD-related hospitalizations. Worsening of SGRQ-AS by 4 points was associated with an increased rate ratio of 1.05 (95%CI = 1.01-1.09) for AEs and 1.10 (95%CI = 1.02-1.17) for COPD-related hospitalizations. Lower daily step count, lower 6MWT distance, and worse SGRQ-AS predict future AEs and COPD-related hospitalizations, independent of pulmonary function and previous AE history. These results support the importance of
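
Because a rate ratio acts multiplicatively, the reported estimates compound across step-count differences. A small worked example using the abstract's AE figures (treating the cohort incidence rate as the baseline is a simplifying assumption for illustration):

```python
rr_per_1000 = 1.07    # rate ratio for AEs per 1,000 fewer daily steps (from the abstract)
baseline_rate = 1.3   # AEs per person-year at the cohort average (assumed baseline)

def expected_rate(steps_below_mean):
    """AE rate for someone walking `steps_below_mean` fewer steps per day,
    under the model's multiplicative (log-linear) assumption."""
    return baseline_rate * rr_per_1000 ** (steps_below_mean / 1000)

# 3,000 fewer steps/day multiplies the AE rate by 1.07 ** 3, about a 23% increase
rate_3000 = expected_rate(3000)
```

The same compounding applies to the hospitalization estimate (rate ratio 1.24 per 1,000 fewer steps), where the effect grows considerably faster.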

  20. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient threshold of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both MJO amplitude and phase errors, the latter becoming more important with forecast lead time. Consistent with previous studies, MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with a smaller bias in horizontal moisture gradient and longwave cloud-radiation feedbacks show higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.

  1. A pilot randomized clinical trial testing integrated 12-Step facilitation (iTSF) treatment for adolescent substance use disorder.

    Science.gov (United States)

    Kelly, John F; Kaminer, Yifrah; Kahler, Christopher W; Hoeppner, Bettina; Yeterian, Julie; Cristello, Julie V; Timko, Christine

    2017-12-01

    The integration of 12-Step philosophy and practices is common in adolescent substance use disorder (SUD) treatment programs, particularly in North America. However, although numerous experimental studies have tested 12-Step facilitation (TSF) treatments among adults, no studies have tested TSF-specific treatments for adolescents. We tested the efficacy of a novel integrated TSF in an explanatory, parallel-group, randomized clinical trial comparing 10 sessions of either motivational enhancement therapy/cognitive-behavioral therapy (MET/CBT; n = 30) or a novel integrated TSF (iTSF; n = 29), with follow-up assessments at 3, 6 and 9 months following treatment entry, at an out-patient addiction clinic in the United States. Participants were adolescents [n = 59; mean age = 16.8 (1.7) years; range = 14-21; 27% female; 78% white]. The iTSF integrated 12-Step with motivational and cognitive-behavioral strategies, and was compared with state-of-the-art MET/CBT for SUD. Primary outcome: percentage days abstinent (PDA); secondary outcomes: 12-Step attendance, substance-related consequences, longest period of abstinence, proportion abstinent/mostly abstinent, and psychiatric symptoms. PDA was not significantly different across treatments [b = 0.08, 95% confidence interval (CI) = -0.08 to 0.24, P = 0.33; Bayes' factor = 0.28]. During treatment, iTSF patients had substantially greater 12-Step attendance, but this advantage declined thereafter (b = -0.87; 95% CI = -1.67 to 0.07, P = 0.03). iTSF did show a significant advantage at all follow-up points for substance-related consequences (b = -0.42; 95% CI = -0.80 to -0.04). 12-Step meeting attendance was associated significantly with longer abstinence during (r = 0.39, P = 0.008), and early following (r = 0.30, P = 0.049), treatment. Compared with motivational enhancement therapy/cognitive-behavioral therapy (MET/CBT), in terms of abstinence, a novel integrated 12-Step facilitation treatment for adolescent

  2. Risk predictive modelling for diabetes and cardiovascular disease.

    Science.gov (United States)

    Kengne, Andre Pascal; Masconi, Katya; Mbanya, Vivian Nchanchou; Lekoubou, Alain; Echouffo-Tcheugui, Justin Basile; Matsha, Tandi E

    2014-02-01

    Absolute risk models or clinical prediction models have been incorporated in guidelines, and are increasingly advocated as tools to assist risk stratification and guide prevention and treatment decisions relating to common health conditions such as cardiovascular disease (CVD) and diabetes mellitus. We have reviewed the historical development and principles of prediction research, including their statistical underpinning, as well as implications for routine practice, with a focus on predictive modelling for CVD and diabetes. Predictive modelling for CVD risk, which has developed over the last five decades, has been largely influenced by the Framingham Heart Study investigators, while it is only ∼20 years ago that similar efforts were started in the field of diabetes. Identification of predictive factors is an important preliminary step which provides the knowledge base on potential predictors to be tested for inclusion during the statistical derivation of the final model. The derived models must then be tested both on the development sample (internal validation) and on other populations in different settings (external validation). Updating procedures (e.g. recalibration) should be used to improve the performance of models that fail the tests of external validation. Ultimately, the effect of introducing validated models in routine practice on the process and outcomes of care, as well as its cost-effectiveness, should be tested in impact studies before wide dissemination of models beyond the research context. Several prediction models have been developed for CVD or diabetes, but very few have been externally validated or tested in impact studies, and their comparative performance has yet to be fully assessed. A shift of focus from developing new CVD or diabetes prediction models to validating the existing ones will improve their adoption in routine practice.
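
The "updating procedures (e.g. recalibration)" mentioned above can be as simple as re-estimating the model intercept in the new population. A sketch of calibration-in-the-large (the predicted risks and observed event rate below are hypothetical):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

# hypothetical risks predicted by a model developed in another population
predicted = [0.05, 0.10, 0.20, 0.40, 0.15, 0.08]
observed_rate = 0.25   # event rate actually seen in the new (external) population

# calibration-in-the-large: shift every linear predictor by a single intercept
# update, moving the average predicted risk toward the observed rate while
# preserving the ranking of patients
mean_pred = sum(predicted) / len(predicted)
delta = logit(observed_rate) - logit(mean_pred)
recalibrated = [inv_logit(logit(p) + delta) for p in predicted]
```

Because only the intercept changes, discrimination (e.g. the AUC) is untouched; more elaborate updating would also rescale or re-estimate the coefficients.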

  3. The Accuracy and Bias of Single-Step Genomic Prediction for Populations Under Selection

    Directory of Open Access Journals (Sweden)

    Wan-Ling Hsu

    2017-08-01

    In single-step analyses, missing genotypes are explicitly or implicitly imputed, and this requires centering the observed genotypes using the means of the unselected founders. If genotypes are only available for selected individuals, centering on the unselected founder mean is not straightforward. Here, computer simulation is used to study an alternative analysis that does not require centering genotypes but fits the mean μg of unselected individuals as a fixed effect. Starting with observed diplotypes from 721 cattle, a five-generation population was simulated with sire selection to produce 40,000 individuals with phenotypes, of which the 1000 sires had genotypes. The next generation of 8000 genotyped individuals was used for validation. Evaluations were undertaken with (J) or without (N) μg when marker covariates were not centered, and with (JC) or without (C) μg when all observed and imputed marker covariates were centered. Centering did not influence the accuracy of genomic prediction, but fitting μg did. Accuracies were improved when the panel comprised only quantitative trait loci (QTL): models JC and J had accuracies of 99.4%, whereas models C and N had accuracies of 90.2%. When only markers were in the panel, the 4 models had accuracies of 80.4%. In panels that included QTL, fitting μg in the model improved accuracy, but it had little impact when the panel contained only markers. In populations undergoing selection, fitting μg in the model is recommended to avoid bias and reduction in prediction accuracy due to selection.

  4. Testing a stepped care model for binge-eating disorder: a two-step randomized controlled trial.

    Science.gov (United States)

    Tasca, Giorgio A; Koszycki, Diana; Brugnera, Agostino; Chyurlia, Livia; Hammond, Nicole; Francis, Kylie; Ritchie, Kerri; Ivanova, Iryna; Proulx, Genevieve; Wilson, Brian; Beaulac, Julie; Bissada, Hany; Beasley, Erin; Mcquaid, Nancy; Grenon, Renee; Fortin-Langelier, Benjamin; Compare, Angelo; Balfour, Louise

    2018-05-24

    A stepped care approach involves patients first receiving low-intensity treatment followed by higher-intensity treatment. This two-step randomized controlled trial investigated the efficacy of a sequential stepped care approach for the psychological treatment of binge-eating disorder (BED). In the first step, all participants with BED (n = 135) received unguided self-help (USH) based on a cognitive-behavioral therapy model. In the second step, participants who remained in the trial were randomized either to 16 weeks of group psychodynamic-interpersonal psychotherapy (GPIP) (n = 39) or to a no-treatment control condition (n = 46). Outcomes were assessed for USH in step 1, and then for step 2 up to 6 months post-treatment, using multilevel regression slope discontinuity models. In the first step, USH resulted in large and statistically significant reductions in the frequency of binge eating. Statistically significant moderate to large reductions in eating disorder cognitions were also noted. In the second step, there was no difference in change in frequency of binge eating between GPIP and the control condition. Compared with controls, GPIP resulted in significant and large improvements in attachment avoidance and interpersonal problems. The findings indicated that a second step of a stepped care approach did not significantly reduce binge-eating symptoms beyond the effects of USH alone. The study provided some evidence that the second step may reduce factors known to maintain binge eating in the long run, such as attachment avoidance and interpersonal problems.

  5. ADDING A NEW STEP WITH SPATIAL AUTOCORRELATION TO IMPROVE THE FOUR-STEP TRAVEL DEMAND MODEL WITH FEEDBACK FOR A DEVELOPING CITY

    Directory of Open Access Journals (Sweden)

    Xuesong FENG, Ph.D Candidate

    2009-01-01

    It is expected that improvement of transport networks could give rise to changes in the spatial distributions of population-related factors and car ownership, which are expected to further influence travel demand. To properly reflect such an interdependence mechanism, an aggregate multinomial logit (A-MNL) model was first applied to represent the spatial distributions of these exogenous variables of the travel demand model by reflecting the influence of transport networks. Next, spatial autocorrelation analysis is introduced into the log-transformed A-MNL model (called the SPA-MNL model). Thereafter, the SPA-MNL model is integrated into the four-step travel demand model with feedback (called the 4-STEP model). As a result, an integrated travel demand model is newly developed and named the SPA-STEP model. Using person trip data collected in Beijing, the performance of the SPA-STEP model is empirically compared with the 4-STEP model. It was proven that the SPA-STEP model is superior to the 4-STEP model in accuracy; most of the estimated parameters showed statistical differences in values. Moreover, though the results of the simulations of the same set of assumed scenarios by the 4-STEP model and the SPA-STEP model consistently suggested the same sustainable path for the future development of Beijing, it was found that the environmental sustainability and the traffic congestion for these scenarios were generally overestimated by the 4-STEP model compared with the corresponding analyses by the SPA-STEP model. Such differences were clearly generated by the introduction of the new modeling step with spatial autocorrelation.
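
The spatial autocorrelation that motivates the extra modelling step is typically measured with Moran's I: positive values mean similar values cluster in neighbouring zones, negative values mean neighbours tend to differ. A self-contained sketch with an invented four-zone contiguity matrix:

```python
import numpy as np

def morans_I(x, W):
    """Moran's I statistic for values x under spatial weight matrix W."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                      # deviations from the mean
    n = len(x)
    num = n * (W * np.outer(z, z)).sum()  # cross-products of neighbouring deviations
    den = W.sum() * (z ** 2).sum()
    return num / den

# four zones on a line: 1-2, 2-3, 3-4 are neighbours (binary contiguity weights)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

clustered   = [1.0, 1.1, 5.0, 5.2]   # similar values sit next to each other
alternating = [1.0, 5.0, 1.0, 5.0]   # neighbours are dissimilar
```

A significantly nonzero Moran's I on the model residuals signals that a non-spatial A-MNL specification is missing structure, which is the case the SPA-MNL step addresses.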

  6. Electrochemical model of polyaniline-based memristor with mass transfer step

    International Nuclear Information System (INIS)

    Demin, V.A.; Erokhin, V.V.; Kashkarov, P.K.; Kovalchuk, M.V.

    2015-01-01

    The electrochemical organic memristor with a polyaniline active layer is a stand-alone device designed and realized to reproduce some synapse properties in innovative electronic circuits, such as new field-programmable gate arrays or neuromorphic networks capable of learning. In this work, a new theoretical model of the polyaniline memristor is presented. The developed model of organic memristor functioning is based on detailed consideration of the possible electrochemical processes occurring in the active zone of the device, including the mass-transfer step of ionic reactants. Results of the calculation demonstrate not only a qualitative explanation of the characteristics observed in experiment, but also quantitative similarity of the resulting current values. This model can establish a basis for the design and prediction of properties of more complicated circuits and systems (including stochastic ones) based on organic memristive devices

  7. Prediction Model for Relativistic Electrons at Geostationary Orbit

    Science.gov (United States)

    Khazanov, George V.; Lyatsky, Wladislaw

    2008-01-01

    We developed a new prediction model for forecasting relativistic (greater than 2 MeV) electrons, which provides a very high correlation between predicted and actually measured electron fluxes at geostationary orbit. The model implies multi-step particle acceleration and is based on numerically integrating two linked continuity equations for primarily accelerated particles and relativistic electrons. The model includes a source and losses, and uses solar wind data as its only input parameters. As the source, we used a coupling function that is a best-fit combination of the solar wind/interplanetary magnetic field parameters responsible for the generation of geomagnetic activity. The loss function was derived from experimental data. We tested the model for the four-year period 2004-2007. The correlation coefficient between predicted and actual values of the electron fluxes for the whole four-year period, as well as for each of these years, is stable and remarkably high (about 0.9). The high and stable correlation between the computed and actual electron fluxes shows that reliable forecasting of these electrons at geostationary orbit is possible.
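
The scheme of numerically integrating two linked continuity equations can be sketched with a forward-Euler loop. Everything below (the loss times, acceleration rate, source and units) is invented for illustration and is not the paper's calibration:

```python
# hypothetical two-population model in the spirit of the abstract:
# seed electrons N1 are fed by a solar-wind source S(t); a fraction is
# accelerated into the relativistic population N2; both populations decay
# with characteristic loss times tau1 and tau2.
def integrate(S, dt=0.1, tau1=5.0, tau2=20.0, accel=0.2):
    N1, N2 = 0.0, 0.0
    relativistic = []
    for s in S:
        dN1 = s - N1 / tau1 - accel * N1   # source, losses, acceleration out
        dN2 = accel * N1 - N2 / tau2       # acceleration in, losses
        N1 += dt * dN1
        N2 += dt * dN2
        relativistic.append(N2)
    return relativistic

# constant driving: the relativistic flux rises toward a steady state of
# accel * tau2 * S / (1/tau1 + accel) = 10 for these parameters
flux = integrate([1.0] * 2000)
```

In the actual model the source term would be driven by the time-varying solar-wind coupling function rather than a constant.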

  8. Hydrological model parameter dimensionality is a weak measure of prediction uncertainty

    Science.gov (United States)

    Pande, S.; Arkesteijn, L.; Savenije, H.; Bastidas, L. A.

    2015-04-01

    This paper shows that the instability of a hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner, by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins spanning the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.

  9. Toward integration of genomic selection with crop modelling: the development of an integrated approach to predicting rice heading dates.

    Science.gov (United States)

    Onogi, Akio; Watanabe, Maya; Mochizuki, Toshihiro; Hayashi, Takeshi; Nakagawa, Hiroshi; Hasegawa, Toshihiro; Iwata, Hiroyoshi

    2016-04-01

    It is suggested that accuracy in predicting plant phenotypes can be improved by integrating genomic prediction with crop modelling in a single hierarchical model. Accurate prediction of phenotypes is important for plant breeding and management. Although genomic prediction/selection aims to predict phenotypes on the basis of whole-genome marker information, it is often difficult to predict phenotypes of complex traits in diverse environments, because plant phenotypes are often influenced by genotype-environment interaction. A possible remedy is to integrate genomic prediction with crop/ecophysiological modelling, which enables us to predict plant phenotypes using environmental and management information. To this end, in the present study, we developed a novel method for integrating genomic prediction with phenological modelling of Asian rice (Oryza sativa L.), allowing the heading date of untested genotypes in untested environments to be predicted. The method simultaneously infers the phenological model parameters and whole-genome marker effects on the parameters in a Bayesian framework. By cultivating backcross inbred lines of Koshihikari × Kasalath in nine environments, we evaluated the potential of the proposed method in comparison with conventional genomic prediction, phenological modelling, and two-step methods that applied genomic prediction to phenological model parameters inferred from Nelder-Mead or Markov chain Monte Carlo algorithms. In predicting heading dates of untested lines in untested environments, the proposed and two-step methods tended to provide more accurate predictions than the conventional genomic prediction methods, particularly in environments where phenotypes from environments similar to the target environment were unavailable for training genomic prediction. The proposed method showed greater accuracy in prediction than the two-step methods in all cross-validation schemes tested, suggesting the potential of the integrated approach in

  10. [Application of ARIMA model on prediction of malaria incidence].

    Science.gov (United States)

    Jing, Xia; Hua-Xun, Zhang; Wen, Lin; Su-Jian, Pei; Ling-Cong, Sun; Xiao-Rong, Dong; Mu-Min, Cao; Dong-Ni, Wu; Shunxiang, Cai

    2016-01-29

    To predict the incidence of local malaria in Hubei Province by applying an autoregressive integrated moving average (ARIMA) model. SPSS 13.0 software was applied to construct the ARIMA model based on the monthly local malaria incidence in Hubei Province from 2004 to 2009. The local malaria incidence data of 2010 were used for model validation and evaluation. The ARIMA (1, 1, 1) (1, 1, 0)12 model was identified as the best fit, with an AIC of 76.085 and an SBC of 84.395. All the actual incidence data fell within the 95% CI of the model's predicted values. The prediction effect of the model was acceptable. The ARIMA model could effectively fit and predict the incidence of local malaria in Hubei Province.
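
    The seasonal structure in a model like ARIMA (1, 1, 1) (1, 1, 0)12 comes from stacking a regular and a seasonal difference before fitting the ARMA terms. As a rough illustration (not the SPSS fit used in the study), the sketch below applies both differences to a synthetic monthly series and fits only an AR(1) term, omitting the MA and seasonal-AR parts for brevity; in practice a library fit such as statsmodels' SARIMAX would be used instead.

```python
import numpy as np

def seasonal_arima_sketch(y, s=12):
    """Differencing used by ARIMA(p,1,q)(P,1,Q)_s: one regular and one
    seasonal difference, then an AR(1) fit on the doubly differenced
    series (the MA and seasonal-AR terms are omitted in this sketch)."""
    d = np.diff(y)                      # regular difference, d = 1
    w = d[s:] - d[:-s]                  # seasonal difference, D = 1
    # AR(1) coefficient by least squares on the (near-)stationary series
    phi = np.dot(w[1:], w[:-1]) / np.dot(w[:-1], w[:-1])
    w_next = phi * w[-1]                # one-step forecast of w
    # invert both differences to forecast the original series
    y_next = w_next + y[-1] + (y[-s] - y[-s - 1])
    return phi, y_next

rng = np.random.default_rng(0)
t = np.arange(120)                      # ten years of monthly counts
y = 0.05 * t + np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(120)
phi, y_next = seasonal_arima_sketch(y)
```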

  11. Generic global regression models for growth prediction of Salmonella in ground pork and pork cuts

    DEFF Research Database (Denmark)

    Buschhardt, Tasja; Hansen, Tina Beck; Bahl, Martin Iain

    2017-01-01

    Introduction and Objectives Models for the prediction of bacterial growth in fresh pork are primarily developed using two-step regression (i.e. primary models followed by secondary models). These models are also generally based on experiments in liquids or ground meat and neglect surface growth...... It has been shown that one-step global regressions can result in more accurate models and that bacterial growth on intact surfaces can substantially differ from growth in liquid culture. Material and Methods We used a global-regression approach to develop predictive models for the growth of Salmonella...... One part of obtained log-transformed cell counts was used for model development and another for model validation. The Ratkowsky square root model and the relative lag time (RLT) model were integrated into the logistic model with delay. Fitted parameter estimates were compared to investigate the effect...

  12. User-Dependent CFD Predictions of a Backward-Facing Step Flow

    DEFF Research Database (Denmark)

    Peng, Lei; Nielsen, Peter Vilhelm; Wang, Xiaoxue

    2015-01-01

    The backward-facing step flow with an expansion ratio of 5 has been modelled by 19 teams without benchmark solution or experimental data. Different CFD codes, turbulence models, boundary conditions, numerical schemes and convergent criteria are adopted based on the participants’ own experience...

  13. Daily step count predicts acute exacerbations in a US cohort with COPD.

    Directory of Open Access Journals (Sweden)

    Marilyn L Moy

    Full Text Available BACKGROUND: COPD is characterized by variability in exercise capacity and physical activity (PA), and acute exacerbations (AEs). Little is known about the relationship between daily step count, a direct measure of PA, and the risk of AEs, including hospitalizations. METHODS: In an observational cohort study of 169 persons with COPD, we directly assessed PA with the StepWatch Activity Monitor, an ankle-worn accelerometer that measures daily step count. We also assessed exercise capacity with the 6-minute walk test (6MWT) and patient-reported PA with the St. George's Respiratory Questionnaire Activity Score (SGRQ-AS). AEs and COPD-related hospitalizations were assessed and validated prospectively over a median of 16 months. RESULTS: Mean daily step count was 5804±3141 steps. Over 209 person-years of observation, there were 263 AEs (incidence rate 1.3±1.6 per person-year) and 116 COPD-related hospitalizations (incidence rate 0.56±1.09 per person-year). Adjusting for FEV1 % predicted and prednisone use for AE in previous year, for each 1000 fewer steps per day walked at baseline, there was an increased rate of AEs (rate ratio 1.07; 95%CI = 1.003-1.15) and COPD-related hospitalizations (rate ratio 1.24; 95%CI = 1.08-1.42). There was a significant linear trend of decreasing daily step count by quartiles and increasing rate ratios for AEs (P = 0.008) and COPD-related hospitalizations (P = 0.003). Each 30-meter decrease in 6MWT distance was associated with an increased rate ratio of 1.07 (95%CI = 1.01-1.14) for AEs and 1.18 (95%CI = 1.07-1.30) for COPD-related hospitalizations. Worsening of SGRQ-AS by 4 points was associated with an increased rate ratio of 1.05 (95%CI = 1.01-1.09) for AEs and 1.10 (95%CI = 1.02-1.17) for COPD-related hospitalizations. CONCLUSIONS: Lower daily step count, lower 6MWT distance, and worse SGRQ-AS predict future AEs and COPD-related hospitalizations, independent of pulmonary function and previous AE
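
    Rate ratios from log-linear count models, like the 1.07 per 1000 fewer steps reported above, compound multiplicatively across units of exposure. A small illustrative calculation (not taken from the study's data):

```python
def compounded_rr(rr_per_unit, units):
    """Rate ratios from a log-linear (e.g. Poisson) model compound
    multiplicatively across units of exposure."""
    return rr_per_unit ** units

# e.g. a patient walking 3000 fewer steps/day than a comparator
rr_ae = compounded_rr(1.07, 3.0)     # AE rate ratio, about 1.23
rr_hosp = compounded_rr(1.24, 3.0)   # hospitalization rate ratio, about 1.91
```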

  14. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step to advance the field. First, such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  15. Modelling of subcritical free-surface flow over an inclined backward-facing step in a water channel

    Directory of Open Access Journals (Sweden)

    Šulc Jan

    2012-04-01

    Full Text Available The contribution deals with the experimental and numerical modelling of subcritical turbulent flow in an open channel with an inclined backward-facing step. The step with the inclination angle α = 20° was placed in the water channel of the cross-section 200×200 mm. Experiments were carried out by means of the PIV and LDA measuring techniques. Numerical simulations were executed by means of the commercial software ANSYS CFX 12.0. Numerical results obtained for two-equation models and the EARSM turbulence model, completed by transport equations for turbulent energy and specific dissipation rate, were compared with experimental data. The modelling was concentrated particularly on the development of the flow separation and on the corresponding changes of the free surface.

  16. Predicting 6- and 12-Month Risk of Mortality in Patients With Platinum-Resistant Advanced-Stage Ovarian Cancer: Prognostic Model to Guide Palliative Care Referrals.

    Science.gov (United States)

    Foote, Jonathan; Lopez-Acevedo, Micael; Samsa, Gregory; Lee, Paula S; Kamal, Arif H; Alvarez Secord, Angeles; Havrilesky, Laura J

    2018-02-01

    Predictive models are increasingly being used in clinical practice. The aim of the study was to develop a predictive model to identify patients with platinum-resistant ovarian cancer with a prognosis of less than 6 to 12 months who may benefit from immediate referral to hospice care. A retrospective chart review identified patients with platinum-resistant epithelial ovarian cancer who were treated at our institution between 2000 and 2011. A predictive model for survival was constructed based on the time from development of platinum resistance to death. Multivariate logistic regression modeling was used to identify significant survival predictors and to develop a predictive model. The following variables were included: time from diagnosis to platinum resistance, initial stage, debulking status, number of relapses, comorbidity score, albumin, hemoglobin, CA-125 levels, liver/lung metastasis, and the presence of a significant clinical event (SCE). An SCE was defined as a malignant bowel obstruction, pleural effusion, or ascites occurring on or before the diagnosis of platinum resistance. One hundred sixty-four patients met inclusion criteria. In the regression analysis, only an SCE and the presence of liver or lung metastasis were associated with poorer short-term survival (P < 0.001). Nine percent of patients with an SCE or liver or lung metastasis survived 6 months or greater and 0% survived 12 months or greater, compared with 85% and 67% of patients without an SCE or liver or lung metastasis, respectively. Patients with platinum-resistant ovarian cancer who have experienced an SCE or liver or lung metastasis have a high risk of death within 6 months and should be considered for immediate referral to hospice care.

  17. Multi-step ahead forecasts for electricity prices using NARX: A new approach, a critical analysis of one-step ahead forecasts

    International Nuclear Information System (INIS)

    Andalib, Arash; Atry, Farid

    2009-01-01

    The prediction of electricity prices is very important to participants of deregulated markets. Among many properties, a successful prediction tool should be able to capture long-term dependencies in the market's historical data. A nonlinear autoregressive model with exogenous inputs (NARX) has proven superior to other learning machines at capturing such dependencies. However, it has not previously been examined for electricity price forecasting. In this paper, we have employed a NARX network for forecasting electricity prices. Our prediction model is then compared with two currently used methods, namely multivariate adaptive regression splines (MARS) and the wavelet neural network. All the models are built on the reconstructed state space of the market's historical data, which either improves the results or decreases the complexity of the learning algorithms. We also criticize one-step ahead forecasts of electricity prices, which may suffer from a one-term delay, and explain why the mean square error criterion does not guarantee a functional prediction result in this case. To tackle the problem, we pursue multi-step ahead predictions. Results for the Ontario electricity market are presented.
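
    One common way to obtain multi-step ahead forecasts is to iterate a one-step model, feeding each prediction back as input. The sketch below (a generic illustration, not the paper's NARX implementation) also shows why the one-term-delay critique matters: a naive persistence predictor can look adequate one step ahead on slowly varying prices, yet produces only a flat multi-step forecast.

```python
import numpy as np

def multi_step_forecast(one_step, history, horizon):
    """Iterated (recursive) multi-step prediction: each one-step forecast
    is appended to the history and fed back into the model."""
    hist = list(history)
    out = []
    for _ in range(horizon):
        y_hat = one_step(hist)
        out.append(y_hat)
        hist.append(y_hat)
    return np.array(out)

# A persistence model (predict "same as last value") illustrates the
# delay problem: its one-step errors can look small, yet every
# multi-step forecast is just a flat line.
persistence = lambda h: h[-1]
flat = multi_step_forecast(persistence, [1.0, 2.0, 3.0], 4)  # array([3., 3., 3., 3.])
```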

  18. Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network

    International Nuclear Information System (INIS)

    Ma Qianli; Zheng Qilun; Peng Hong; Qin Jiangwei; Zhong Tanwei

    2008-01-01

    This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step prediction of chaotic time series. It estimates the proper parameters of phase space reconstruction and optimizes the structure of the recurrent neural network by a co-evolutionary strategy. The search space is separated into two subspaces and the individuals are trained in a parallel computational procedure. The method dynamically combines the embedding method with the capability of the recurrent neural network to incorporate past experience due to internal recurrence. The effectiveness of CERNN is evaluated using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series, and the real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step prediction of chaotic time series.

  19. On Feature Relevance in Image-Based Prediction Models: An Empirical Study

    DEFF Research Database (Denmark)

    Konukoglu, E.; Ganz, Melanie; Van Leemput, Koen

    2013-01-01

    Determining disease-related variations of the anatomy and function is an important step in better understanding diseases and developing early diagnostic systems. In particular, image-based multivariate prediction models and the “relevant features” they produce are attracting attention from the co...

  20. On the Role of Chemical Kinetics Modeling in the LES of Premixed Bluff Body and Backward-Facing Step Combustors

    KAUST Repository

    Chakroun, Nadim W.

    2017-01-05

    Recirculating flows in the wake of a bluff body, behind a sudden expansion or downstream of a swirler, are pivotal for anchoring a flame and expanding the stability range. Accurately predicting the size, structure, and length of these recirculation zones is an important capability for computational simulations. Large eddy simulation (LES) techniques with an appropriate combustion model and reaction mechanism afford a balance between computational complexity and predictive accuracy. In this study, propane/air mixtures were simulated in a bluff-body stabilized combustor based on the Volvo test case and also in a backward-facing step combustor. The main goal is to investigate the role of the chemical mechanism, and the accuracy of its estimate of the extinction strain rate, in the prediction of important flow features such as recirculation zones. Two 2-step mechanisms were employed: one which gave reasonable extinction strain rates, and a modified 2-step mechanism which grossly over-predicted the values. The modified mechanism under-predicted recirculation zone lengths compared to the original mechanism and had worse agreement with experiments in both geometries. While the recirculation zone lengths predicted by both reduced mechanisms in the step combustor scale linearly with the extinction strain rate, the scaling curves do not match experimental results, as none of the simplified mechanisms produce extinction strain rates that are consistent with those predicted by the comprehensive mechanisms. We conclude that it is very important that a chemical mechanism be able to correctly predict extinction strain rates if it is to be used in CFD simulations.

  1. Development and evaluation of multi-agent models predicting Twitter trends in multiple domains

    NARCIS (Netherlands)

    Attema, T.; Maanen, P.P. van; Meeuwissen, E.

    2015-01-01

    This paper concerns multi-agent models predicting Twitter trends. We use a step-wise approach to develop a novel agent-based model with the following properties: (1) it uses individual behavior parameters for a set of Twitter users and (2) it uses a retweet graph to model the underlying social

  2. A Simplified Micromechanical Modeling Approach to Predict the Tensile Flow Curve Behavior of Dual-Phase Steels

    Science.gov (United States)

    Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal

    2017-11-01

    Micromechanical modeling is used to predict a material's tensile flow curve behavior based on microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels. The modeling approach developed in this work attempts to overcome specific limitations of those two approaches. It combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the 'dislocation-based strain-hardening method' was employed to predict the tensile behavior of the individual phases of ferrite and martensite. In the second step, the individual flow curves were combined using the 'rule of mixtures' to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
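
    The two-step structure described above can be sketched as follows. This is an illustrative simplification: a Hollomon power law stands in for the dislocation-based strain-hardening law, and the K and n values are made-up placeholders, not parameters from the paper.

```python
import numpy as np

def hollomon(eps, K, n):
    """Phase-level flow stress, sigma = K * eps^n (a stand-in here for
    the dislocation-based strain-hardening law)."""
    return K * eps ** n

def dual_phase_flow(eps, f_martensite, K_f=700.0, n_f=0.25,
                    K_m=1800.0, n_m=0.10):
    """Step 2: combine the individual ferrite and martensite flow curves
    by the rule of mixtures (iso-strain form); K/n are placeholders."""
    sigma_f = hollomon(eps, K_f, n_f)   # ferrite flow curve
    sigma_m = hollomon(eps, K_m, n_m)   # martensite flow curve
    return (1.0 - f_martensite) * sigma_f + f_martensite * sigma_m

eps = np.linspace(1e-3, 0.15, 50)               # true plastic strain
curve = dual_phase_flow(eps, f_martensite=0.3)  # composite curve, MPa
```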

  3. Effects of walking speed on the step-by-step control of step width.

    Science.gov (United States)

    Stimpson, Katy H; Heitkamp, Lauren N; Horne, Joscelyn S; Dean, Jesse C

    2018-02-08

    Young, healthy adults walking at typical preferred speeds use step-by-step adjustments of step width to appropriately redirect their center of mass motion and ensure mediolateral stability. However, it is presently unclear whether this control strategy is retained when walking at the slower speeds preferred by many clinical populations. We investigated whether the typical stabilization strategy is influenced by walking speed. Twelve young, neurologically intact participants walked on a treadmill at a range of prescribed speeds (0.2-1.2 m/s). The mediolateral stabilization strategy was quantified as the proportion of step width variance predicted by the mechanical state of the pelvis throughout a step (calculated as the R² magnitude from a multiple linear regression). Our ability to accurately predict the upcoming step width increased over the course of a step. The strength of the relationship between step width and pelvis mechanics at the start of a step was reduced at slower speeds. However, these speed-dependent differences largely disappeared by the end of a step, other than at the slowest walking speed (0.2 m/s). These results suggest that mechanics-dependent adjustments in step width are a consistent component of healthy gait across speeds and contexts. However, slower walking speeds may ease this control by allowing mediolateral repositioning of the swing leg to occur later in a step, thus encouraging slower walking among clinical populations with limited sensorimotor control. Published by Elsevier Ltd.
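
    The stabilization measure described above, the proportion of step-width variance explained by pelvis mechanics, can be illustrated with synthetic data (the pelvis-state variables and coefficients here are hypothetical, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps = 200
# hypothetical pelvis state sampled at one point in the step:
# mediolateral position and velocity (arbitrary standardized units)
pelvis = rng.standard_normal((n_steps, 2))
true_beta = np.array([0.6, 0.3])
step_width = pelvis @ true_beta + 0.2 * rng.standard_normal(n_steps)

# multiple linear regression; R^2 is the proportion of step-width
# variance predicted by pelvis mechanics (the stabilization measure)
X = np.column_stack([np.ones(n_steps), pelvis])
beta, *_ = np.linalg.lstsq(X, step_width, rcond=None)
resid = step_width - X @ beta
r2 = 1.0 - resid.var() / step_width.var()
```

Repeating this fit at successive points within the step would reproduce the paper's observation that predictability (R²) grows over the course of a step.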

  4. Active diagnosis of hybrid systems - A model predictive approach

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Ravn, Anders P.; Izadi-Zamanabadi, Roozbeh

    2009-01-01

    A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both normal and faulty model of the system; then at each time step an optimization problem is solved with the objective of maximizing the difference between the predicted normal and fault...... can be used as a test signal for sanity check at the commissioning or for detection of faults hidden by regulatory actions of the controller. The method is tested on the two tank benchmark example. ©2009 IEEE....

  5. A model for predicting lung cancer response to therapy

    International Nuclear Information System (INIS)

    Seibert, Rebecca M.; Ramsey, Chester R.; Hines, J. Wesley; Kupelian, Patrick A.; Langen, Katja M.; Meeks, Sanford L.; Scaperoth, Daniel D.

    2007-01-01

    Purpose: Volumetric computed tomography (CT) images acquired by image-guided radiation therapy (IGRT) systems can be used to measure tumor response over the course of treatment. Predictive adaptive therapy is a novel treatment technique that uses volumetric IGRT data to actively predict the future tumor response to therapy during the first few weeks of IGRT treatment. The goal of this study was to develop and test a model for predicting lung tumor response during IGRT treatment using serial megavoltage CT (MVCT). Methods and Materials: Tumor responses were measured for 20 lung cancer lesions in 17 patients that were imaged and treated with helical tomotherapy with doses ranging from 2.0 to 2.5 Gy per fraction. Five patients were treated with concurrent chemotherapy, and 1 patient was treated with neoadjuvant chemotherapy. Tumor response to treatment was retrospectively measured by contouring 480 serial MVCT images acquired before treatment. A nonparametric, memory-based locally weighted regression (LWR) model was developed for predicting tumor response using the retrospective tumor response data. This model predicts future tumor volumes and the associated confidence intervals based on limited observations during the first 2 weeks of treatment. The predictive accuracy of the model was tested using a leave-one-out cross-validation technique with the measured tumor responses. Results: The predictive algorithm was used to compare predicted versus measured tumor volume response for all 20 lesions. The average error for the predictions of the final tumor volume was 12%, with the true volumes always bounded by the 95% confidence interval. The greatest model uncertainty occurred near the middle of the course of treatment, in which the tumor response relationships were more complex, the model has less information, and the predictors were more varied.
The optimal days for measuring the tumor response on the MVCT images were on elapsed Days 1, 2, 5, 9, 11, 12, 17, and 18 during

  6. SU-F-R-46: Predicting Distant Failure in Lung SBRT Using Multi-Objective Radiomics Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z; Folkert, M; Iyengar, P; Zhang, Y; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: To predict distant failure in lung stereotactic body radiation therapy (SBRT) in early stage non-small cell lung cancer (NSCLC) by using a new multi-objective radiomics model. Methods: Currently, most available radiomics models use the overall accuracy as the objective function. However, due to data imbalance, a single objective may not reflect the performance of a predictive model. Therefore, we developed a multi-objective radiomics model which considers both sensitivity and specificity as the objective functions simultaneously. The new model is used to predict distant failure in lung SBRT using 52 patients treated at our institute. Quantitative imaging features of PET and CT as well as clinical parameters are utilized to build the predictive model. Image features include intensity features (9), textural features (12) and geometric features (8). Clinical parameters for each patient include demographic parameters (4), tumor characteristics (8), treatment fraction schemes (4) and pretreatment medicines (6). The modelling procedure consists of two steps: extracting features from segmented tumors in PET and CT; and selecting features and training model parameters based on the multiple objectives. A Support Vector Machine (SVM) is used as the predictive model, while a nondominated sorting-based multi-objective evolutionary computation algorithm II (NSGA-II) is used for solving the multi-objective optimization. Results: The accuracies for PET, clinical, CT, PET+clinical, PET+CT, CT+clinical, and PET+CT+clinical are 71.15%, 84.62%, 84.62%, 85.54%, 82.69%, 84.62%, and 86.54%, respectively. The sensitivities for the above seven combinations are 41.76%, 58.33%, 50.00%, 50.00%, 41.67%, 41.67%, 58.33%, while the specificities are 80.00%, 92.50%, 90.00%, 97.50%, 92.50%, 97.50%, 97.50%. Conclusion: A new multi-objective radiomics model for predicting distant failure in NSCLC treated with SBRT was developed.
The experimental results show that the best performance can be obtained by combining
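
    The motivation for treating sensitivity and specificity as separate objectives can be seen with a toy confusion-matrix calculation on an imbalanced cohort (the counts below are illustrative, loosely echoing the 52-patient data set, not the study's actual labels):

```python
import numpy as np

def sens_spec_acc(y_true, y_pred):
    """Sensitivity, specificity and overall accuracy from 0/1 labels."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / y_true.size

# Imbalanced cohort: 12 distant failures among 52 patients. Predicting
# "no failure" for everyone looks 77% accurate yet detects nothing,
# which is why a single accuracy objective can mislead.
y_true = np.array([1] * 12 + [0] * 40)
y_all_negative = np.zeros(52, dtype=int)
sens, spec, acc = sens_spec_acc(y_true, y_all_negative)  # 0.0, 1.0, ~0.77
```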

  7. Validations and improvements of airfoil trailing-edge noise prediction models using detailed experimental data

    DEFF Research Database (Denmark)

    Kamruzzaman, M.; Lutz, Th.; Würz, W.

    2012-01-01

    This paper describes an extensive assessment and a step by step validation of different turbulent boundary-layer trailing-edge noise prediction schemes developed within the European Union funded wind energy project UpWind. To validate prediction models, measurements of turbulent boundary-layer pr...... with measurements in the frequency region higher than 1 kHz, whereas they over-predict the sound pressure level in the low-frequency region. Copyright © 2011 John Wiley & Sons, Ltd.......-layer properties such as two-point turbulent velocity correlations, the spectra of the associated wall pressure fluctuations and the emitted trailing-edge far-field noise were performed in the laminar wind tunnel of the Institute of Aerodynamics and Gas Dynamics, University of Stuttgart. The measurements were...... carried out for a NACA 643-418 airfoil, at Re  =  2.5 ×106, angle of attack of −6° to 6°. Numerical results of different prediction schemes are extensively validated and discussed elaborately. The investigations on the TNO-Blake noise prediction model show that the numerical wall pressure fluctuation...

  8. Development of in Silico Models for Predicting P-Glycoprotein Inhibitors Based on a Two-Step Approach for Feature Selection and Its Application to Chinese Herbal Medicine Screening.

    Science.gov (United States)

    Yang, Ming; Chen, Jialei; Shi, Xiufeng; Xu, Liwen; Xi, Zhijun; You, Lisha; An, Rui; Wang, Xinhong

    2015-10-05

    P-glycoprotein (P-gp) is regarded as an important factor in determining the ADMET (absorption, distribution, metabolism, elimination, and toxicity) characteristics of drugs and drug candidates. Successful prediction of P-gp inhibitors can thus lead to an improved understanding of the underlying mechanisms of both changes in the pharmacokinetics of drugs and drug-drug interactions. Therefore, there has been considerable interest in the development of in silico modeling of P-gp inhibitors in recent years. Considering that a large number of molecular descriptors are used to characterize diverse molecular structures, efficient feature selection methods are required to extract the most informative predictors. In this work, we constructed an extensive data set of 2428 molecules that includes 1518 P-gp inhibitors and 910 P-gp noninhibitors from multiple resources. Importantly, a two-step feature selection approach based on a genetic algorithm and a greedy forward-searching algorithm was employed to select the minimum set of the most informative descriptors that contribute to the prediction of P-gp inhibitors. To determine the best machine learning algorithm, 18 classifiers coupled with the feature selection method were compared. The top three best-performing models (flexible discriminant analysis, support vector machine, and random forest) and their ensemble model, using respectively only 3, 9, 7, and 14 descriptors, achieve an overall accuracy of 83.2%-86.7% for the training set containing 1040 compounds, an overall accuracy of 82.3%-85.5% for the test set containing 1039 compounds, and a prediction accuracy of 77.4%-79.9% for the external validation set containing 349 compounds. The models were further extensively validated against the DrugBank database (1890 compounds). The proposed models are competitive with and in some cases better than other published models in terms of prediction accuracy and minimum number of descriptors.
Applicability domain then was addressed

  9. Optimal model-free prediction from multivariate time series

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V.; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.

  10. Four wind speed multi-step forecasting models using extreme learning machines and signal decomposing algorithms

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei

    2015-01-01

    Highlights: • A hybrid architecture is proposed for wind speed forecasting. • Four algorithms are used for the wind speed multi-scale decomposition. • Extreme learning machines are employed for the wind speed forecasting. • All the proposed hybrid models generate accurate results. - Abstract: Accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed to realize accurate wind speed forecasting. In this architecture, four different hybrid models are presented by combining four signal decomposing algorithms (Wavelet Decomposition, Wavelet Packet Decomposition, Empirical Mode Decomposition and Fast Ensemble Empirical Mode Decomposition) with Extreme Learning Machines. The originality of the study is to investigate how much these mainstream signal decomposing algorithms improve Extreme Learning Machines in multiple-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) the Extreme Learning Machine method is suitable for wind speed forecasting; (2) by utilizing the decomposing algorithms, all the proposed hybrid models perform better than the single Extreme Learning Machine; (3) among the decomposing algorithms in the proposed hybrid architecture, the Fast Ensemble Empirical Mode Decomposition performs best in the three-step forecasting results, while the Wavelet Packet Decomposition performs best in the one- and two-step forecasting results; at the same time, the Wavelet Packet Decomposition and the Fast Ensemble Empirical Mode Decomposition are better than the Wavelet Decomposition and the Empirical Mode Decomposition, respectively, in all the step predictions; and (4) the proposed algorithms are effective for accurate wind speed prediction.
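
    An Extreme Learning Machine itself is simple to sketch: a fixed random hidden layer followed by a closed-form least-squares readout, so no iterative training is needed. The example below is illustrative only; it omits the signal-decomposition stage and forecasts one step ahead from a lag window of a synthetic series, not wind data.

```python
import numpy as np

def elm_fit_predict(x_train, y_train, x_test, n_hidden=50, seed=0):
    """Extreme Learning Machine: random input-to-hidden weights stay
    fixed; only the hidden-to-output weights are solved in closed form."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((x_train.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    beta, *_ = np.linalg.lstsq(np.tanh(x_train @ W + b), y_train, rcond=None)
    return np.tanh(x_test @ W + b) @ beta

# one-step forecasting from a lag window (synthetic periodic series)
rng = np.random.default_rng(2)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 50) + 0.05 * rng.standard_normal(400)
lags = 8
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]
pred = elm_fit_predict(X[:300], y[:300], X[300:])  # test-window forecasts
```

In the hybrid architecture, each decomposed sub-series would be forecast by its own ELM and the sub-forecasts summed.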

  11. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from...... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally bivariate blending method was......, on the other hand, lighter than the single-step method....

  12. Evaluation of different machine learning models for predicting and mapping the susceptibility of gully erosion

    Science.gov (United States)

    Rahmati, Omid; Tahmasebipour, Nasser; Haghizadeh, Ali; Pourghasemi, Hamid Reza; Feizizadeh, Bakhtiar

    2017-12-01

    Gully erosion constitutes a serious problem for land degradation in a wide range of environments. The main objective of this research was to compare the performance of seven state-of-the-art machine learning models (SVM with four kernel types, BP-ANN, RF, and BRT) to model the occurrence of gully erosion in the Kashkan-Poldokhtar Watershed, Iran. In the first step, a gully inventory map consisting of 65 gully polygons was prepared through field surveys. Three different sample data sets (S1, S2, and S3), including both positive and negative cells (70% for training and 30% for validation), were randomly prepared to evaluate the robustness of the models. To model the gully erosion susceptibility, 12 geo-environmental factors were selected as predictors. Finally, the goodness-of-fit and prediction skill of the models were evaluated by different criteria, including efficiency percent, kappa coefficient, and the area under the ROC curves (AUC). In terms of accuracy, the RF, RBF-SVM, BRT, and P-SVM models performed excellently both in the degree of fitting and in predictive performance (AUC values well above 0.9), which resulted in accurate predictions. Therefore, these models can be used in other gully erosion studies, as they are capable of rapidly producing accurate and robust gully erosion susceptibility maps (GESMs) for decision-making and soil and water management practices. Furthermore, it was found that performance of RF and RBF-SVM for modelling gully erosion occurrence is quite stable when the learning and validation samples are changed.

  13. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Science.gov (United States)

    King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin

    2011-01-01

    Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedge's g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedge's g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.

  14. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Directory of Open Access Journals (Sweden)

    Michael King

    Full Text Available Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedge's g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedge's g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.

  15. Thermal sensation and thermophysiological responses with metabolic step-changes

    DEFF Research Database (Denmark)

    Goto, Tomonobu; Toftum, Jørn; deDear, Richard

    2006-01-01

    at sedentary activity. In a second experimental series, subjects alternated between rest and exercise as well as between exercise at different intensities at two temperature levels. Measurements comprised skin and oesophageal temperatures, heart rate and subjective responses. Thermal sensation started to rise....... The sensitivity of thermal sensation to changes in core temperature was higher for activity down-steps than for up-steps. A model was proposed that estimates transient thermal sensation after metabolic step-changes. Based on predictions by the model, weighting factors were suggested to estimate a representative...... average metabolic rate with varying activity levels, e.g. for the prediction of thermal sensation by steady-state comfort models. The activity during the most recent 5 min should be weighted 65%, during the prior 10-5 min 25% and during the prior 20-10 min 10%....
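The suggested weighting can be applied directly; a small sketch (the 65/25/10 weighting factors are from the abstract, while the met values in the example are illustrative):

```python
def representative_met(met_last5, met_5to10, met_10to20):
    """Weighted average metabolic rate for input to steady-state comfort
    models, using the weighting suggested in the study: 65% for the most
    recent 5 min, 25% for 10-5 min prior, 10% for 20-10 min prior."""
    return 0.65 * met_last5 + 0.25 * met_5to10 + 0.10 * met_10to20

# e.g. seated (1.2 met) for the last 5 min after 15 min of walking (2.0 met)
print(representative_met(1.2, 2.0, 2.0))  # 1.48 met
```

The weighting biases the effective metabolic rate toward recent activity, matching the observed lag in thermal sensation after a metabolic step-change.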

  16. The International Geomagnetic Reference Field (IGRF) generation 12: BGS candidates and final models

    OpenAIRE

    Beggan, Ciaran D.; Hamilton, Brian; Taylor, Victoria; Macmillan, Susan; Thomson, Alan

    2015-01-01

    The International Geomagnetic Reference Field (IGRF) model is a reference main field magnetic model updated on a quinquennial basis. The latest revision (generation 12) was released in January 2015. The IGRF-12 consists of a definitive model (DGRF2010) of the main field for 2010.0, a model for the field at 2015.0 (IGRF2015) and a prediction of secular variation (IGRF-12 SV) for the forthcoming five years until 2020.0. The remaining coefficients of IGRF-12 are unchanged from IGRF-11. Nin...

  17. Establishment of a 12-gene expression signature to predict colon cancer prognosis

    Directory of Open Access Journals (Sweden)

    Dalong Sun

    2018-06-01

    Full Text Available A robust and accurate gene expression signature is essential to assist oncologists to determine which subset of patients at similar Tumor-Lymph Node-Metastasis (TNM) stage has high recurrence risk and could benefit from adjuvant therapies. Here we applied a two-step supervised machine-learning method and established a 12-gene expression signature to precisely predict colon adenocarcinoma (COAD) prognosis by using COAD RNA-seq transcriptome data from The Cancer Genome Atlas (TCGA). The predictive performance of the 12-gene signature was validated with two independent gene expression microarray datasets: GSE39582 includes 566 COAD cases for the development of six molecular subtypes with distinct clinical, molecular and survival characteristics; GSE17538 is a dataset containing 232 colon cancer patients for the generation of a metastasis gene expression profile to predict recurrence and death in COAD patients. The signature could effectively separate the poor prognosis patients from the good prognosis group (disease specific survival (DSS): Kaplan-Meier (KM) Log Rank p = 0.0034; overall survival (OS): KM Log Rank p = 0.0336) in GSE17538. For patients with a proficient mismatch repair system (pMMR) in GSE39582, the signature could also effectively distinguish the high risk group from the low risk group (OS: KM Log Rank p = 0.005; relapse free survival (RFS): KM Log Rank p = 0.022). Interestingly, advanced stage patients were significantly enriched in the high 12-gene score group (Fisher's exact test p = 0.0003). After stage stratification, the signature could still distinguish poor prognosis patients in GSE17538 from good prognosis within stage II (Log Rank p = 0.01) and stage II & III (Log Rank p = 0.017) in the outcome of DFS. Within stage III or II/III pMMR patients treated with Adjuvant Chemotherapies (ACT), patients with a higher 12-gene score showed poorer prognosis (III, OS: KM Log Rank p = 0.046; III & II, OS: KM Log Rank p = 0.041). Among stage II/III pMMR patients

  18. Model Predictive Vibration Control Efficient Constrained MPC Vibration Control for Lightly Damped Mechanical Structures

    CERN Document Server

    Takács, Gergely

    2012-01-01

    Real-time model predictive controller (MPC) implementation in active vibration control (AVC) is often rendered difficult by fast sampling speeds and extensive actuator-deformation asymmetry. If the control of lightly damped mechanical structures is assumed, the region of attraction containing the set of allowable initial conditions requires a large prediction horizon, making the already computationally demanding on-line process even more complex. Model Predictive Vibration Control provides insight into the predictive control of lightly damped vibrating structures by exploring computationally efficient algorithms which are capable of low frequency vibration control with guaranteed stability and constraint feasibility. In addition to a theoretical primer on active vibration damping and model predictive control, Model Predictive Vibration Control provides a guide through the necessary steps in understanding the founding ideas of predictive control applied in AVC, such as the implementation of ...

  19. Model Predictive Control-based gait pattern generation for wearable exoskeletons.

    Science.gov (United States)

    Wang, Letian; van Asseldonk, Edwin H F; van der Kooij, Herman

    2011-01-01

    This paper introduces a new method for controlling wearable exoskeletons that does not need predefined joint trajectories. Instead, it only needs basic gait descriptors such as step length, swing duration, and walking speed. End-point Model Predictive Control (MPC) is used to generate the online joint trajectories based on these gait parameters. The real-time ability and control performance of the method during the swing phase of the gait cycle are studied in this paper. Experiments are performed by helping a human subject swing his leg with different patterns in the LOPES gait trainer. Results show that the method is able to assist subjects in making steps with different step lengths and durations without predefined joint trajectories and is fast enough for real-time implementation. Future study of the method will focus on controlling exoskeletons over the entire gait cycle. © 2011 IEEE

  20. Prediction Equations of Energy Expenditure in Chinese Youth Based on Step Frequency during Walking and Running

    Science.gov (United States)

    Sun, Bo; Liu, Yu; Li, Jing Xian; Li, Haipeng; Chen, Peijie

    2013-01-01

    Purpose: This study set out to examine the relationship between step frequency and velocity to develop a step frequency-based equation to predict Chinese youth's energy expenditure (EE) during walking and running. Method: A total of 173 boys and girls aged 11 to 18 years old participated in this study. The participants walked and ran on a…

  1. Moderate Traumatic Brain Injury: Clinical Characteristics and a Prognostic Model of 12-Month Outcome.

    Science.gov (United States)

    Einarsen, Cathrine Elisabeth; van der Naalt, Joukje; Jacobs, Bram; Follestad, Turid; Moen, Kent Gøran; Vik, Anne; Håberg, Asta Kristine; Skandsen, Toril

    2018-03-31

    Patients with moderate traumatic brain injury (TBI) often are studied together with patients with severe TBI, even though the expected outcome of the former is better. Therefore, we aimed to describe patient characteristics and 12-month outcomes, and to develop a prognostic model based on admission data, specifically for patients with moderate TBI. Patients with Glasgow Coma Scale scores of 9-13 and age ≥16 years were prospectively enrolled in 2 level I trauma centers in Europe. Glasgow Outcome Scale Extended (GOSE) score was assessed at 12 months. A prognostic model predicting moderate disability or worse (GOSE score ≤6), as opposed to a good recovery, was fitted by penalized regression. Model performance was evaluated by area under the curve of the receiver operating characteristics curves. Of the 395 enrolled patients, 81% had intracranial lesions on head computed tomography, and 71% were admitted to an intensive care unit. At 12 months, 44% were moderately disabled or worse (GOSE score ≤6), whereas 8% were severely disabled and 6% died (GOSE score ≤4). Older age, lower Glasgow Coma Scale score, no day-of-injury alcohol intoxication, presence of a subdural hematoma, occurrence of hypoxia and/or hypotension, and preinjury disability were significant predictors of GOSE score ≤6 (area under the curve = 0.80). Patients with moderate TBI exhibit characteristics of significant brain injury. Although few patients died or experienced severe disability, 44% did not experience good recovery, indicating that follow-up is needed. The model is a first step in development of prognostic models for moderate TBI that are valid across centers. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
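The paper fits its prognostic model by penalized regression on admission predictors; a rough stand-in using L2-penalized logistic regression on synthetic data (the predictors, coefficients, and penalty below are hypothetical illustrations, not the published model):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy design matrix: age, GCS, hypoxia/hypotension flag (hypothetical
# stand-ins for the paper's admission predictors)
n = 200
X = np.column_stack([
    rng.normal(50, 15, n),      # age
    rng.integers(9, 14, n),     # GCS 9-13
    rng.integers(0, 2, n),      # hypoxia and/or hypotension
])
Xs = (X - X.mean(0)) / X.std(0)           # standardise before penalising
true_w = np.array([1.0, -1.0, 0.8])       # older age / lower GCS -> worse
y = (rng.random(n) < 1 / (1 + np.exp(-(Xs @ true_w)))).astype(float)

# L2-penalised logistic regression fitted by plain gradient descent
w, b, lam = np.zeros(3), 0.0, 1.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xs @ w + b)))
    w -= 0.1 * ((Xs.T @ (p - y)) / n + lam * w / n)
    b -= 0.1 * np.mean(p - y)

p = 1 / (1 + np.exp(-(Xs @ w + b)))       # predicted P(GOSE <= 6), toy scale
print("weights:", np.round(w, 2))
```

The penalty shrinks coefficients toward zero, which is what makes such a model more likely to transfer across centers, the stated goal of the study.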

  2. Modeling the stepping mechanism in negative lightning leaders

    Science.gov (United States)

    Iudin, Dmitry; Syssoev, Artem; Davydenko, Stanislav; Rakov, Vladimir

    2017-04-01

    It is well known that negative leaders develop in a stepwise manner via a mechanism of so-called space leaders, in contrast to positive ones, which propagate continuously. Although this fact has been known for about a hundred years, until now no plausible model explaining this asymmetry had been developed. In this study we suggest a model of the stepped development of the negative lightning leader which for the first time allows numerical simulation of its evolution. The model is based on a probabilistic approach and a description of the temporal evolution of the discharge channels. One of the key features of our model is accounting for the presence of so-called space streamers/leaders, which play a fundamental role in the formation of the negative leader's steps. Their appearance becomes possible by accounting for the potential influence of the space charge injected into the discharge gap by the streamer corona. The model takes into account an asymmetry of the properties of negative and positive streamers, based on the fact, well known from numerous laboratory measurements, that positive streamers need roughly half the electric field of negative ones to appear and propagate. Extinction of the conducting channel as a possible path of its evolution is also taken into account. This allows us to describe the formation of the leader channel's sheath. To verify the morphology and characteristics of the model discharge, we use the results of high-speed video observations of natural negative stepped leaders. We conclude that the key properties of the model and natural negative leaders are very similar.

  3. The prediction of surface temperature in the new seasonal prediction system based on the MPI-ESM coupled climate model

    Science.gov (United States)

    Baehr, J.; Fröhlich, K.; Botzet, M.; Domeisen, D. I. V.; Kornblueh, L.; Notz, D.; Piontek, R.; Pohlmann, H.; Tietsche, S.; Müller, W. A.

    2015-05-01

    A seasonal forecast system is presented, based on the global coupled climate model MPI-ESM as used for CMIP5 simulations. We describe the initialisation of the system and analyse its predictive skill for surface temperature. The presented system is initialised in the atmospheric, oceanic, and sea ice component of the model from reanalysis/observations with full field nudging in all three components. For the initialisation of the ensemble, bred vectors with a vertically varying norm are implemented in the ocean component to generate initial perturbations. In a set of ensemble hindcast simulations, starting each May and November between 1982 and 2010, we analyse the predictive skill. Bias-corrected ensemble forecasts for each start date reproduce the observed surface temperature anomalies at 2-4 months lead time, particularly in the tropics. Niño3.4 sea surface temperature anomalies show a small root-mean-square error and predictive skill up to 6 months. Away from the tropics, predictive skill is mostly limited to the ocean, and to regions which are strongly influenced by ENSO teleconnections. In summary, the presented seasonal prediction system based on a coupled climate model shows predictive skill for surface temperature at seasonal time scales comparable to other seasonal prediction systems using different underlying models and initialisation strategies. As the same model underlying our seasonal prediction system—with a different initialisation—is presently also used for decadal predictions, this is an important step towards seamless seasonal-to-decadal climate predictions.

  4. Dynamic Modeling and Very Short-term Prediction of Wind Power Output Using Box-Cox Transformation

    Science.gov (United States)

    Urata, Kengo; Inoue, Masaki; Murayama, Dai; Adachi, Shuichi

    2016-09-01

    We propose a statistical modeling method for wind power output for very short-term prediction. The method uses a nonlinear model with a cascade structure composed of two parts. One is a linear dynamic part that is driven by Gaussian white noise and described by an autoregressive model. The other is a nonlinear static part that is driven by the output of the linear part. This nonlinear part is designed for output distribution matching: we shape the distribution of the model output to match that of the wind power output. The constructed model is utilized for one-step-ahead prediction of the wind power output. Furthermore, we study the relation between the prediction accuracy and the prediction horizon.
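One way to realize such a cascade (an assumption for illustration; the paper's exact static nonlinearity is not given here) is to Gaussianize the output by empirical quantile mapping, fit an autoregression on the latent series, and invert the map for the one-step-ahead forecast:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)

# synthetic stand-in for wind power: a squared AR(1) process, so the
# output marginal is clearly non-Gaussian
T = 2000
g = np.zeros(T)
for t in range(1, T):
    g[t] = 0.9 * g[t - 1] + rng.normal(0.0, 0.5)
power = g ** 2

# static nonlinear part: Gaussianise the output by quantile mapping
nd = NormalDist()
ranks = (np.argsort(np.argsort(power)) + 0.5) / T
z = np.array([nd.inv_cdf(u) for u in ranks])

# linear dynamic part: AR(1) fitted on the Gaussianised series
phi = float(z[1:] @ z[:-1] / (z[:-1] @ z[:-1]))

# one-step-ahead prediction: propagate the AR model, then map the Gaussian
# forecast back through the inverse of the static part
z_hat = phi * z[-1]
power_hat = float(np.quantile(power, nd.cdf(z_hat)))
print(f"phi={phi:.2f}, one-step forecast={power_hat:.3f}")
```

The static map guarantees the model's output distribution matches the observed one, while the AR part carries all of the temporal dependence.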

  5. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  6. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  7. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte Carlo simulations

    Science.gov (United States)

    Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios

    2010-12-01

    We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model by which step motion is driven by diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By including Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.
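A minimal Euler-Maruyama integration of a linearized, noise-driven terrace-width system (an illustrative stand-in for the paper's BCF-based SDEs; the coupling and noise constants below are arbitrary choices, not fitted values):

```python
import numpy as np

rng = np.random.default_rng(2)

# Linearised interacting-step model: each terrace width relaxes toward
# its neighbours' widths (periodic step train) while Gaussian white
# noise drives the fluctuations.
N, dt, steps = 100, 1e-3, 5000
K, D = 5.0, 0.5                     # relaxation and noise strengths (arbitrary)
w = np.ones(N)                      # start from a uniform step train

for _ in range(steps):
    lap = np.roll(w, -1) - 2 * w + np.roll(w, 1)   # nearest-neighbour coupling
    w += K * lap * dt + np.sqrt(2 * D * dt) * rng.normal(size=N)

# summary of the terrace-width distribution (TWD)
print(f"mean width {w.mean():.2f}, fluctuation {w.std():.2f}")
```

Collecting histograms of `w` over long runs gives the steady-state TWD that the paper's mean-field formulas approximate.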

  8. Screening Tool for Early Postnatal Prediction of Retinopathy of Prematurity in Preterm Newborns (STEP-ROP).

    Science.gov (United States)

    Ricard, Caroline A; Dammann, Christiane E L; Dammann, Olaf

    2017-01-01

    Retinopathy of prematurity (ROP) is a disorder of the preterm newborn characterized by neurovascular disruption in the immature retina that may cause visual impairment and blindness. To develop a clinical screening tool for early postnatal prediction of ROP in preterm newborns based on risk information available within the first 48 h of postnatal life. Using data submitted to the Vermont Oxford Network (VON) between 1995 and 2015, we created logistic regression models based on infants born <28 completed weeks gestational age. We developed a model with 60% of the data and identified birth weight, gestational age, respiratory distress syndrome, non-Hispanic ethnicity, and multiple gestation as predictors of ROP. We tested the model in the remaining 40%, performed tenfold cross-validation, and tested the score in ELGAN study data. Of the 1,052 newborns in the VON database, 627 recorded an ROP status. Forty percent had no ROP, 40% had mild ROP (stages 1 and 2), and 20% had severe ROP (stages 3-5). We created a weighted score to predict any ROP based on the multivariable regression model. A cutoff score of 5 had the best sensitivity (95%, 95% CI 93-97), while maintaining a strong positive predictive value (63%, 95% CI 57-68). When applied to the ELGAN data, sensitivity was lower (72%, 95% CI 69-75), but PPV was higher (80%, 95% CI 77-83). STEP-ROP is a promising screening tool. It is easy to calculate, does not rely on extensive postnatal data collection, and can be calculated early after birth. Early ROP screening may help physicians limit patient exposure to additional risk factors, and may be useful for risk stratification in clinical trials aimed at reducing ROP. © 2017 S. Karger AG, Basel.
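The screening logic can be sketched as follows; the point values below are hypothetical placeholders, not the published STEP-ROP weights, and only the cutoff-based sensitivity/PPV bookkeeping mirrors the abstract:

```python
# Hypothetical integer points per risk factor, summed to a score;
# screen positive when score >= 5, as in the abstract's cutoff.
def step_rop_score(bw_grams, ga_weeks, rds, multiple):
    score = 0
    score += 3 if bw_grams < 750 else 1   # birth weight (hypothetical points)
    score += 3 if ga_weeks < 26 else 1    # gestational age (hypothetical)
    score += 2 if rds else 0              # respiratory distress syndrome
    score += 1 if multiple else 0         # multiple gestation
    return score

def sensitivity_ppv(scores, truth, cutoff=5):
    tp = sum(s >= cutoff and t for s, t in zip(scores, truth))
    fn = sum(s < cutoff and t for s, t in zip(scores, truth))
    fp = sum(s >= cutoff and not t for s, t in zip(scores, truth))
    return tp / (tp + fn), tp / (tp + fp)

infants = [(700, 25, True, False), (900, 27, True, True), (1100, 27, False, False)]
truth = [True, True, False]               # any ROP, toy labels
scores = [step_rop_score(*i) for i in infants]
sens, ppv = sensitivity_ppv(scores, truth)
print(scores, sens, ppv)  # [8, 5, 2] 1.0 1.0
```

Such a score needs only data available in the first 48 h, which is exactly what makes it usable before ophthalmologic examination is possible.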

  9. Phenobarbital in intensive care unit pediatric population: predictive performances of population pharmacokinetic model.

    Science.gov (United States)

    Marsot, Amélie; Michel, Fabrice; Chasseloup, Estelle; Paut, Olivier; Guilhaumou, Romain; Blin, Olivier

    2017-10-01

    An external evaluation of the phenobarbital population pharmacokinetic model described by Marsot et al. was performed in a pediatric intensive care unit. Model evaluation is an important issue for dose adjustment. This external evaluation should allow confirming the proposed dosage adaptation and extending these recommendations to the entire intensive care pediatric population. The external evaluation of the published population pharmacokinetic model of Marsot et al. was performed on a new retrospective dataset of 35 patients hospitalized in a pediatric intensive care unit. The published population pharmacokinetic model was implemented in NONMEM 7.3. Predictive performance was assessed by quantifying the bias and inaccuracy of model predictions. Normalized prediction distribution errors (NPDE) and visual predictive checks (VPC) were also evaluated. A total of 35 infants were studied with a mean age of 33.5 weeks (range: 12 days-16 years) and a mean weight of 12.6 kg (range: 2.7-70.0 kg). The model predicted the observed phenobarbital concentrations with reasonable bias and inaccuracy. The median prediction error was 3.03% (95% CI: -8.52 to 58.12%), and the median absolute prediction error was 26.20% (95% CI: 13.07-75.59%). No trends in NPDE and VPC were observed. The model previously proposed by Marsot et al. in neonates hospitalized in the intensive care unit was externally validated for IV infusion administration. The model-based dosing regimen was extended to the whole pediatric intensive care unit to optimize treatment. Due to inter- and intra-individual variability in the pharmacokinetic model, this dosing regimen should be combined with therapeutic drug monitoring. © 2017 Société Française de Pharmacologie et de Thérapeutique.
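The bias and inaccuracy metrics quoted above (median prediction error and median absolute prediction error) are straightforward to compute; a sketch with toy concentrations:

```python
import numpy as np

def median_pe(obs, pred):
    """Bias and inaccuracy as used in external evaluation of population
    PK models: prediction error PE% = (pred - obs) / obs * 100, summarised
    by its median (bias) and the median of its absolute value (inaccuracy)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    pe = (pred - obs) / obs * 100.0
    return float(np.median(pe)), float(np.median(np.abs(pe)))

obs  = [20.0, 15.0, 30.0, 25.0]      # observed concentrations (toy values)
pred = [22.0, 14.0, 33.0, 24.0]      # model predictions (toy values)
bias, inaccuracy = median_pe(obs, pred)
print(f"median PE {bias:.1f}%, median |PE| {inaccuracy:.1f}%")
```

A small median PE with a larger median |PE|, as in the study (3.03% vs 26.20%), indicates an essentially unbiased model whose individual predictions still scatter around the observations.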

  10. The treatment of climate science in Integrated Assessment Modelling: integration of climate step function response in an energy system integrated assessment model.

    Science.gov (United States)

    Dessens, Olivier

    2016-04-01

    analysing GCMs with the step-experiments. Acknowledgments: This work is supported by the FP7 HELIX project (www.helixclimate.eu) References: Anandarajah, G., Pye, S., Usher, W., Kesicki, F., & Mcglade, C. (2011). TIAM-UCL Global model documentation. https://www.ucl.ac.uk/energy-models/models/tiam-ucl/tiam-ucl-manual Good, P., Gregory, J. M., Lowe, J. A., & Andrews, T. (2013). Abrupt CO2 experiments as tools for predicting and understanding CMIP5 representative concentration pathway projections. Climate Dynamics, 40(3-4), 1041-1053.

  11. Models of expected returns on the brazilian market: Empirical tests using predictive methodology

    Directory of Open Access Journals (Sweden)

    Adriano Mussa

    2009-01-01

    Full Text Available Predictive methodologies for testing expected returns models are widely diffused in the international academic environment. However, these methods have not been used in Brazil in a systematic way. Generally, empirical studies conducted with Brazilian stock market data concentrate only on the first step of these methodologies. The purpose of this article was to test and compare the CAPM, 3-factor and 4-factor models using a predictive methodology, considering two steps (time-series and cross-section regressions) with standard errors obtained by the techniques of Fama and MacBeth (1973). The results indicated the superiority of the 4-factor model as compared to the 3-factor model, and the superiority of the 3-factor model as compared to the CAPM, but none of the tested models was sufficient to explain Brazilian stock returns. Contrary to some empirical evidence that does not use predictive methodology, the size and momentum effects seem not to exist in the Brazilian capital markets, but there is evidence of the value effect and of the relevance of the market factor for the explanation of expected returns. These findings raise some questions, mainly due to the originality of the methodology in the local market and the fact that this subject is still incipient and polemical in the Brazilian academic environment.
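The two-step Fama and MacBeth (1973) procedure referred to above can be sketched on synthetic data (one factor and made-up returns for illustration; standard errors come from the time series of cross-sectional slopes):

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic panel: T months of returns on N portfolios driven by one factor
T, N = 240, 25
f = rng.normal(0.01, 0.05, T)          # factor realisations ("market")
beta_true = rng.uniform(0.5, 1.5, N)
R = 0.002 + np.outer(f, beta_true) + rng.normal(0, 0.02, (T, N))

# step 1: time-series regressions -> estimated beta for each portfolio
X = np.column_stack([np.ones(T), f])
betas = np.linalg.lstsq(X, R, rcond=None)[0][1]     # slope on the factor

# step 2: period-by-period cross-sectional regressions of returns on betas
Z = np.column_stack([np.ones(N), betas])
lam = np.array([np.linalg.lstsq(Z, R[t], rcond=None)[0] for t in range(T)])

# Fama-MacBeth estimates and standard errors from the slope time series
lam_mean = lam.mean(axis=0)
lam_se = lam.std(axis=0, ddof=1) / np.sqrt(T)
print(f"risk premium {lam_mean[1]:.4f}, s.e. {lam_se[1]:.4f}")
```

Extending `X` and `Z` with size, value and momentum factor loadings gives the 3-factor and 4-factor versions compared in the article.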

  12. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  13. Long-term prediction of chaotic time series with multi-step prediction horizons by a neural network with Levenberg-Marquardt learning algorithm

    International Nuclear Information System (INIS)

    Mirzaee, Hossein

    2009-01-01

    The Levenberg-Marquardt learning algorithm is applied to train a multilayer perceptron with three hidden layers, each with ten neurons, in order to carefully map the structure of a chaotic time series such as the Mackey-Glass time series. First the MLP network is trained with 1000 data points, and then it is tested with the next 500 data points. After that, the trained and tested network is applied for long-term prediction of the next 120 data points which come after the test data. The prediction proceeds such that the first inputs to the network are the last four data points of the test data; each predicted value is then shifted into the regression vector which is the input to the network, so that after the first four steps of prediction the input regression vector consists entirely of predicted values, and thereafter each predicted data point is shifted into the input vector for subsequent prediction.
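The recursive multi-step scheme described (predictions shifted back into the four-lag regression vector) can be illustrated with a linear AR(4) model standing in for the trained MLP (an assumption made to keep the sketch self-contained):

```python
import numpy as np

rng = np.random.default_rng(4)

# toy chaotic-series stand-in: a noisy sinusoid; a linear AR(4) model
# substitutes for the trained MLP in this sketch
series = np.sin(0.3 * np.arange(1500)) + 0.01 * rng.normal(size=1500)
train, test = series[:1000], series[1000:]

# fit the 4-lag one-step predictor by least squares on the training data
lags = 4
X = np.column_stack([train[i:len(train) - lags + i] for i in range(lags)])
y = train[lags:]
coef = np.linalg.lstsq(X, y, rcond=None)[0]

# long-term prediction: 120 steps, seeded with the last 4 test values;
# each prediction is shifted into the regression vector, so after four
# steps the input is entirely self-generated
window = list(test[-4:])
preds = []
for _ in range(120):
    nxt = float(np.dot(coef, window))
    preds.append(nxt)
    window = window[1:] + [nxt]

print(f"first predicted value: {preds[0]:.3f}")
```

The feedback of predictions into the input is what makes long-horizon error growth the central difficulty for chaotic series.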

  14. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction based on multiple-delayed-input single-output architecture is proposed. Genetic algorithm is used to globally optimize the number of the delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. Modification of Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of our proposed approach has been compared using real-time control and flight dynamic application data sets. Numerical results show that both the goodness-of-fit and the next-step-predictability of our proposed approach have greater accuracy in predicting software cumulative failure time compared to existing approaches

  15. A prediction model for the radiation safety management behavior of medical cyclotrons

    International Nuclear Information System (INIS)

    Jung, Ji Hye; Han, Eun Ok; Kim, Ssang Tae

    2008-01-01

    This study attempted to provide reference material for improving the level of radiation safety management behavior by deriving a prediction model of the factors that affect such behavior, because the radiation safety management of medical cyclotrons, which are used to produce radioisotopes, is an important factor protecting not only radiological operators but also ordinary users from radiation-induced diseases. The study obtained the following results through a survey, administered from January 2 to January 30, 2008, of the radiation safety managers employed at 24 authorized organizations that had already installed cyclotrons, using a structured questionnaire whose validity was ensured by a literature review, site investigation, and focused discussion by related experts. Radiation safety management was configured as seven steps: step 1 is production preparation, step 2 is RI production, step 3 is synthesis, step 4 is distribution, step 5 is quality control, step 6 is carriage container packing, and step 7 is transportation. The distribution step was recognized as the most exposed by 15 subjects (62.5%); the items 'sanction and permission related work' and 'securing installation facilities and production equipment' were regarded as the most difficult by 9 subjects (37.5%); and among the steps considered troublesome for exposure, the items 'synthesis' and 'distribution' were each cited 4 times (30.8%). In the scores for the level of radiation safety management behavior, the minimum and maximum scores were 2.42 and 4.00, respectively, and the average score was 3.46 ± 0.47 out of 4. Prosperity and well-being programs in the job showed a statistically significant correlation with radiation safety management behavior (r=0.529).
In the derivation of a prediction model based on the factors that affected radiation safety management behavior, general

  16. SU-C-BRF-07: A Pattern Fusion Algorithm for Multi-Step Ahead Prediction of Surrogate Motion

    International Nuclear Information System (INIS)

    Zawisza, I; Yan, H; Yin, F

    2014-01-01

    Purpose: To ensure that tumor motion remains within the radiation field during high-dose, high-precision radiosurgery, real-time imaging and surrogate monitoring are employed. These methods provide real-time tumor/surrogate motion but no future information. In order to anticipate future tumor/surrogate motion and track the target location precisely, an algorithm is developed and investigated for estimating surrogate motion multiple steps ahead. Methods: The study utilized a one-dimensional surrogate motion signal divided into three components: (a) the training component, containing the primary data from the first frame to the beginning of the input subsequence; (b) the input subsequence component, used as input to the prediction algorithm; (c) the output subsequence component, the remaining signal used as the known output of the prediction algorithm for validation. The prediction algorithm consists of three major steps: (1) extracting subsequences from the training component that best match the input subsequence according to a given criterion; (2) calculating weighting factors from these best-matched subsequences; (3) collecting the parts that follow the matched subsequences and combining them, with the assigned weighting factors, to form the output. The prediction algorithm was examined for several patients, and its performance was assessed from the correlation between prediction and known output. Results: Respiratory motion data were collected for 20 patients using the RPM system. The output subsequence was the last 50 samples (∼2 seconds) of a surrogate signal, and the input subsequence was the 100 frames (∼3 seconds) prior to the output subsequence. Based on the analysis of the correlation coefficient between predicted and known output subsequences, the average correlation is 0.9644±0.0394 and 0.9789±0.0239 for the equal-weighting and relative-weighting strategies, respectively. Conclusion: Preliminary results indicate that the prediction
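The three-step match/weight/fuse scheme described in this abstract can be sketched as follows. The window lengths, the number of retained matches k, and the inverse-distance ("relative") weighting are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def pattern_fusion_predict(signal, m, h, k=3):
    """Estimate the next h samples of `signal` by (1) finding the k training
    subsequences of length m that best match the last m samples, (2) weighting
    them by inverse distance, and (3) fusing their continuations."""
    query = signal[-m:]
    history = signal[:-m]                       # training component
    dists, tails = [], []
    for s in range(len(history) - m - h + 1):   # candidate subsequences
        dists.append(np.linalg.norm(history[s:s + m] - query))
        tails.append(history[s + m:s + m + h])  # the part that follows the match
    order = np.argsort(dists)[:k]               # k best matches
    w = 1.0 / (np.asarray(dists)[order] + 1e-12)   # relative weighting
    w /= w.sum()
    return np.sum([wi * tails[i] for wi, i in zip(w, order)], axis=0)

# a strictly periodic surrogate signal is predicted almost exactly
t = np.arange(400)
sig = np.sin(2 * np.pi * t / 40)
pred = pattern_fusion_predict(sig, m=100, h=50)
```

Replacing the inverse-distance weights with uniform weights over the k matches gives the equal-weighting strategy the abstract compares against.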

  17. The electrical resistivity of rough thin films: A model based on electron reflection at discrete step edges

    Science.gov (United States)

    Zhou, Tianji; Zheng, Pengyuan; Pandey, Sumeet C.; Sundararaman, Ravishankar; Gall, Daniel

    2018-04-01

    The effect of surface roughness on the electrical resistivity of metallic thin films is described by electron reflection at discrete step edges. A Landauer formalism for incoherent scattering leads to a parameter-free expression for the resistivity contribution from surface mound-valley undulations that is additive to the resistivity associated with bulk and surface scattering. In the classical limit, where the electron reflection probability matches the ratio of the step height h to the film thickness d, the additional resistivity is Δρ = √(3/2) / (g0·d) × ω/ξ, where g0 is the specific ballistic conductance and ω/ξ is the ratio of the root-mean-square surface roughness to the lateral correlation length of the surface morphology. First-principles non-equilibrium Green's function density functional theory transport simulations on 1-nm-thick Cu(001) layers validate the model, confirming that the electron reflection probability is equal to h/d and that the incoherent formalism matches the coherent scattering simulations for surface step separations ≥2 nm. Experimental confirmation uses 4.5-52-nm-thick epitaxial W(001) layers, where ω = 0.25-1.07 nm and ξ = 10.5-21.9 nm are varied by in situ annealing. Electron transport measurements at 77 and 295 K indicate a linear relationship between Δρ and ω/(ξd), confirming the model predictions. The model suggests a stronger resistivity size effect than predicted by the existing models of Fuchs [Math. Proc. Cambridge Philos. Soc. 34, 100 (1938)], Sondheimer [Adv. Phys. 1, 1 (1952)], Rossnagel and Kuan [J. Vac. Sci. Technol., B 22, 240 (2004)], and Namba [Jpn. J. Appl. Phys., Part 1 9, 1326 (1970)]. It provides a quantitative explanation for the empirical parameters in these models and may explain the recently reported deviations of experimental resistivity values from these models.
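As a numerical illustration of the closed-form expression above; the value of g0 used here is an assumed order-of-magnitude figure, not one taken from the paper, and the geometry numbers are arbitrary.

```python
import math

def delta_rho(g0, d, omega, xi):
    """Roughness contribution to resistivity: Δρ = sqrt(3/2) / (g0*d) * (ω/ξ)."""
    return math.sqrt(3 / 2) / (g0 * d) * (omega / xi)

g0 = 1.1e15                            # specific ballistic conductance, S/m^2 (assumed value)
d, omega, xi = 10e-9, 0.5e-9, 15e-9    # thickness, rms roughness, correlation length (m)
drho = delta_rho(g0, d, omega, xi)     # ~3.7e-9 Ω·m, i.e. ~0.37 μΩ·cm
```

The result is a few tenths of a μΩ·cm, comparable to the bulk resistivity of good conductors, which is why the roughness term matters at nanoscale thicknesses.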

  18. Multi-pentad prediction of precipitation variability over Southeast Asia during boreal summer using BCC_CSM1.2

    Science.gov (United States)

    Li, Chengcheng; Ren, Hong-Li; Zhou, Fang; Li, Shuanglin; Fu, Joshua-Xiouhua; Li, Guoping

    2018-06-01

    Precipitation is highly variable in space and discontinuous in time, which makes it challenging for models to predict on subseasonal scales (10-30 days). We analyze multi-pentad predictions from the Beijing Climate Center Climate System Model version 1.2 (BCC_CSM1.2), based on hindcasts from 1997 to 2014. The analysis focuses on the model's skill in predicting precipitation variability over Southeast Asia from May to September, as well as its connections with the intraseasonal oscillation (ISO). The effective precipitation prediction length, over which the skill measured by anomaly correlation exceeds 0.1, is about two pentads (10 days). To further evaluate the precipitation prediction, diagnosis of two related circulation fields shows that their prediction skills exceed that of precipitation. Moreover, the prediction skills tend to be higher when the amplitude of the ISO is large, especially for the boreal summer intraseasonal oscillation. The skills associated with initial phases 2 and 5 are higher, while that of phase 3 is relatively lower. Even so, different initial phases reflect the same spatial characteristics, with higher precipitation prediction skill over the northwest Pacific Ocean. Finally, filter analysis is applied to the prediction skills of total and subseasonal anomalies. The two are comparable during the first two lead pentads, but thereafter the skill of the total anomalies is significantly higher than that of the subseasonal anomalies. This work should help advance research in subseasonal precipitation prediction.
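The skill metric used here, the anomaly correlation, can be computed as below; this is a generic sketch on synthetic data, not the BCC verification code.

```python
import numpy as np

def anomaly_correlation(forecast, observed, climatology):
    """Correlation of forecast and observed anomalies about a common climatology."""
    fa = forecast - climatology
    oa = observed - climatology
    return float(np.sum(fa * oa) / np.sqrt(np.sum(fa ** 2) * np.sum(oa ** 2)))

rng = np.random.default_rng(0)
clim = np.full(100, 5.0)                 # pentad climatology (synthetic)
obs = clim + rng.normal(size=100)        # "observed" pentad precipitation anomalies
perfect = anomaly_correlation(obs, obs, clim)                        # exactly 1
noisy = anomaly_correlation(obs + rng.normal(size=100), obs, clim)   # < 1
```

A forecast is usually called "skillful" only above some anomaly-correlation threshold; the 0.1 cutoff quoted in the abstract is a lenient one.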

  19. Statistical modeling of tear strength for one step fixation process of reactive printing and easy care finishing

    International Nuclear Information System (INIS)

    Asim, F.; Mahmood, M.

    2017-01-01

    Statistical modeling plays a significant role in predicting the impact of potential factors affecting the one-step fixation process of reactive printing and easy-care finishing. The significant factors influencing the tear strength of cotton fabric for single-step fixation of reactive printing and easy-care finishing were investigated in this research work using experimental design techniques. The potential design factors were: concentration of reactive dye, concentration of crease resistant, fixation method, and fixation temperature. The experiments were designed using DoE (Design of Experiments) and analyzed with the Design Expert software. A detailed analysis of the significant factors and interactions, including ANOVA (Analysis of Variance), residuals, model accuracy, and the statistical model for tear strength, is presented, and the interaction and contour plots of the vital factors have been examined. The statistical analysis showed that each factor interacts with the others, and most of the investigated factors showed a curvature effect. After critical examination of the significant plots, a quadratic model of tear strength with significant terms and their interactions at alpha = 0.05 was developed. The correlation coefficient R2 of the developed model is 0.9056; this high value indicates that the developed equation will predict tear strength precisely over the range of values studied. (author)
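A minimal version of fitting such a quadratic model with interaction terms by ordinary least squares is sketched below. The data are synthetic, in coded units, and the two factors stand in for any pair of the actual design factors (e.g. dye concentration and fixation temperature).

```python
import numpy as np

rng = np.random.default_rng(42)
# two hypothetical design factors in coded units [-1, 1]
x1 = rng.uniform(-1, 1, 30)
x2 = rng.uniform(-1, 1, 30)
# synthetic "tear strength" response with interaction and curvature plus noise
y = 50 + 4 * x1 - 3 * x2 + 2 * x1 * x2 - 5 * x1 ** 2 + rng.normal(0, 0.5, 30)

# quadratic model: y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1 - resid @ resid / np.sum((y - y.mean()) ** 2)   # coefficient of determination
```

Significance testing of individual terms (the ANOVA step) would follow from the residual variance and the covariance of `beta`; that part is omitted here.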

  20. Development of a prognostic model for predicting spontaneous singleton preterm birth.

    Science.gov (United States)

    Schaaf, Jelle M; Ravelli, Anita C J; Mol, Ben Willem J; Abu-Hanna, Ameen

    2012-10-01

    To develop and validate a prognostic model for prediction of spontaneous preterm birth. Prospective cohort study using data from the nationwide perinatal registry in The Netherlands. We studied 1,524,058 singleton pregnancies between 1999 and 2007. We developed a multiple logistic regression model to estimate the risk of spontaneous preterm birth based on maternal and pregnancy characteristics, and used bootstrapping techniques to internally validate our model. Discrimination (AUC), accuracy (Brier score) and calibration (calibration graphs and the Hosmer-Lemeshow C-statistic) were used to assess the model's predictive performance. Our primary outcome measure was spontaneous preterm birth. The model included 13 variables for predicting preterm birth. The predicted probabilities ranged from 0.01 to 0.71 (IQR 0.02-0.04). The model had an area under the receiver operating characteristic curve (AUC) of 0.63 (95% CI 0.63-0.63), the Brier score was 0.04 (95% CI 0.04-0.04), and the Hosmer-Lemeshow C-statistic was significant across values of predicted probability. The positive predictive value was 26% (95% CI 20-33%) for the 0.4 probability cut-off point. The model's discrimination was fair and its calibration modest. Previous preterm birth, drug abuse and vaginal bleeding in the first half of pregnancy were the most important predictors of spontaneous preterm birth. Although not yet applicable in clinical practice, this model is a next step towards early prediction of spontaneous preterm birth that would enable caregivers to start preventive therapy in women at higher risk. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
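The two headline performance measures reported here, the AUC for discrimination and the Brier score for accuracy, can be computed directly. This is a generic sketch on synthetic risk scores, not the registry data.

```python
import numpy as np

def auc(y, p):
    """Area under the ROC curve via the Mann-Whitney rank identity
    (assumes essentially continuous scores, i.e. negligible ties)."""
    ranks = np.empty(len(p))
    ranks[np.argsort(p)] = np.arange(1, len(p) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

def brier(y, p):
    """Mean squared difference between predicted probability and outcome."""
    return float(np.mean((p - y) ** 2))

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500).astype(float)
p = np.clip(0.35 + 0.3 * y + rng.normal(0, 0.2, 500), 0.0, 1.0)  # informative risks
auc_val, brier_val = auc(y, p), brier(y, p)
```

The internal bootstrap validation amounts to recomputing these metrics on models refitted to resampled copies of the development set and comparing against the apparent performance.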

  1. Interpretable Predictive Models for Knowledge Discovery from Home-Care Electronic Health Records

    Directory of Open Access Journals (Sweden)

    Bonnie L. Westra

    2011-01-01

    The purpose of this methodological study was to compare methods of developing predictive rules that are parsimonious and clinically interpretable from electronic health record (EHR) home-visit data, contrasting logistic regression with three data-mining classification models. We address three problems commonly encountered in EHRs: the value of including clinically important variables with little variance, handling imbalanced datasets, and ease of interpretation of the resulting predictive models. Logistic regression and three classification models using Ripper, decision trees, and Support Vector Machines were applied to a case study for one outcome, improvement in oral medication management. Predictive rules for logistic regression, Ripper, and decision trees are reported, and results are compared using F-measures for the data-mining models and the area under the receiver operating characteristic curve for all models. The rules generated by the three classification models provide potentially novel insights into mining EHRs beyond those provided by standard logistic regression, and suggest steps for further study.
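On imbalanced datasets such as these, the study scores rule-based classifiers with the F-measure rather than raw accuracy. As a reminder of how it trades precision against recall (the counts below are made up):

```python
def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical rule: 30 true positives, 10 false alarms, 20 missed cases
score = f_measure(tp=30, fp=10, fn=20)   # precision 0.75, recall 0.60 -> F = 2/3
```

Because true negatives do not appear in the formula, a rule cannot inflate its F-measure by predicting the majority (negative) class, which is exactly the failure mode of accuracy on imbalanced data.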

  2. HIA, the next step: Defining models and roles

    International Nuclear Information System (INIS)

    Putters, Kim

    2005-01-01

    If HIA is to be an effective instrument for optimising health interests in the policy-making process, it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective, in which there is a systematic and orderly progression from problem formulation to solution, or a network perspective, in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem formulation to solution are not followed sequentially or in any particular order. Policy problems may be simple, with clear causal pathways and responsibilities, or complex, with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues, and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model, fitted to simple problems and a rational perspective of policy making; it involves following structured steps. The second is the rounds (Echternach) model, fitted to complex problems and a network perspective of policy making; it is dynamic and concentrates on network solutions, taking the steps in no particular order. The final model is the 'garbage can' model, fitted to contexts that combine simple and complex problems; here HIA functions as a problem solver and signpost, keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policy making.

  3. An updated PREDICT breast cancer prognostication and treatment benefit prediction model with independent validation.

    Science.gov (United States)

    Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P

    2017-05-22

    PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status, resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival time in order to take age at diagnosis into account and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials, and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear, with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) was good and comparable to that of the previous version in both model development and validation data sets. However, the calibration of v2 improved over v1 in patients diagnosed under the age
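The fractional-polynomial machinery referred to above can be illustrated for the degree-1 case: each power from the conventional set is tried and the one minimizing the residual sum of squares is kept. This is a sketch of the idea on synthetic data, not the PREDICT v2 fitting code.

```python
import numpy as np

POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)   # conventional FP1 power set

def fp_transform(x, p):
    """Fractional-polynomial transform; power 0 is defined as log(x)."""
    return np.log(x) if p == 0 else x ** p

def best_fp1(x, y):
    """Choose the degree-1 fractional-polynomial power minimizing RSS."""
    best = None
    for p in POWERS:
        X = np.column_stack([np.ones_like(x), fp_transform(x, p)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        if best is None or rss < best[0]:
            best = (rss, p, beta)
    return best[1], best[2]

# synthetic covariate with a known square-root relationship to the outcome
rng = np.random.default_rng(5)
x = rng.uniform(0.5, 5.0, 200)
y = 1.0 + 2.0 * np.sqrt(x) + rng.normal(0, 0.01, 200)
p_best, beta = best_fp1(x, y)   # should recover the power 0.5
```

Degree-2 fractional polynomials extend this by searching over pairs of powers, which is how non-monotone effects such as the age effect described above can be captured.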

  4. Accounting for differences in dieting status: steps in the refinement of a model.

    Science.gov (United States)

    Huon, G; Hayne, A; Gunewardene, A; Strong, K; Lunn, N; Piira, T; Lim, J

    1999-12-01

    The overriding objective of this paper is to outline the steps involved in refining a structural model to explain differences in dieting status. Cross-sectional data (representing the responses of 1,644 teenage girls) derive from the preliminary testing in a 3-year longitudinal study. A battery of measures assessed social influence, vulnerability (to conformity) disposition, protective (social coping) skills, and aspects of positive familial context as core components in a model proposed to account for the initiation of dieting. Path analyses were used to establish the predictive ability of those separate components and their interrelationships in accounting for differences in dieting status. Several components of the model were found to be important predictors of dieting status. The model incorporates significant direct, indirect (or mediated), and moderating relationships. Taking all variables into account, the strongest prediction of dieting status was from peer competitiveness, using a new scale developed specifically for this study. Systematic analyses are crucial for the refinement of models to be used in large-scale multivariate studies. In the short term, the model investigated in this study has been shown to be useful in accounting for cross-sectional differences in dieting status. The refined model will be most powerfully employed in large-scale time-extended studies of the initiation of dieting to lose weight. Copyright 1999 by John Wiley & Sons, Inc.

  5. Multi-step polynomial regression method to model and forecast malaria incidence.

    Directory of Open Access Journals (Sweden)

    Chandrajit Chatterjee

    Malaria is one of the most severe problems faced by the world even today. Understanding causative factors such as age, sex, social factors, and environmental variability, as well as the underlying transmission dynamics of the disease, is important for epidemiological research on malaria and its eradication. Thus, the development of a suitable modeling approach and methodology, based on the available data on the incidence of the disease and other related factors, is of utmost importance. In this study, we developed a simple non-linear regression methodology for modeling and forecasting malaria incidence in Chennai city, India, and predicted future disease incidence with a high confidence level. We considered three types of data to develop the regression methodology: a longer time series of Slide Positivity Rates (SPR) of malaria; a smaller time series of deaths due to Plasmodium vivax over one year; and spatial data (zonal distribution of P. vivax deaths) for the city, along with climatic factors, population, and previous incidence of the disease. We performed variable selection by a simple correlation study, identified initial relationships between variables through non-linear curve fitting, and used multi-step methods for the induction of variables in the non-linear regression analysis, along with Gauss-Markov models and ANOVA for testing the predictions, validity, and construction of confidence intervals. The results demonstrate the applicability of our method to different types of data and the autoregressive nature of the forecasting, and show high prediction power for both SPR and P. vivax deaths, where the one-lag SPR values play an influential role and prove useful for better prediction. Different climatic factors are identified as playing a crucial role in shaping the disease curve. Further, disease incidence at the zonal level and the effect of causative factors on different zonal clusters indicate the pattern of malaria prevalence in the city.
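The autoregressive role of the one-lag SPR values can be sketched as a polynomial regression on the lagged value with the forecast fed back multi-step. The functional form (quadratic in the lag) and the series below are invented for illustration, not taken from the paper.

```python
import numpy as np

# synthetic monthly SPR-like series: seasonal cycle plus noise
rng = np.random.default_rng(7)
t = np.arange(120)
spr = 5.0 + 2.0 * np.sin(2 * np.pi * t / 12) + 0.3 * rng.normal(size=120)

# regress SPR on an intercept, its one-lag value, and the lag squared
y, x = spr[1:], spr[:-1]
X = np.column_stack([np.ones_like(x), x, x ** 2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# autoregressive multi-step forecast: feed each prediction back as the next lag
horizon, last, forecast = 6, spr[-1], []
for _ in range(horizon):
    last = float(beta @ np.array([1.0, last, last ** 2]))
    forecast.append(last)
```

In the paper's setting, climatic covariates would enter the design matrix alongside the lagged SPR terms; here only the autoregressive part is shown.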

  6. Risk prediction model for knee pain in the Nottingham community: a Bayesian modelling approach.

    Science.gov (United States)

    Fernandes, G S; Bhattacharya, A; McWilliams, D F; Ingham, S L; Doherty, M; Zhang, W

    2017-03-20

    Twenty-five percent of the British population over the age of 50 years experiences knee pain. Knee pain can limit physical ability and cause distress, and it bears significant socioeconomic costs. The objectives of this study were to develop the first risk prediction model for incident knee pain in the Nottingham community, validate it internally within the Nottingham cohort, and validate it externally within the Osteoarthritis Initiative (OAI) cohort. A total of 1822 participants from the Nottingham community who were at risk for knee pain were followed for 12 years. Two-thirds of this cohort (n = 1203) were used to develop the risk prediction model, and one-third (n = 619) were used to validate the model. Incident knee pain was defined as pain on most days for at least 1 month in the past 12 months. Predictors were age, sex, body mass index, pain elsewhere, prior knee injury and knee alignment. A Bayesian logistic regression model was used to determine the probability of an OR >1. The Hosmer-Lemeshow χ² statistic (HLS) was used for calibration, and ROC curve analysis was used for discrimination. The OAI cohort from the United States was also used to examine the performance of the model. A risk prediction model for knee pain incidence was developed using a Bayesian approach. The model had good calibration, with an HLS of 7.17 (p = 0.52), and moderate discriminative ability (ROC 0.70) in the community. Individual scenarios are given using the model. However, the model had poor calibration (HLS 5866.28) in the external cohort. We developed a risk prediction model for knee pain, regardless of underlying structural changes of knee osteoarthritis, in the community using a Bayesian modelling approach. The model appears to work well in a community-based population but not in individuals with a higher risk for knee osteoarthritis, and it may provide a convenient tool for use in primary care to predict the risk of knee pain in the general population.
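The Hosmer-Lemeshow statistic used for calibration above compares observed with expected event counts within predicted-risk deciles. Below is a generic sketch of the computation, not the authors' Bayesian pipeline.

```python
import numpy as np

def hosmer_lemeshow(y, p, groups=10):
    """HL chi-square: sort by predicted risk, split into decile groups, and
    compare observed with expected event counts in each group."""
    order = np.argsort(p)
    chi2 = 0.0
    for gy, gp in zip(np.array_split(y[order], groups),
                      np.array_split(p[order], groups)):
        n, obs, exp = len(gy), gy.sum(), gp.sum()
        pbar = exp / n
        chi2 += (obs - exp) ** 2 / (n * pbar * (1.0 - pbar))
    return float(chi2)

# a perfectly calibrated model by construction: outcomes drawn from p itself
rng = np.random.default_rng(3)
p = rng.uniform(0.05, 0.95, 2000)
y = (rng.uniform(size=2000) < p).astype(float)
hl = hosmer_lemeshow(y, p)
```

For a calibrated model the statistic stays near its chi-square expectation (around `groups - 2`); values in the thousands, as reported for the external cohort, indicate gross miscalibration.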

  7. Degradation Prediction Model Based on a Neural Network with Dynamic Windows

    Science.gov (United States)

    Zhang, Xinghui; Xiao, Lei; Kang, Jianshe

    2015-01-01

    Tracking the degradation of mechanical components is critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods for cases where sufficient run-to-failure condition monitoring data are available have been thoroughly researched, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., from normal operation to failure; only a certain number of condition indicators over a certain period can be used to estimate RUL. In addition, some existing prediction methods suffer from poor extrapolability, which blocks RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade prediction. To resolve these dilemmas, this paper proposes an RUL prediction model based on a neural network with dynamic windows. The model mainly consists of three steps: window size determination by increasing rate, change point detection, and rolling prediction. The proposed method has two dominant strengths: it does not need to assume that the degradation trajectory follows a certain distribution, and it can adapt to variation in the degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated with real field data and simulation data. PMID:25806873
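A toy version of the windowing idea, where a change point detected from the indicator's increasing rate defines a dynamic window feeding the rolling prediction; the threshold, signal, and the linear extrapolation used as a stand-in for the neural network are all invented for illustration.

```python
import numpy as np

def detect_change_point(x, rate_threshold):
    """First index whose preceding increment exceeds the threshold
    (a simple increasing-rate criterion for degradation onset)."""
    idx = np.flatnonzero(np.diff(x) > rate_threshold)
    return int(idx[0]) + 1 if idx.size else None

# degradation indicator: flat health, then accelerating growth after t = 60
t = np.arange(100)
x = 1.0 + 0.05 * np.maximum(t - 60.0, 0.0) ** 1.5

cp = detect_change_point(x, rate_threshold=0.03)   # onset detected at t = 61
window = x[cp:]                                    # dynamic window: post-change data only
coef = np.polyfit(np.arange(len(window)), window, 1)
next_val = np.polyval(coef, len(window))           # one-step rolling forecast
```

Restricting the fit to the post-change window is what lets the forecast follow the accelerating trend instead of being dragged back by the long healthy plateau.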

  8. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Science.gov (United States)

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

    Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable was models. Only 2 models presented recommended regression equations. There was significant heterogeneity in discriminative ability of models with respect to age (P prediction models that had sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Fundamental modelling of particle strengthened 9-12% Cr steels

    Energy Technology Data Exchange (ETDEWEB)

    Magnusson, Hans; Sandstroem, Rolf [Royal Inst. of Tech., Stockholm (Sweden). Dept. of Materials Science and Engineering; Royal Inst. of Tech., Stockholm (Sweden). Brinell Centre

    2010-07-01

    The creep strength of particle strengthened 9-12% Cr steels can be predicted by fundamental modelling, in which the creep strength is evaluated from the state of the microstructure during creep. Particle hardening at high temperatures can be predicted by taking dislocation climb across particles into account. Work hardening is calculated from immobile dislocations in subgrain interiors and at boundaries using the composite theory. Subgrain coarsening lowers the influence of the mechanically hard boundaries. Recovery in dislocation density is predicted through static recovery by climb and dynamic recovery by locking and dipole formation. Solid solution hardening is needed in order to explain the difference in creep strength between different 9-12% Cr steels: the accumulation of large atoms such as Mo and W slows down the dislocation climb velocity, and thereby the microstructure recovery rate. The 100,000 h rupture strength is predicted for X20, P91 and P92 steels without any use of fitting parameters. The creep strength of P91 steel with different microstructures, due to Al additions, Z-phase transformation and heat-affected material, is also presented. (orig.)

  10. Temporal step fluctuations on a conductor surface: electromigration force, surface resistivity and low-frequency noise

    International Nuclear Information System (INIS)

    Williams, E D; Bondarchuk, O; Tao, C G; Yan, W; Cullen, W G; Rous, P J; Bole, T

    2007-01-01

    Scattering of charge carriers from surface structures will become an increasing factor in the resistivity as the structure decreases in size to the nanoscale. The effects of scattering at the most basic surface defect, a kink in a step edge, are here analyzed using the continuum step model. Using a Langevin analysis, it has been shown that the electromigration force on the atoms at the step edge causes changes in the temporal evolution of the step edge. For an electromigration force acting perpendicular to the average step edge and mass transport dominated by step-edge diffusion, significant deviations from the usual t^(1/4) scaling of the displacement correlation function occur, dependent on a critical time τ and the direction of the force relative to the step edge (i.e. uphill or downhill). Experimental observations of step fluctuations on Ag(111) show the predicted changes among step fluctuations without current, and with current in the uphill and downhill directions, for a current density of order 10^5 A cm^(-2). The results yield the magnitude of the electromigration force acting on kinked sites at the step edge. This in turn yields the contribution of the fluctuating steps to the surface resistivity, which exceeds 1% of the bulk resistivity as wire diameters decrease below tens of nanometres. The temporal fluctuations of kink density can thus also be related to resistivity noise. Relating the known fluctuation spectrum of the step displacements to fluctuations in their lengths, the corresponding resistivity noise is predicted to show spectral signatures of ∼f^(-1/2) for step fluctuations governed by random attachment/detachment, and ∼f^(-3/4) for step fluctuations governed by step-edge diffusion.

  11. Application of single-step genomic best linear unbiased prediction with a multiple-lactation random regression test-day model for Japanese Holsteins.

    Science.gov (United States)

    Baba, Toshimi; Gotoh, Yusaku; Yamaguchi, Satoshi; Nakagawa, Satoshi; Abe, Hayato; Masuda, Yutaka; Kawahara, Takayoshi

    2017-08-01

    This study aimed to evaluate the validation reliability of single-step genomic best linear unbiased prediction (ssGBLUP) with a multiple-lactation random regression test-day model and to investigate the effect of adding genotyped cows on that reliability. Two data sets of test-day records from the first three lactations were used: full data from February 1975 to December 2015 (60 850 534 records from 2 853 810 cows) and reduced data cut off in 2011 (53 091 066 records from 2 502 307 cows). We used marker genotypes of 4480 bulls and 608 cows. Genomic enhanced breeding values (GEBV) of 305-day milk yield in all the lactations were estimated for at least 535 young bulls using two marker data sets: bull genotypes only, and both bull and cow genotypes. The realized reliability (R²) from linear regression analysis was used as an indicator of validation reliability. Using only genotyped bulls, R² ranged from 0.41 to 0.46 and was always higher than that of parent averages. Very similar R² values were observed when genotyped cows were added. An application of ssGBLUP to a multiple-lactation random regression model is feasible, and adding a limited number of genotyped cows has no significant effect on the reliability of GEBV for genotyped bulls. © 2016 Japanese Society of Animal Science.
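The defining step of ssGBLUP is replacing the pedigree relationship matrix A with a blended matrix H whose inverse adds genomic information on the genotyped block only. A small numerical sketch of that construction follows; the identity pedigree and toy genomic matrix are chosen purely for illustration.

```python
import numpy as np

def h_inverse(A_inv, A22_inv, G_inv, geno_idx):
    """H^{-1} = A^{-1} + [[0, 0], [0, G^{-1} - A22^{-1}]],
    where the correction applies only to the genotyped animals."""
    H_inv = A_inv.copy()
    H_inv[np.ix_(geno_idx, geno_idx)] += G_inv - A22_inv
    return H_inv

# 4 animals, the last two genotyped; pedigree taken as identity for simplicity
A_inv = np.eye(4)
A22_inv = np.eye(2)                         # inverse of A restricted to genotyped animals
G = np.array([[1.0, 0.3],
              [0.3, 1.0]])                  # toy genomic relationship matrix
H_inv = h_inverse(A_inv, A22_inv, np.linalg.inv(G), [2, 3])
```

In practice G is usually blended with a small fraction of A22 before inversion to guarantee invertibility; if G happens to equal A22, H collapses back to A and ssGBLUP reduces to ordinary pedigree BLUP.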

  12. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  13. Current and Future Tests of the Algebraic Cluster Model of 12C

    Science.gov (United States)

    Gai, Moshe

    2017-07-01

    A new theoretical approach to clustering in the frame of the Algebraic Cluster Model (ACM) has been developed. It predicts, in 12C, a rotation-vibration structure with rotational bands of an oblate equilateral triangular symmetric spinning top with a D_3h symmetry, characterized by the sequence of states: 0+, 2+, 3-, 4±, 5-, with degenerate 4+ and 4- (parity doublet) states. Our newly measured 2_2^+ state in 12C allows the first study of rotation-vibration structure in 12C. The newly measured 5- and 4- states fit very well the predicted ground state rotational band structure with the predicted sequence of states: 0+, 2+, 3-, 4±, 5-, with almost degenerate 4+ and 4- (parity doublet) states. Such a D_3h symmetry is characteristic of triatomic molecules, but it is observed in the ground state rotational band of 12C for the first time in a nucleus. We discuss ACM predictions of other rotation-vibration bands in 12C, such as the (0+) Hoyle band and the (1-) bending mode, with predictions of "missing" 3- and 4- states that may shed new light on clustering in 12C and light nuclei. In particular, the observation (or non-observation) of the predicted "missing" states in the Hoyle band will allow us to deduce the geometrical arrangement of the three alpha particles composing the Hoyle state at 7.6542 MeV in 12C. We discuss proposed research programs at the Darmstadt S-DALINAC and at the newly constructed ELI-NP facility near Bucharest to test the predictions of the ACM in isotopes of carbon.

  14. Factors associated with attendance in 12-step groups (Alcoholics Anonymous/Narcotics Anonymous) among adults with alcohol problems living with HIV/AIDS.

    Science.gov (United States)

    Orwat, John; Samet, Jeffrey H; Tompkins, Christopher P; Cheng, Debbie M; Dentato, Michael P; Saitz, Richard

    2011-01-15

    Despite the value of 12-step meetings, few studies have examined factors associated with attendance among those living with HIV/AIDS, such as the impact of HIV disease severity and demographics. This study examines the effects of predisposing characteristics, enabling resources and need on attendance at Alcoholics Anonymous (AA) and Narcotics Anonymous (NA) meetings among those living with HIV/AIDS and alcohol problems. Secondary analysis of prospective data from the HIV-Longitudinal Interrelationships of Viruses and Ethanol study, a cohort of 400 adults living with HIV/AIDS and alcohol problems. Factors associated with AA/NA attendance were identified using the Andersen model for vulnerable populations. Generalized estimating equation logistic regression models were fit to identify factors associated with self-reported AA/NA attendance. At study entry, subjects were 75% male; 12% met diagnostic criteria for alcohol dependence, 43% had drug dependence and 56% reported attending one or more AA/NA meetings (past 6 months). In the adjusted model, female gender was negatively associated with attendance, as were social support systems that use alcohol and/or drugs, while presence of HCV antibody, a drug dependence diagnosis, and homelessness were associated with higher odds of attendance. Non-substance-abuse-related barriers to AA/NA group attendance exist for those living with HIV/AIDS, including female gender and social support systems that use alcohol and/or drugs. Positive associations with homelessness, HCV infection and current drug dependence were identified. These findings have implications for policy makers and treatment professionals who wish to encourage attendance at 12-step meetings for those living with HIV/AIDS and alcohol or other substance use problems. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  15. Predicting Defects Using Information Intelligence Process Models in the Software Technology Project.

    Science.gov (United States)

    Selvaraj, Manjula Gandhi; Jayabal, Devi Shree; Srinivasan, Thenmozhi; Balasubramanie, Palanisamy

    2015-01-01

    A key differentiator in a competitive marketplace is customer satisfaction. As per a 2012 Gartner report, only 75%-80% of IT projects are successful. Customer satisfaction should be considered a part of business strategy. The associated project parameters should be proactively managed, and the project outcome needs to be predicted by a technical manager. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. The focus should instead be on proactive management and shifting left in the software life-cycle engineering model: identify problems upfront in the project cycle rather than waiting for lessons to be learned and taking reactive steps. This paper shows the practical applicability of predictive models and illustrates their use in a project to predict system testing defects, thus helping to reduce residual defects.

  16. Generating linear regression model to predict motor functions by use of laser range finder during TUG.

    Science.gov (United States)

    Adachi, Daiki; Nishiguchi, Shu; Fukutani, Naoto; Hotta, Takayuki; Tashiro, Yuto; Morino, Saori; Shirooka, Hidehiko; Nozaki, Yuma; Hirata, Hinako; Yamaguchi, Moe; Yorozu, Ayanori; Takahashi, Masaki; Aoyama, Tomoki

    2017-05-01

    The purpose of this study was to investigate which spatial and temporal parameters of the Timed Up and Go (TUG) test are associated with motor function in elderly individuals. This study included 99 community-dwelling women aged 72.9 ± 6.3 years. Step length, step width, single support time, variability of the aforementioned parameters, gait velocity, cadence, reaction time from starting signal to first step, and minimum distance between the foot and a marker placed 3 m in front of the chair were measured using our analysis system. The 10-m walk test, five times sit-to-stand (FTSTS) test, and one-leg standing (OLS) test were used to assess motor function. Stepwise multivariate linear regression analysis was used to determine which TUG test parameters were associated with each motor function test. Finally, we calculated a predictive model for each motor function test using each regression coefficient. In stepwise linear regression analysis, step length and cadence were significantly associated with the 10-m walk test, FTSTS test, and OLS test. Reaction time was associated with the FTSTS test, and step width was associated with the OLS test. Each predictive model showed a strong correlation with the 10-m walk test and OLS test. Moreover, the TUG test time, regarded as a measure of lower extremity function and mobility, had strong predictive ability for each motor function test. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.

  17. A prediction model for the radiation safety management behavior of medical cyclotrons

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Ji Hye; Han, Eun Ok [Daegu Health College, Daegu (Korea, Republic of); Kim, Ssang Tae [CareCamp Inc., Seoul (Korea, Republic of)

    2008-06-15

    This study attempted to provide reference materials for improving the level of radiation safety management behavior by deriving a prediction model of the factors that affect such behavior, because the radiation safety management of medical cyclotrons, which are used to produce radioisotopes, is an important factor in protecting not only radiological operators but also ordinary users from radiation-caused diseases. The study obtained the following results through an investigation conducted from January 2 to January 30, 2008 among the radiation safety managers employed in 24 authorized organizations that had already installed cyclotrons, using a questionnaire whose validity was ensured by a literature review, site investigation, and focus discussions with related experts. Radiation safety management was configured in seven steps: step 1, production preparation; step 2, RI production; step 3, synthesis; step 4, distribution; step 5, quality control; step 6, carriage container packing; and step 7, transportation. The distribution step was recognized as involving the most exposure by 15 subjects (62.5%); the items 'sanction and permission related works' and 'guarantee of installation facilities and production equipment' were rated the most difficult by 9 subjects (37.5%); and among the trouble steps regarding exposure, the 'synthesis' and 'distribution' steps were each cited 4 times (30.8%). In the scores of radiation safety management behavior level, the minimum and maximum scores were 2.42 and 4.00, respectively, and the average score was 3.46 ± 0.47 out of 4. Prosperity and well-being programs showed a statistically significant correlation with radiation safety management behavior and job (r=0.529). In the derivation of a prediction model based on the factors that affected the behavior in

  18. Impact on DNB predictions of mixing models implemented into the three-dimensional thermal-hydraulic code Thyc; Impact de modeles de melange implantes dans le code de thermohydraulique Thyc sur les predictions de flux critique

    Energy Technology Data Exchange (ETDEWEB)

    Banner, D

    1993-10-01

    The objective of this paper is to point out how departure from nucleate boiling (DNB) predictions can be improved by the THYC software. The EPRI/Columbia University E161 data base has been used for this study. In a first step, three thermal-hydraulic mixing models were implemented into the code in order to obtain more accurate calculations of local void fractions at the DNB location. The three investigated models (A, B and C) are presented in order of increasing complexity. Model A assumes a constant turbulent viscosity throughout the flow. In model B, a k-L turbulence transport equation has been implemented to model the generation and decay of turbulence in the DNB test section. Model C additionally represents the oriented transverse flows due to mixing vanes, on top of the k-L equation. A parametric study carried out with the three mixing models identifies the most significant parameters. The occurrence of departure from nucleate boiling is then predicted by using a DNB correlation. Similar results are obtained as long as the DNB correlation is kept unchanged. In a second step, an attempt was made to substitute the correlations with another statistical approach (a pseudo-cubic thin-plate spline method). It is shown that standard deviations of P/M (predicted to measured) ratios can be greatly improved by advanced statistics. (author). 7 figs., 2 tabs., 9 refs.

  19. STEP - Product Model Data Sharing and Exchange

    DEFF Research Database (Denmark)

    Kroszynski, Uri

    1998-01-01

    During the last fifteen years, a very large effort to standardize the product models employed in product design, manufacturing and other life-cycle phases has been undertaken. This effort has the acronym STEP, and resulted in the International Standard ISO-10303 "Industrial Automation Systems - Product Data Representation and Exchange", featuring at present some 30 released parts, and growing continuously. Many of the parts are Application Protocols (AP). This article presents an overview of STEP, based upon years of involvement in three ESPRIT projects, which contributed to the development...

  20. Electric-current-induced step bunching on Si(111)

    International Nuclear Information System (INIS)

    Homma, Yoshikazu; Aizawa, Noriyuki

    2000-01-01

    We experimentally investigated step bunching induced by direct current on vicinal Si(111) '1x1' surfaces using scanning electron microscopy and atomic force microscopy. The scaling relation between the average step spacing l_b and the number of steps N in a bunch, l_b ∼ N^(-α), was determined for four step-bunching temperature regimes above the 7x7-'1x1' transition temperature. The step-bunching rate and scaling exponent differ between neighboring step-bunching regimes. The exponent α is 0.7 for the two regimes where the step-down current induces step bunching (860-960 and 1210-1300 deg. C), and 0.6 for the two regimes where the step-up current induces step bunching (1060-1190 and >1320 deg. C). The number of single steps on terraces also differs in each of the four temperature regimes. For temperatures higher than 1280 deg. C, the prefactor of the scaling relation increases, indicating an increase in step-step repulsion. The scaling exponents obtained agree reasonably well with those predicted by theoretical models. However, they give unrealistic values for the effective charges of adatoms for step-up-current-induced step bunching when the 'transparent' step model is used.
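
    The scaling exponent α in a relation of this form can be estimated from measured (N, l_b) pairs by a least-squares fit in log-log space. A minimal sketch on synthetic, noise-free data (the constants alpha_true and c are illustrative, not values from the experiment):

```python
import math

# Synthetic bunch data obeying l_b = c * N**(-alpha); alpha = 0.7 and c = 5.0
# are illustrative constants, not measured values.
alpha_true, c = 0.7, 5.0
N_vals = [2, 4, 8, 16, 32, 64]
l_b = [c * n ** (-alpha_true) for n in N_vals]

# The least-squares slope of log(l_b) versus log(N) equals -alpha
x = [math.log(n) for n in N_vals]
y = [math.log(l) for l in l_b]
m = len(x)
xm, ym = sum(x) / m, sum(y) / m
slope = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / sum((xi - xm) ** 2 for xi in x)
alpha_est = -slope
```

    With experimental scatter, the same fit returns the slope plus a standard error that bounds the uncertainty on α.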

  1. Enriching step-based product information models to support product life-cycle activities

    Science.gov (United States)

    Sarigecili, Mehmet Ilteris

    The representation and management of product information in its life-cycle requires standardized data exchange protocols. The Standard for Exchange of Product Model Data (STEP) is such a standard, and it has been used widely by industry. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because they are too big and disorganized. Data exchange specifications (DEXs) and templates provide re-organized information models required in the data exchange of specific activities for various businesses. DEXs show that it is possible to organize STEP-based product models to support different engineering activities at various stages of the product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires a fast, unambiguous and complete product information exchange between the members of a supply chain. The tolerance analysis activity, on the other hand, is used to verify the functional requirements of an assembly considering the worst-case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly. Hence, it is difficult to interpret the semantics of data for different product life-cycle phases for various application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. In this thesis, we present how to interpret the GD&T specifications in STEP for tolerance analysis by utilizing OntoSTEP.

  2. Continuous Automated Model EvaluatiOn (CAMEO) complementing the critical assessment of structure prediction in CASP12.

    Science.gov (United States)

    Haas, Jürgen; Barbato, Alessandro; Behringer, Dario; Studer, Gabriel; Roth, Steven; Bertoni, Martino; Mostaguir, Khaled; Gumienny, Rafal; Schwede, Torsten

    2018-03-01

    Every second year, the community experiment "Critical Assessment of Techniques for Structure Prediction" (CASP) conducts an independent blind assessment of structure prediction methods, providing a framework for comparing the performance of different approaches and discussing the latest developments in the field. Yet, developers of automated computational modeling methods clearly benefit from more frequent evaluations based on larger sets of data. The "Continuous Automated Model EvaluatiOn (CAMEO)" platform complements the CASP experiment by conducting fully automated blind prediction assessments based on the weekly pre-release of sequences of those structures that are going to be published in the next release of the Protein Data Bank (PDB). CAMEO publishes weekly benchmarking results based on models collected during a 4-day prediction window, on average assessing ca. 100 targets during a time frame of 5 weeks. CAMEO benchmarking data are generated consistently for all participating methods at the same point in time, enabling developers to benchmark and cross-validate their method's performance and directly refer to the benchmarking results in publications. In order to facilitate server development and promote shorter release cycles, CAMEO sends weekly emails with submission statistics and low-performance warnings. Many participants of CASP have successfully employed CAMEO when preparing their methods for upcoming community experiments. CAMEO offers a variety of scores to allow benchmarking diverse aspects of structure prediction methods. By introducing new scoring schemes, CAMEO facilitates new development in areas of active research, for example, modeling quaternary structure, complexes, or ligand binding sites. © 2017 Wiley Periodicals, Inc.

  3. Combining multiple models to generate consensus: Application to radiation-induced pneumonitis prediction

    Energy Technology Data Exchange (ETDEWEB)

    Das, Shiva K.; Chen Shifeng; Deasy, Joseph O.; Zhou Sumin; Yin Fangfang; Marks, Lawrence B. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 (United States); Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Radiation Oncology, University of North Carolina School of Medicine, Chapel Hill, North Carolina 27599 (United States)

    2008-11-15

    The fusion of predictions from disparate models has been used in several fields to obtain a more realistic and robust estimate of the "ground truth" by allowing the models to reinforce each other when consensus exists or, conversely, negate each other when there is no consensus. Fusion has been shown to be most effective when the models have some complementary strengths arising from different approaches. In this work, we fuse the results from four common but methodologically different nonlinear multivariate models (Decision Trees, Neural Networks, Support Vector Machines, Self-Organizing Maps) that were trained to predict radiation-induced pneumonitis risk on a database of 219 lung cancer patients treated with radiotherapy (34 with Grade 2+ postradiotherapy pneumonitis). Each model independently incorporated a small number of features from the available set of dose and nondose patient variables to predict pneumonitis; no two models had all features in common. Fusion was achieved by simple averaging of the predictions for each patient from all four models. Since a model's prediction for a patient can be dependent on the patient training set used to build the model, the average of several different predictions from each model was used in the fusion (predictions were made by repeatedly testing each patient with a model built from different cross-validation training sets that excluded the patient being tested). The area under the receiver operating characteristics curve for the fused cross-validated results was 0.79, with lower variance than the individual component models. From the fusion, five features were extracted as the consensus among all four models in predicting radiation pneumonitis. Arranged in order of importance, the features are (1) chemotherapy; (2) equivalent uniform dose (EUD) for exponent a=1.2 to 3; (3) EUD for a=0.5 to 1.2, lung volume receiving >20-30 Gy; (4) female sex; and (5) squamous cell histology. To facilitate

  4. Dynamic Predictive Model for Growth of Bacillus cereus from Spores in Cooked Beans.

    Science.gov (United States)

    Juneja, Vijay K; Mishra, Abhinav; Pradhan, Abani K

    2018-02-01

    Kinetic growth data for Bacillus cereus grown from spores were collected in cooked beans under several isothermal conditions (10 to 49°C). Samples were inoculated with approximately 2 log CFU/g heat-shocked (80°C for 10 min) spores and stored at isothermal temperatures. B. cereus populations were determined at appropriate intervals by plating on mannitol-egg yolk-polymyxin agar and incubating at 30°C for 24 h. Data were fitted into Baranyi, Huang, modified Gompertz, and three-phase linear primary growth models. All four models were fitted to the experimental growth data collected at 13 to 46°C. Performances of these models were evaluated based on accuracy and bias factors, the coefficient of determination (R^2), and the root mean square error. Based on these criteria, the Baranyi model best described the growth data, followed by the Huang, modified Gompertz, and three-phase linear models. The maximum growth rates of each primary model were fitted as a function of temperature using the modified Ratkowsky model. The high R^2 values (0.95 to 0.98) indicate that the modified Ratkowsky model can be used to describe the effect of temperature on the growth rates for all four primary models. The acceptable prediction zone (APZ) approach also was used for validation of the model with observed data collected during single and two-step dynamic cooling temperature protocols. When the predictions using the Baranyi model were compared with the observed data using the APZ analysis, all 24 observations for the exponential single rate cooling were within the APZ, which was set between -0.5 and 1 log CFU/g; 26 of 28 predictions for the two-step cooling profiles also were within the APZ limits. The developed dynamic model can be used to predict potential B. cereus growth from spores in beans under various temperature conditions or during extended chilling of cooked beans.
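
    As an illustration of the secondary-model step, the suboptimal-temperature branch of the Ratkowsky relation, sqrt(mu_max) = b * (T - T_min), can be fitted by simple linear regression of sqrt(mu_max) on temperature. The sketch below uses synthetic rates generated from assumed b and T_min values, not the beans data, and omits the high-temperature damping term of the full modified Ratkowsky model:

```python
import math

# Synthetic maximum growth rates from sqrt(mu_max) = b * (T - Tmin);
# b = 0.04 and Tmin = 5 degC are assumed values for illustration only.
b_true, Tmin_true = 0.04, 5.0
temps = [13.0, 20.0, 27.0, 34.0, 41.0]
mu = [(b_true * (T - Tmin_true)) ** 2 for T in temps]

# Linear least squares of sqrt(mu) against T: the slope recovers b and the
# x-intercept recovers Tmin (the notional minimum growth temperature).
y = [math.sqrt(m) for m in mu]
n = len(temps)
Tm, ym = sum(temps) / n, sum(y) / n
b_est = sum((t - Tm) * (yi - ym) for t, yi in zip(temps, y)) / sum((t - Tm) ** 2 for t in temps)
Tmin_est = Tm - ym / b_est
```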

  5. Predictive modeling in e-mental health: A common language framework

    Directory of Open Access Journals (Sweden)

    Dennis Becker

    2018-06-01

    Recent developments in mobile technology, sensor devices, and artificial intelligence have created new opportunities for mental health care research. Enabled by large datasets collected in e-mental health research and practice, clinical researchers and members of the data mining community increasingly join forces to build predictive models for health monitoring, treatment selection, and treatment personalization. This paper aims to bridge the historical and conceptual gaps between the distant research domains involved in this new collaborative research by providing a conceptual model of common research goals. We first provide a brief overview of the data mining field and methods used for predictive modeling. Next, we propose to characterize predictive modeling research in mental health care on three dimensions: (1) time relative to treatment (i.e., from screening to post-treatment relapse monitoring); (2) type of available data (e.g., questionnaire data, ecological momentary assessments, smartphone sensor data); and (3) type of clinical decision (i.e., whether data are used for screening purposes, treatment selection, or treatment personalization). Building on these three dimensions, we introduce a framework that identifies four model types that can be used to classify existing and future research and applications. To illustrate this, we use the framework to classify and discuss published predictive modeling mental health research. Finally, in the discussion, we reflect on the next steps that are required to drive forward this promising new interdisciplinary field.

  6. Assessment of radiopacity of restorative composite resins with various target distances and exposure times and a modified aluminum step wedge

    Energy Technology Data Exchange (ETDEWEB)

    Bejeh Mir, Arash Poorsattar [Dentistry Student Research Committee (DSRC), Dental Materials Research Center, Dentistry School, Babol University of Medical Sciences, Babol (Iran, Islamic Republic of); Bejeh Mir, Morvarid Poorsattar [Private Practice of Orthodontics, Montreal, Quebec (Canada)

    2012-09-15

    ANSI/ADA has established standards for adequate radiopacity. This study aimed to assess the changes in radiopacity of composite resins according to various tube-target distances and exposure times. Five 1-mm thick samples of Filtek P60 and Clearfil composite resins were prepared and exposed with six tube-target distance/exposure time setups (i.e., 40 cm, 0.2 seconds; 30 cm, 0.2 seconds; 30 cm, 0.16 seconds; 30 cm, 0.12 seconds; 15 cm, 0.2 seconds; 15 cm, 0.12 seconds) operating at 70 kVp and 7 mA, along with a 12-step aluminum stepwedge (1 mm incremental steps), using a PSP digital sensor. Thereafter, the radiopacities measured with Digora for Windows software 2.5 were converted to absorbencies, A = -log(1 - G/255), where A is the absorbency and G is the measured gray value. Furthermore, a linear regression model of aluminum thickness and absorbency was developed and used to convert the radiopacity of dental materials to the equivalent aluminum thickness. In addition, all calculations were compared with those obtained from a modified 3-step stepwedge (i.e., using data for the 2nd, 5th, and 8th steps). The radiopacities of the composite resins differed significantly with various setups (p<0.001) and between the materials (p<0.001). The best predicted model was obtained for the 30 cm, 0.2 seconds setup (R^2=0.999). Data from the reduced modified stepwedge were comparable with those from the 12-step stepwedge. Within the limits of the present study, our findings support that various setups might influence the radiopacity of dental materials on digital radiographs.
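
    The gray-value-to-absorbency conversion and the aluminum-equivalence step follow directly from the stated formula A = -log10(1 - G/255) plus a linear calibration against the stepwedge. The gray values below are hypothetical, generated from an assumed 0.1 absorbency-per-mm response purely for illustration:

```python
import math

# Absorbency from an 8-bit gray value, A = -log10(1 - G/255); G = 255 would
# make the argument zero, so fully saturated pixels must be excluded.
def absorbency(gray):
    return -math.log10(1.0 - gray / 255.0)

# Hypothetical 12-step stepwedge readings (thickness in mm, gray value),
# generated from an assumed linear 0.1 absorbency-per-mm response.
wedge = [(t, 255 * (1 - 10 ** (-0.1 * t))) for t in range(1, 13)]

# Zero-intercept linear calibration A = a * thickness; a material's equivalent
# aluminum thickness is then its measured absorbency divided by a.
a = sum(absorbency(g) / t for t, g in wedge) / len(wedge)
composite_gray = 255 * (1 - 10 ** (-0.25))   # hypothetical composite reading
eq_thickness_mm = absorbency(composite_gray) / a
```

    With real stepwedge data the calibration would be an ordinary least-squares fit with an intercept, since film fog and sensor offset shift the baseline.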

  7. Model Predictive Control of a Wave Energy Converter with Discrete Fluid Power Power Take-Off System

    DEFF Research Database (Denmark)

    Hansen, Anders Hedegaard; Asmussen, Magnus Færing; Bech, Michael Møller

    2018-01-01

    Wave power extraction algorithms for wave energy converters are normally designed without taking system losses into account, leading to suboptimal power extraction. In the current work, a model predictive power extraction algorithm is designed for a discretized power take-off system. It is shown how the quantized nature of a discrete fluid power system may be included in a new model predictive control algorithm, leading to a significant increase in the harvested power. A detailed investigation of the influence of the prediction horizon and the time step is reported. Furthermore, it is shown how...

  8. An etiologic prediction model incorporating biomarkers to predict the bladder cancer risk associated with occupational exposure to aromatic amines: a pilot study.

    Science.gov (United States)

    Mastrangelo, Giuseppe; Carta, Angela; Arici, Cecilia; Pavanello, Sofia; Porru, Stefano

    2017-01-01

    No etiological prediction model incorporating biomarkers is available to predict the bladder cancer risk associated with occupational exposure to aromatic amines. Cases were 199 bladder cancer patients. Clinical, laboratory and genetic data were predictors in logistic regression models (full and short) in which the dependent variable was 1 for the 15 patients with aromatic amine-related bladder cancer and 0 otherwise. The receiver operating characteristic approach was adopted; the area under the curve was used to evaluate the discriminatory ability of the models. The area under the curve was 0.93 for the full model (including age, smoking and coffee habits, DNA adducts, 12 genotypes) and 0.86 for the short model (including smoking, DNA adducts, 3 genotypes). Using the "best cut-off" of predicted probability of a positive outcome, the percentage of cases correctly classified was 92% (full model) against 75% (short model). Cancers classified as "positive outcome" are those to be referred to an occupational physician for etiological diagnosis; these patients numbered 28 (full model) or 60 (short model). Using 3 genotypes instead of 12 can double the number of patients with suspected aromatic amine-related cancer, thus increasing the costs of etiologic appraisal. Integrating clinical, laboratory and genetic factors, we developed the first etiologic prediction model for aromatic amine-related bladder cancer. Discriminatory ability was excellent, particularly for the full model, allowing individualized predictions. Validation of our model in external populations is essential for practical use in the clinical setting.
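
    One common way to pick a "best cut-off" on predicted probabilities, consistent with the ROC-based approach described, is to maximize the Youden index (sensitivity + specificity - 1) over candidate thresholds. The probabilities and labels below are hypothetical, not the study data:

```python
# Hypothetical predicted probabilities and true outcome labels (1 = positive)
probs = [0.05, 0.10, 0.20, 0.35, 0.40, 0.60, 0.70, 0.80, 0.90, 0.95]
labels = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]

def youden(cut):
    # Classify as positive when predicted probability >= cut
    tp = sum(1 for p, y in zip(probs, labels) if p >= cut and y == 1)
    fn = sum(1 for p, y in zip(probs, labels) if p < cut and y == 1)
    tn = sum(1 for p, y in zip(probs, labels) if p < cut and y == 0)
    fp = sum(1 for p, y in zip(probs, labels) if p >= cut and y == 0)
    return tp / (tp + fn) + tn / (tn + fp) - 1  # sensitivity + specificity - 1

# The "best cut-off" maximizes the Youden index over observed probabilities
best_cut = max(probs, key=youden)
```

    Other criteria (e.g., maximizing correct classification, as reported in the abstract) give different thresholds; the Youden index is simply one standard, prevalence-free choice.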

  9. [Application of ARIMA model to predict number of malaria cases in China].

    Science.gov (United States)

    Hui-Yu, H; Hua-Qin, S; Shun-Xian, Z; Lin, A I; Yan, L U; Yu-Chun, C; Shi-Zhu, L I; Xue-Jiao, T; Chun-Li, Y; Wei, H U; Jia-Xu, C

    2017-08-15

    Objective To study the application of the autoregressive integrated moving average (ARIMA) model to predict the monthly reported malaria cases in China, so as to provide a reference for the prevention and control of malaria. Methods SPSS 24.0 software was used to construct ARIMA models based on the monthly reported malaria cases of the time series of 2006-2015 and 2011-2015, respectively. The data of malaria cases from January to December, 2016 were used as validation data to compare the accuracy of the two ARIMA models. Results The models of the monthly reported cases of malaria in China were ARIMA (2,1,1)(1,1,0)_12 and ARIMA (1,0,0)(1,1,0)_12, respectively. The comparison between the predictions of the two models and the actual numbers of malaria cases showed that the ARIMA model based on the data of 2011-2015 had a higher forecasting accuracy than the model based on the data of 2006-2015. Conclusion The establishment and prediction of an ARIMA model is a dynamic process, which needs to be adjusted continually according to the accumulated data; in addition, major changes in the epidemic characteristics of infectious diseases must be considered.
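
    A stripped-down illustration of the seasonal ingredients in an ARIMA(p,d,q)(P,D,Q)_12 model: seasonally difference a monthly series (D = 1, s = 12), fit an autoregressive coefficient on the differenced series by least squares, and invert the differencing for a one-step forecast. The series below is synthetic, not the malaria counts, and a real fit (e.g., in SPSS or statsmodels) estimates all AR/MA terms jointly:

```python
# Synthetic monthly series with a linear trend plus a 12-month seasonal pattern
series = [100 + 10 * (m % 12) + 0.5 * m for m in range(60)]

# Seasonal differencing with period s = 12 (the "(.,1,.)_12" part of the model)
s = 12
sdiff = [series[t] - series[t - s] for t in range(s, len(series))]

# AR(1) coefficient on the differenced series by least squares over lag pairs
pairs = list(zip(sdiff[:-1], sdiff[1:]))
num = sum(x * y for x, y in pairs)
den = sum(x * x for x, _ in pairs)
phi = num / den

# One-step-ahead forecast: predict the next seasonal difference, then undo it
forecast = series[-s] + phi * sdiff[-1]
```

    On this noise-free series the seasonal difference is constant, so phi is exactly 1 and the forecast continues the trend-plus-season pattern.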

  10. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies

    Directory of Open Access Journals (Sweden)

    Hepburn Iain

    2012-05-01

    Abstract Background Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes, and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion STEPS simulates
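
    For the well-mixed limit, the Gillespie-type SSA that STEPS builds on can be sketched in a few lines for a single reaction A + B -> C (the rate constant and copy numbers are arbitrary; STEPS itself uses the composition and rejection variant and adds diffusion on tetrahedral meshes):

```python
import random

# Minimal direct-method SSA for one reaction A + B -> C in a well-mixed volume.
# k, A, and B are arbitrary illustrative values.
random.seed(1)
k = 0.01
A, B, C, t = 100, 80, 0, 0.0

while A > 0 and B > 0:
    a = k * A * B                  # propensity of the single reaction channel
    t += random.expovariate(a)     # exponentially distributed waiting time
    A, B, C = A - 1, B - 1, C + 1  # fire the reaction once

# Mass conservation: every firing converts one A and one B into one C, so the
# loop ends when the limiting species (B) is exhausted.
```

    With several reaction channels, the direct method additionally draws which channel fires in proportion to its propensity; the composition and rejection method in STEPS accelerates that selection for large networks.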

  11. Predictive modeling of coupled multi-physics systems: II. Illustrative application to reactor physics

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel; Badea, Madalina Corina

    2014-01-01

    Highlights: • We applied the PMCMPS methodology to a paradigm neutron diffusion model. • We underscore the main steps in applying PMCMPS to treat very large coupled systems. • PMCMPS reduces the uncertainties in the optimally predicted responses and model parameters. • PMCMPS is for sequentially treating coupled systems that cannot be treated simultaneously. - Abstract: This work presents paradigm applications to reactor physics of the innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS)” developed by Cacuci (2014). This methodology enables the assimilation of experimental and computational information and computes optimally predicted responses and model parameters with reduced predicted uncertainties, taking fully into account the coupling terms between the multi-physics systems, but using only the computational resources that would be needed to perform predictive modeling on each system separately. The paradigm examples presented in this work are based on a simple neutron diffusion model, chosen so as to enable closed-form solutions with clear physical interpretations. These paradigm examples also illustrate the computational efficiency of the PMCMPS, which enables the assimilation of additional experimental information, with a minimal increase in computational resources, to reduce the uncertainties in predicted responses and best-estimate values for uncertain model parameters, thus illustrating how very large systems can be treated without loss of information in a sequential rather than simultaneous manner

  12. Investigation of a breathing surrogate prediction algorithm for prospective pulmonary gating

    International Nuclear Information System (INIS)

    White, Benjamin M.; Low, Daniel A.; Zhao Tianyu; Wuenschel, Sara; Lu, Wei; Lamb, James M.; Mutic, Sasa; Bradley, Jeffrey D.; El Naqa, Issam

    2011-01-01

    Purpose: A major challenge of four-dimensional computed tomography (4DCT) in treatment planning and delivery has been the lack of respiration amplitude and phase reproducibility during image acquisition. The implementation of a prospective gating algorithm would ensure that images would be acquired only during user-specified breathing phases. This study describes the development and testing of an autoregressive moving average (ARMA) model for human respiratory phase prediction under quiet respiration conditions. Methods: A total of 47 4DCT patient datasets and synchronized respiration records were utilized in this study. Three datasets were used in model development and were removed from further evaluation of the ARMA model. The remaining 44 patient datasets were evaluated with the ARMA model for prediction time steps from 50 to 1000 ms in increments of 50 and 100 ms. Thirty-five of these datasets were further used to provide a comparison between the proposed ARMA model and a commercial algorithm with a prediction time step of 240 ms. Results: The optimal number of parameters for the ARMA model was based on three datasets reserved for model development. Prediction error was found to increase as the prediction time step increased. The minimum prediction time step required for prospective gating was selected to be half of the gantry rotation period. The maximum prediction time step with a conservative 95% confidence criterion was found to be 0.3 s. The ARMA model predicted peak inhalation and peak exhalation phases significantly better than the commercial algorithm. Furthermore, the commercial algorithm had numerous instances of missed breath cycles and falsely predicted breath cycles, while the proposed model did not have these errors. Conclusions: An ARMA model has been successfully applied to predict human respiratory phase occurrence. 
For a typical CT scanner gantry rotation period of 0.4 s (0.2 s prediction time step), the absolute error was relatively small, 0
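As a pared-down stand-in for the paper's ARMA predictor, the sketch below fits an AR(2) model by least squares to a synthetic breathing trace and predicts one sample ahead; the sinusoidal trace, 0.1 s sampling, and model order are assumptions for illustration only.

```python
import math

def fit_ar2(x):
    """Least-squares fit of an AR(2) model x[t] ~ a*x[t-1] + b*x[t-2],
    solved via the 2x2 normal equations (Cramer's rule)."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(2, len(x)):
        x1, x2, y = x[t - 1], x[t - 2], x[t]
        s11 += x1 * x1
        s12 += x1 * x2
        s22 += x2 * x2
        r1 += x1 * y
        r2 += x2 * y
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (s11 * r2 - s12 * r1) / det
    return a, b

# Synthetic quiet-breathing surrogate: a 4 s period sampled every 0.1 s.
trace = [math.sin(2 * math.pi * t / 40) for t in range(200)]
a, b = fit_ar2(trace[:150])
pred = a * trace[149] + b * trace[148]   # one-step-ahead (0.1 s) prediction
```

On a noiseless sinusoid the AR(2) recurrence is exact, so the one-step prediction matches the next sample; real respiration traces would of course carry noise and drift.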

  13. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Academics and practitioners have carried out intensive research on models for bankruptcy prediction and credit risk management. Despite numerous studies forecasting bankruptcy with traditional statistical techniques (e.g., discriminant analysis and logistic regression) and early artificial intelligence models (e.g., artificial neural networks), there is a trend toward machine learning models (support vector machines, bagging, boosting, and random forests) for predicting bankruptcy one year prior to the event. Comparing the performance of this newer approach with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of older, well-known bankruptcy prediction models remains quite high. We therefore analyse these older models on a dataset of Slovak companies to validate their predictive ability under specific conditions. Furthermore, these models are re-estimated following current trends by calculating the influence of eliminating selected variables on their overall predictive ability.
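The bagging idea the abstract refers to can be sketched generically: train each base classifier on a bootstrap resample of the data and combine predictions by majority vote. The threshold "classifiers" below are stand-ins for trained decision trees; all names and numbers are illustrative assumptions.

```python
import random

def bootstrap_sample(data, rng):
    """Sample len(data) points with replacement -- one bagging resample."""
    return [rng.choice(data) for _ in data]

def bagged_vote(classifiers, x):
    """Bagging prediction: majority vote over the ensemble's members."""
    votes = [clf(x) for clf in classifiers]
    return max(set(votes), key=votes.count)

# Toy ensemble: threshold rules standing in for trees trained on resamples.
clfs = [lambda r: r > 0.4, lambda r: r > 0.5, lambda r: r > 0.6]
pred = bagged_vote(clfs, 0.55)      # two of three members vote True
sample = bootstrap_sample([1, 2, 3], random.Random(0))
```

Averaging many high-variance learners trained on resamples is what gives bagging (and, with feature subsampling, random forests) their edge over a single model.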

  14. Structure-Activity Relationship Models for Rat Carcinogenesis and Assessing the Role Mutagens Play in Model Predictivity

    Science.gov (United States)

    Carrasquer, C. Alex; Batey, Kaylind; Qamar, Shahid; Cunningham, Albert R.; Cunningham, Suzanne L.

    2016-01-01

    We previously demonstrated that fragment-based cat-SAR carcinogenesis models consisting solely of mutagenic or non-mutagenic carcinogens varied greatly in terms of their predictive accuracy. This led us to investigate how well the rat cancer cat-SAR model predicted mutagens and non-mutagens in its learning set. Four rat cancer cat-SAR models were developed: Complete Rat, Transgender Rat, Male Rat, and Female Rat, with leave-one-out (LOO) validation concordance values of 69%, 74%, 67%, and 73%, respectively. The mutagenic carcinogens produced concordance values in the range of 69–76%, as compared to only 47–53% for non-mutagenic carcinogens. As a surrogate for mutagenicity, comparisons between single-site and multiple-site carcinogen SAR models were analyzed. The LOO concordance values for models consisting of 1-site, 2-site, and 4+-site carcinogens were 66%, 71%, and 79%, respectively. As expected, the proportion of mutagens to non-mutagens also increased, rising from 54% for 1-site to 80% for 4+-site carcinogens. This study demonstrates that mutagenic chemicals, in both SAR learning sets and test sets, are influential in assessing model accuracy. This suggests that SAR models for carcinogens may require a two-step process in which mutagenicity is first determined before carcinogenicity can be accurately predicted. PMID:24697549

  15. Computational Prediction of Excited-State Carbon Tunneling in the Two Steps of Triplet Zimmerman Di-π-Methane Rearrangement.

    Science.gov (United States)

    Li, Xin; Liao, Tao; Chung, Lung Wa

    2017-11-22

    The photoinduced Zimmerman di-π-methane (DPM) rearrangement of polycyclic molecules to form synthetically useful cyclopropane derivatives was found experimentally to proceed in a triplet excited state. We have applied state-of-the-art quantum mechanical methods, including M06-2X, DLPNO-CCSD(T) and variational transition-state theory with multidimensional tunneling corrections, to an investigation of the reaction rates of the two steps in the triplet DPM rearrangement of dibenzobarrelene, benzobarrelene and barrelene. This study predicts a high probability of carbon tunneling in regions around the two consecutive transition states at 200-300 K, and an enhancement of the rates by carbon tunneling of 104-276% at 200 K and 35-67% at 300 K. The Arrhenius plots of the rate constants were found to be curved at low temperatures. Moreover, the computed ¹²C/¹³C kinetic isotope effects were affected significantly by carbon tunneling and temperature. Our predictions of electronically excited-state carbon tunneling and of two consecutive carbon-tunneling steps are unprecedented. Heavy-atom tunneling in some photoinduced reactions with reactive intermediates and narrow barriers could potentially be observed experimentally at relatively low temperatures.

  16. Prediction of Machine Tool Condition Using Support Vector Machine

    International Nuclear Information System (INIS)

    Wang Peigong; Meng Qingfeng; Zhao Jian; Li Junjie; Wang Xiufeng

    2011-01-01

    Condition monitoring and prediction for CNC machine tools are investigated in this paper. Considering that condition data for CNC machine tools often comprise only small numbers of samples, a condition prediction method based on support vector machines (SVMs) is proposed, and one-step and multi-step condition prediction models are constructed. The support vector machine prediction models are used to predict trends in the working condition of a certain type of CNC worm wheel and gear grinding machine, using vibration signal sequence data collected during machining, and the relationship between different eigenvalues of the CNC vibration signal and machining quality is discussed. The test results show that the trend of the vibration signal peak-to-peak value in the surface normal direction is most strongly related to the trend of the surface roughness value. In predicting working-condition trends, the support vector machine achieves higher prediction accuracy in both short-term (one-step) and long-term (multi-step) prediction than the autoregressive (AR) model and the RBF neural network. Experimental results show that it is feasible to apply support vector machines to CNC machine tool condition prediction.
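The one-step versus multi-step distinction in the abstract comes down to whether predictions are fed back as model inputs. A generic sketch of iterated (recursive) multi-step forecasting follows, with a made-up exponential-trend rule standing in for the trained SVM:

```python
def multi_step_forecast(history, predict_one, horizon):
    """Iterated multi-step forecasting: the one-step model's output is fed
    back as an input for the next step, so errors can compound over the horizon."""
    window = list(history)
    out = []
    for _ in range(horizon):
        nxt = predict_one(window)
        out.append(nxt)
        window.append(nxt)      # feed the prediction back as an observation
    return out

# Hypothetical one-step rule: x[t] = 1.1 * x[t-1].
preds = multi_step_forecast([1.0, 1.1, 1.21], lambda w: 1.1 * w[-1], horizon=3)
```

Because each step consumes the previous step's prediction, long-term (multi-step) accuracy depends heavily on the one-step model's quality, which is why the comparison against the AR and RBF baselines is run at both horizons.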

  17. A Predictive Model for Yeast Cell Polarization in Pheromone Gradients.

    Science.gov (United States)

    Muller, Nicolas; Piel, Matthieu; Calvez, Vincent; Voituriez, Raphaël; Gonçalves-Sá, Joana; Guo, Chin-Lin; Jiang, Xingyu; Murray, Andrew; Meunier, Nicolas

    2016-04-01

    Budding yeast cells exist in two mating types, a and α, which use peptide pheromones to communicate with each other during mating. Mating depends on the ability of cells to polarize up pheromone gradients, but cells also respond to spatially uniform fields of pheromone by polarizing along a single axis. We used quantitative measurements of the response of a cells to α-factor to produce a predictive model of yeast polarization towards a pheromone gradient. We found that cells make a sharp transition between budding cycles and mating induced polarization and that they detect pheromone gradients accurately only over a narrow range of pheromone concentrations corresponding to this transition. We fit all the parameters of the mathematical model by using quantitative data on spontaneous polarization in uniform pheromone concentration. Once these parameters have been computed, and without any further fit, our model quantitatively predicts the yeast cell response to pheromone gradient providing an important step toward understanding how cells communicate with each other.

  18. A risk prediction model for xerostomia: a retrospective cohort study.

    Science.gov (United States)

    Villa, Alessandro; Nordio, Francesco; Gohel, Anita

    2016-12-01

    We investigated the prevalence of xerostomia in dental patients and built a xerostomia risk prediction model incorporating a wide range of risk factors. Socio-demographic data, past medical history, and self-reported dry mouth and related symptoms were collected retrospectively from January 2010 to September 2013 for all new dental patients. A logistic regression framework was used to build a risk prediction model for xerostomia. External validation was performed using an independent data set to test the predictive power. A total of 12 682 patients were included in this analysis (54.3% females). Xerostomia was reported by 12.2% of patients. The proportion of people reporting xerostomia was higher among those taking more medications (OR = 1.11, 95% CI = 1.08-1.13) and among recreational drug users (OR = 1.4, 95% CI = 1.1-1.9). Rheumatic diseases (OR = 2.17, 95% CI = 1.88-2.51), psychiatric diseases (OR = 2.34, 95% CI = 2.05-2.68), eating disorders (OR = 2.28, 95% CI = 1.55-3.36) and radiotherapy (OR = 2.00, 95% CI = 1.43-2.80) were good predictors of xerostomia. In testing of model performance, the ROC-AUC was 0.816, and in the external validation sample the ROC-AUC was 0.799. The xerostomia risk prediction model had high accuracy and discriminated between high- and low-risk individuals. Clinicians could use this model to identify the classes of medications and systemic diseases associated with xerostomia. © 2015 John Wiley & Sons A/S and The Gerodontology Association. Published by John Wiley & Sons Ltd.
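Since the model is a logistic regression, an individual risk follows from p = 1 / (1 + exp(−(β₀ + Σ βᵢxᵢ))), where each reported odds ratio equals exp(βᵢ). In the sketch below only the two ORs come from the abstract; the intercept and the choice of a two-factor patient are hypothetical.

```python
import math

def predict_risk(intercept, betas, features):
    """Logistic-regression risk: p = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = intercept + sum(b * x for b, x in zip(betas, features))
    return 1.0 / (1.0 + math.exp(-z))

# Coefficients recovered from reported odds ratios: beta = ln(OR).
betas = [math.log(2.00), math.log(2.17)]   # radiotherapy, rheumatic disease
p = predict_risk(-2.0, betas, [1, 1])      # hypothetical intercept, both factors present
```

With both factors present the log-odds shift by ln(2.00) + ln(2.17) ≈ 1.47 relative to the baseline, illustrating how the reported ORs combine multiplicatively on the odds scale.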

  19. Development of the statistical ARIMA model: an application for predicting the upcoming of MJO index

    Science.gov (United States)

    Hermawan, Eddy; Nurani Ruchjana, Budi; Setiawan Abdullah, Atje; Gede Nyoman Mindra Jaya, I.; Berliana Sipayung, Sinta; Rustiana, Shailla

    2017-10-01

    This study concerns one of the most important equatorial atmospheric phenomena, the Madden-Julian Oscillation (MJO), which has strong impacts on extreme rainfall anomalies over the Indonesian Maritime Continent (IMC). We focus on the big floods over Jakarta and the surrounding area that are suspected to be caused by the MJO. We model the MJO index, represented by the RMM (Real-time Multivariate MJO) indices RMM1 and RMM2, using the statistical Box-Jenkins (ARIMA) approach. Developing the model involves several steps, from data identification through estimation and model determination, before the model is finally applied to investigate the big floods that occurred in Jakarta in 1996, 2002, and 2007, respectively. We found that the best estimated model for predicting RMM1 and RMM2 is ARIMA (2,1,2). The detailed steps by which this model is derived and applied to predict rainfall anomalies over Jakarta 3 to 6 months ahead are discussed in this paper.
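The middle index of the chosen ARIMA (2,1,2) model means the series is differenced once before the AR and MA parts are fitted. A minimal sketch of that integration step and its inverse, with made-up index values:

```python
def difference(x):
    """The 'I' in ARIMA(p, 1, q): model the first difference of the series."""
    return [b - a for a, b in zip(x, x[1:])]

def undifference(last_value, diffs):
    """Invert differencing: a cumulative sum from the last observed value maps
    forecasts of the differenced series back to the original scale."""
    out, level = [], last_value
    for d in diffs:
        level += d
        out.append(level)
    return out

series = [3.0, 5.0, 6.0, 10.0]           # made-up index values
d1 = difference(series)                   # first differences
restored = undifference(series[0], d1)    # recovers series[1:]
```

In a full Box-Jenkins workflow, the ARMA(2,2) part would be fitted to `d1`, and its forecasts passed through `undifference` to obtain predictions of the index itself.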

  20. A two-phase model of plantar tissue: a step toward prediction of diabetic foot ulceration.

    Science.gov (United States)

    Sciumè, G; Boso, D P; Gray, W G; Cobelli, C; Schrefler, B A

    2014-11-01

    A new computational model, based on the thermodynamically constrained averaging theory, has been recently proposed to predict tumor initiation and proliferation. A similar mathematical approach is proposed here as an aid in diabetic ulcer prevention. The common aspects at the continuum level are the macroscopic balance equations governing the flow of the fluid phase, diffusion of chemical species, tissue mechanics, and some of the constitutive equations. The soft plantar tissue is modeled as a two-phase system: a solid phase consisting of the tissue cells and their extracellular matrix, and a fluid one (interstitial fluid and dissolved chemical species). The solid phase may become necrotic depending on the stress level and on the oxygen availability in the tissue. Actually, in diabetic patients, peripheral vascular disease impacts tissue necrosis; this is considered in the model via the introduction of an effective diffusion coefficient that governs transport of nutrients within the microvasculature. The governing equations of the mathematical model are discretized in space by the finite element method and in time domain using the θ-Wilson Method. While the full mathematical model is developed in this paper, the example is limited to the simulation of several gait cycles of a healthy foot. Copyright © 2014 John Wiley & Sons, Ltd.

  1. Evaluation of candidate geomagnetic field models for IGRF-12

    OpenAIRE

    Erwan Thébault; Christopher C. Finlay; Patrick Alken; Ciaran D. Beggan; Elisabeth Canet; Arnaud Chulliat; Benoit Langlais; V. Lesur; Frank J. Lowes; Chandrasekharan Manoj; Martin Rother; Reyko Schachtschneider

    2015-01-01

    Background: The 12th revision of the International Geomagnetic Reference Field (IGRF) was issued in December 2014 by the International Association of Geomagnetism and Aeronomy (IAGA) Division V Working Group V-MOD (http://www.ngdc.noaa.gov/IAGA/vmod/igrf.html). This revision comprises new spherical harmonic main field models for epochs 2010.0 (DGRF-2010) and 2015.0 (IGRF-2015) and predictive linear secular variation for the interval 2015.0-2020.0 (SV-2015-2020). Findings: The models were deri...

  2. Stabilizing model predictive control of a gantry crane based on flexible set-membership constraints

    NARCIS (Netherlands)

    Iles, Sandor; Lazar, M.; Kolonic, Fetah; Jadranko, Matusko

    2015-01-01

    This paper presents a stabilizing distributed model predictive control of a gantry crane taking into account the variation of cable length. The proposed algorithm is based on the off-line computation of a sequence of 1-step controllable sets and a condition that enables flexible convergence towards

  3. Impact on DNB predictions of mixing models implemented into the three-dimensional thermal-hydraulic code Thyc

    International Nuclear Information System (INIS)

    Banner, D.

    1993-10-01

    The objective of this paper is to point out how departure from nucleate boiling (DNB) predictions can be improved by the THYC software. The EPRI/Columbia University E161 data base has been used for this study. In a first step, three thermal-hydraulic mixing models have been implemented into the code in order to obtain more accurate calculations of local void fractions at the DNB location. The three investigated models (A, B and C) are presented by growing complexity. Model A assumes a constant turbulent viscosity throughout the flow. In model B, a k-L turbulence transport equation has been implemented to model generation and decay of turbulence in the DNB test section. Model C is obtained by representing oriented transverse flows due to mixing vanes in addition to the k-L equation. A parametric study carried out with the three mixing models exhibits the most significant parameters. The occurrence of departure from nucleate boiling is then predicted by using a DNB correlation. Similar results are obtained as long as the DNB correlation is kept unchanged. In a second step, an attempt to substitute correlations by another statistical approach (pseudo-cubic thin-plate type Spline method) has been done. It is then shown that standard deviations of P/M (predicted to measured) ratios can be greatly improved by advanced statistics. (author). 7 figs., 2 tabs., 9 refs

  4. Alcoholics anonymous, other 12-step movements and psychotherapy in the US population, 1990.

    Science.gov (United States)

    Room, R; Greenfield, T

    1993-04-01

    Based on the 1990 US National Alcohol Survey, this note provides the first available comprehensive findings on self-reported utilization of a variety of sources of personal support and counselling for alcohol and other problems. Respondents were queried about lifetime attendance and number of times they went to identified sources of help in the prior year. Twelve-step groups included Alcoholics Anonymous, Al-Anon, Adult Children of Alcoholics, and other non-alcohol-oriented groups like Gamblers Anonymous, Narcotics Anonymous, and Overeaters Anonymous; additional questions inquired about support or therapy groups and individual counselling for non-alcohol problems. Of the US adult population, 9% have been to an AA meeting at some time, 3.6% in the prior year, only about one-third of these for problems of their own. About half these percentages, mostly women, have attended Al-Anon. Of the same population, 13.3% indicate ever attending a 12-step meeting (including non-alcohol-oriented groups), 5.3% in the last year. During the prior year a further 2.1% used other support/therapy groups and 5.5% sought individual counselling/therapy for personal problems other than alcohol. In contrast to this high reported utilization, only 4.9% (ever) and 2.3% (12-months) reported going to anyone including AA for a problem (of their own) related to drinking.

  5. Specification of a STEP Based Reference Model for Exchange of Robotics Models

    DEFF Research Database (Denmark)

    Haenisch, Jochen; Kroszynski, Uri; Ludwig, Arnold

    ESPRIT Project 6457: "Interoperability of Standards for Robotics in CIME" (InterRob) belongs to the Subprogram "Computer Integrated Manufacturing and Engineering" of ESPRIT, the European Specific Programme for Research and Development in Information Technology supported by the European Commission. InterRob aims to develop an integrated solution to precision manufacturing by combining product data and database technologies with robotic off-line programming and simulation. Benefits arise from the use of high-level simulation tools and from developing standards for the exchange of product model data. Besides robot programming, the descriptions of geometry, kinematics, robotics, dynamics, and controller data using STEP are addressed as major goals of the project. The Project Consortium has now released the "Specification of a STEP Based Reference Model for Exchange of Robotics Models", on which a series...

  6. An adaptive time-stepping strategy for solving the phase field crystal model

    International Nuclear Information System (INIS)

    Zhang, Zhengru; Ma, Yuan; Qiao, Zhonghua

    2013-01-01

    In this work, we propose an adaptive time-step method for simulating the dynamics of the phase field crystal (PFC) model. Numerical simulation of the PFC model requires long times to reach steady state, so large time steps are necessary. Unconditionally energy-stable schemes are used to solve the PFC model, and the time steps are determined adaptively based on the time derivative of the corresponding energy. It is found that the proposed time-step adaptivity can resolve not only the steady-state solution but also the dynamical development of the solution, efficiently and accurately. The numerical experiments demonstrate that CPU time is significantly reduced for long-time simulations.
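A common form of energy-monitored step adaptivity captures the idea described above: take small steps while the energy changes fast and approach the maximum step near steady state. The specific formula and constants below are assumptions for illustration, not necessarily the paper's exact expression.

```python
import math

def adaptive_dt(dE_dt, dt_min, dt_max, alpha=1.0):
    """Energy-monitored adaptive step: dt shrinks toward dt_min when |dE/dt|
    is large (fast dynamics) and grows toward dt_max near steady state.

    dt = max(dt_min, dt_max / sqrt(1 + alpha * |dE/dt|^2))
    """
    return max(dt_min, dt_max / math.sqrt(1.0 + alpha * dE_dt * dE_dt))

dt_fast = adaptive_dt(dE_dt=100.0, dt_min=1e-3, dt_max=1.0)  # transient: small dt
dt_slow = adaptive_dt(dE_dt=0.0, dt_min=1e-3, dt_max=1.0)    # steady state: dt_max
```

Because the scheme is unconditionally energy stable, the step size can be driven purely by accuracy (the energy decay rate) rather than by a stability limit.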

  7. Predictive Treatment Management: Incorporating a Predictive Tumor Response Model Into Robust Prospective Treatment Planning for Non-Small Cell Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Pengpeng, E-mail: zhangp@mskcc.org [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Yorke, Ellen; Hu, Yu-Chi; Mageras, Gig [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Rimner, Andreas [Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Deasy, Joseph O. [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York (United States)

    2014-02-01

    Purpose: We hypothesized that a treatment planning technique that incorporates predicted lung tumor regression into optimization, predictive treatment planning (PTP), could allow dose escalation to the residual tumor while maintaining coverage of the initial target without increasing dose to surrounding organs at risk (OARs). Methods and Materials: We created a model to estimate the geometric presence of residual tumors after radiation therapy using planning computed tomography (CT) and weekly cone beam CT scans of 5 lung cancer patients. For planning purposes, we modeled the dynamic process of tumor shrinkage by morphing the original planning target volume (PTV{sub orig}) in 3 equispaced steps to the predicted residue (PTV{sub pred}). Patients were treated with a uniform prescription dose to PTV{sub orig}. By contrast, PTP optimization started with the same prescription dose to PTV{sub orig} but linearly increased the dose at each step, until reaching the highest dose achievable to PTV{sub pred} consistent with OAR limits. This method is compared with midcourse adaptive replanning. Results: Initial parenchymal gross tumor volume (GTV) ranged from 3.6 to 186.5 cm{sup 3}. On average, the primary GTV and PTV decreased by 39% and 27%, respectively, at the end of treatment. The PTP approach gave PTV{sub orig} at least the prescription dose, and it increased the mean dose of the true residual tumor by an average of 6.0 Gy above the adaptive approach. Conclusions: PTP, incorporating a tumor regression model from the start, represents a new approach to increase tumor dose without increasing toxicities, and reduce clinical workload compared with the adaptive approach, although model verification using per-patient midcourse imaging would be prudent.

  8. [Study on the ARIMA model application to predict echinococcosis cases in China].

    Science.gov (United States)

    En-Li, Tan; Zheng-Feng, Wang; Wen-Ce, Zhou; Shi-Zhu, Li; Yan, Lu; Lin, Ai; Yu-Chun, Cai; Xue-Jiao, Teng; Shun-Xian, Zhang; Zhi-Sheng, Dang; Chun-Li, Yang; Jia-Xu, Chen; Wei, Hu; Xiao-Nong, Zhou; Li-Guang, Tian

    2018-02-26

    To predict the monthly reported echinococcosis cases in China with an autoregressive integrated moving average (ARIMA) model, so as to provide a reference for the prevention and control of echinococcosis. SPSS 24.0 software was used to construct ARIMA models based on monthly reported echinococcosis case time series from 2007 to 2015 and from 2007 to 2014, respectively, and the accuracies of the two ARIMA models were compared. The model based on the monthly reported cases from 2007 to 2015 was ARIMA (1, 0, 0) (1, 1, 0)12, with a relative error between reported and predicted cases of -13.97% and AR (1) = 0.367 (t = 3.816, P ...); the model based on 2007 to 2014 was ARIMA (1, 0, 0) (1, 0, 1)12, with a relative error of 0.56% and AR (1) = 0.413 (t = 4.244, P ...). Different ARIMA models may thus be obtained for the same infectious disease. It remains to be further verified whether, as more data are accumulated and the prediction period shortens, the average relative error becomes smaller. The establishment and prediction of an ARIMA model is a dynamic process that needs to be adjusted and optimized continuously according to the accumulated data; meanwhile, full consideration should be given to the intensity of the related infectious disease reporting work (such as disease censuses and special investigations).

  9. Risk score prediction model for dementia in patients with type 2 diabetes.

    Science.gov (United States)

    Li, Chia-Ing; Li, Tsai-Chung; Liu, Chiu-Shong; Liao, Li-Na; Lin, Wen-Yuan; Lin, Chih-Hsueh; Yang, Sing-Yu; Chiang, Jen-Huai; Lin, Cheng-Chieh

    2018-03-30

    No study has established a dementia risk prediction model in Asian populations. This study aims to develop a prediction model for dementia in Chinese type 2 diabetes patients. This retrospective cohort study included 27,540 Chinese type 2 diabetes patients (aged 50-94 years) enrolled in the Taiwan National Diabetes Care Management Program. Participants were randomly allocated into derivation and validation sets at a 2:1 ratio. Cox proportional hazards regression models were used to identify risk factors for dementia in the derivation set. Steps proposed by the Framingham Heart Study were used to establish a prediction model with a scoring system. The average follow-up was 8.09 years, with a total of 853 incident dementia cases in the derivation set. The dementia risk score summed the individual scores (from 0 to 20). The areas under the curve for 3-, 5-, and 10-year dementia risks were 0.82, 0.79, and 0.76 in the derivation set and 0.84, 0.80, and 0.75 in the validation set, respectively. The proposed scoring system is the first dementia risk prediction model for Chinese type 2 diabetes patients in Taiwan. This article is protected by copyright. All rights reserved.
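The Framingham-style scoring step converts regression coefficients into integer points: each β is divided by a reference constant (the β assigned to one point) and rounded, and the patient's score is the sum over risk factors. All numbers below are hypothetical, for illustration only.

```python
def points(beta, x, beta_per_point):
    """Framingham-style scoring: round beta * x to a whole number of points."""
    return round(beta * x / beta_per_point)

# Hypothetical Cox coefficients for two binary risk factors, with one point
# defined as beta_per_point = 0.231.
score = points(0.693, 1, 0.231) + points(1.386, 1, 0.231)   # 3 + 6 points
```

The integer score (here 0-20 overall) can then be mapped back to a 3-, 5-, or 10-year risk estimate via the baseline survival function, which is what lets clinicians use the model without computing the Cox regression directly.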

  10. Novel Ordered Stepped-Wedge Cluster Trial Designs for Detecting Ebola Vaccine Efficacy Using a Spatially Structured Mathematical Model.

    Directory of Open Access Journals (Sweden)

    Ibrahim Diakite

    2016-08-01

    Full Text Available During the 2014 Ebola virus disease (EVD) outbreak, policy-makers were confronted with difficult decisions on how best to test the efficacy of EVD vaccines. On one hand, many were reluctant to withhold a vaccine that might prevent a fatal disease from study participants randomized to a control arm. On the other, regulatory bodies called for rigorous placebo-controlled trials to permit direct measurement of vaccine efficacy prior to approval of the products. A stepped-wedge cluster study (SWCT) was proposed as an alternative to a more traditional randomized controlled vaccine trial to address these concerns. Here, we propose novel "ordered stepped-wedge cluster trial" (OSWCT) designs to further mitigate tradeoffs between ethical concerns, logistics, and statistical rigor. We constructed a spatially structured mathematical model of the EVD outbreak in Sierra Leone. We used the output of this model to simulate and compare a series of stepped-wedge cluster vaccine studies. Our model reproduced the observed order of first case occurrence within districts of Sierra Leone. Depending on the infection risk within the trial population and the trial start dates, the statistical power to detect a vaccine efficacy of 90% varied from 14% to 32% for a standard SWCT, and from 67% to 91% for OSWCTs, for an alpha error of 5%. The model's projection of first case occurrence was robust to changes in disease natural history parameters. Ordering clusters in a stepped-wedge trial based on the cluster's underlying risk of infection as predicted by a spatial model can increase the statistical power of a SWCT. In the event of another hemorrhagic fever outbreak, implementation of our proposed OSWCT designs could improve statistical power when a stepped-wedge study is desirable based on either ethical concerns or logistical constraints.

  11. Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View.

    Science.gov (United States)

    Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael

    2016-12-16

    As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. ©Wei Luo, Dinh Phung, Truyen Tran, Sunil Gupta, Santu Rana, Chandan Karmakar, Alistair Shilton, John Yearwood, Nevenka Dimitrova, Tu Bao Ho, Svetha Venkatesh, Michael Berk. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.12.2016.

  12. Step-indexed Kripke models over recursive worlds

    DEFF Research Database (Denmark)

    Birkedal, Lars; Reus, Bernhard; Schwinghammer, Jan

    2011-01-01

    worlds that are recursively defined in a category of metric spaces. In this paper, we broaden the scope of this technique from the original domain-theoretic setting to an elementary, operational one based on step indexing. The resulting method is widely applicable and leads to simple, succinct models...

  13. Comparison of machine-learning algorithms to build a predictive model for detecting undiagnosed diabetes - ELSA-Brasil: accuracy study.

    Science.gov (United States)

    Olivera, André Rodrigues; Roesler, Valter; Iochpe, Cirano; Schmidt, Maria Inês; Vigo, Álvaro; Barreto, Sandhi Maria; Duncan, Bruce Bartholow

    2017-01-01

    Type 2 diabetes is a chronic disease associated with a wide range of serious health complications that have a major impact on overall health. The aims here were to develop and validate predictive models for detecting undiagnosed diabetes using data from the Longitudinal Study of Adult Health (ELSA-Brasil) and to compare the performance of different machine-learning algorithms in this task. Comparison of machine-learning algorithms to develop predictive models using data from ELSA-Brasil. After selecting a subset of 27 candidate variables from the literature, models were built and validated in four sequential steps: (i) parameter tuning with tenfold cross-validation, repeated three times; (ii) automatic variable selection using forward selection, a wrapper strategy with four different machine-learning algorithms and tenfold cross-validation (repeated three times), to evaluate each subset of variables; (iii) error estimation of model parameters with tenfold cross-validation, repeated ten times; and (iv) generalization testing on an independent dataset. The models were created with the following machine-learning algorithms: logistic regression, artificial neural network, naïve Bayes, K-nearest neighbor and random forest. The best models were created using artificial neural networks and logistic regression. These achieved mean areas under the curve of 75.24% and 74.98%, respectively, in the error estimation step, and 74.17% and 74.41% in the generalization testing step. Most of the predictive models produced similar results, and demonstrated the feasibility of identifying individuals with the highest probability of having undiagnosed diabetes through easily obtained clinical data.
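The tenfold cross-validation used throughout the pipeline rests on splitting the sample indices into shuffled, near-equal folds; each fold serves once as the validation set while the rest train the model. A minimal sketch of the fold construction:

```python
import random

def k_fold_indices(n, k, rng):
    """Shuffle n sample indices and deal them into k near-equal folds."""
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = k_fold_indices(100, 10, random.Random(0))
# Train on 9 folds, validate on the held-out one, rotating k times;
# "repeated three times" in the abstract means redoing this with new shuffles.
```

Repeating the whole procedure with fresh shuffles, as steps (i)-(iii) do, reduces the variance of the estimated area under the curve.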

  14. Predicting knee replacement damage in a simulator machine using a computational model with a consistent wear factor.

    Science.gov (United States)

    Zhao, Dong; Sakoda, Hideyuki; Sawyer, W Gregory; Banks, Scott A; Fregly, Benjamin J

    2008-02-01

    Wear of ultrahigh molecular weight polyethylene remains a primary factor limiting the longevity of total knee replacements (TKRs). However, wear testing on a simulator machine is time consuming and expensive, making it impractical for iterative design purposes. The objectives of this paper were first, to evaluate whether a computational model using a wear factor consistent with the TKR material pair can accurately predict TKR damage measured in a simulator machine, and second, to investigate how choice of surface evolution method (fixed or variable step) and material model (linear or nonlinear) affect the prediction. An iterative computational damage model was constructed for a commercial knee implant in an AMTI simulator machine. The damage model combined a dynamic contact model with a surface evolution model to predict how wear plus creep progressively alter tibial insert geometry over multiple simulations. The computational framework was validated by predicting wear in a cylinder-on-plate system for which an analytical solution was derived. The implant damage model was evaluated for 5 million cycles of simulated gait using damage measurements made on the same implant in an AMTI machine. Using a pin-on-plate wear factor for the same material pair as the implant, the model predicted tibial insert wear volume to within 2% error and damage depths and areas to within 18% and 10% error, respectively. Choice of material model had little influence, while inclusion of surface evolution affected damage depth and area but not wear volume predictions. Surface evolution method was important only during the initial cycles, where variable step was needed to capture rapid geometry changes due to the creep. Overall, our results indicate that accurate TKR damage predictions can be made with a computational model using a constant wear factor obtained from pin-on-plate tests for the same material pair, and furthermore, that surface evolution method matters only during the initial
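
The core of such a damage model, wear accumulation with a constant (Archard-type) wear factor plus periodic surface evolution, can be sketched in one dimension. The pressure-relaxation rule standing in for the contact re-solve and every parameter value here are illustrative assumptions, not the paper's finite element model.

```python
def archard_increment(wear_factor, pressure, sliding_distance):
    # Archard-type wear law: depth removed = k * p * s
    return wear_factor * pressure * sliding_distance

def simulate_damage(pressures, wear_factor=1e-7, sliding_per_cycle=20.0,
                    n_updates=50, cycles_per_update=100_000):
    # iterative damage model: hold contact pressure fixed within each block of
    # cycles, accumulate wear depth, then "evolve" the surface by letting the
    # pressure relax toward uniform as the worn geometry conforms (a crude
    # stand-in for re-solving the contact problem)
    depths = [0.0] * len(pressures)
    p = list(pressures)
    for _ in range(n_updates):
        for i in range(len(p)):
            depths[i] += archard_increment(
                wear_factor, p[i], sliding_per_cycle * cycles_per_update)
        mean_p = sum(p) / len(p)
        p = [0.5 * (pi + mean_p) for pi in p]
    return depths
```

Note that the relaxation rule preserves the mean pressure, so the total wear volume is unchanged by surface evolution while the local depths redistribute, mirroring the paper's observation that surface evolution affected damage depth and area but not wear volume.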

  15. A model predictive control approach combined unscented Kalman filter vehicle state estimation in intelligent vehicle trajectory tracking

    Directory of Open Access Journals (Sweden)

    Hongxiao Yu

    2015-05-01

    Full Text Available Trajectory tracking and state estimation are significant in the motion planning and intelligent vehicle control. This article focuses on the model predictive control approach for the trajectory tracking of the intelligent vehicles and state estimation of the nonlinear vehicle system. The constraints of the system states are considered when applying the model predictive control method to the practical problem, while a 4-degree-of-freedom vehicle model and an unscented Kalman filter are proposed to estimate the vehicle states. The estimated states of the vehicle are used to provide model predictive control with real-time control and judge vehicle stability. Furthermore, in order to decrease the cost of solving the nonlinear optimization, the linear time-varying model predictive control is used at each time step. The effectiveness of the proposed vehicle state estimation and model predictive control method is tested using a driving simulator. The results of simulations and experiments show that good, robust performance is achieved for trajectory tracking and state estimation in different scenarios.
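
The unscented transform at the heart of such a filter can be illustrated in one dimension with the standard scaled sigma-point weights; the scalar setting and the parameter values are simplifications, not the 4-degree-of-freedom vehicle filter itself.

```python
import math

def unscented_transform_1d(mean, var, f, alpha=0.1, beta=2.0, kappa=0.0):
    # scaled sigma points for n = 1: the mean plus one symmetric pair
    n = 1
    lam = alpha**2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    sigmas = [mean, mean + spread, mean - spread]
    wm0 = lam / (n + lam)                  # mean weight, centre point
    wc0 = wm0 + (1.0 - alpha**2 + beta)    # covariance weight, centre point
    wi = 1.0 / (2.0 * (n + lam))           # weights for the symmetric pair
    ys = [f(x) for x in sigmas]            # propagate through the nonlinearity
    mean_y = wm0 * ys[0] + wi * (ys[1] + ys[2])
    var_y = (wc0 * (ys[0] - mean_y)**2
             + wi * ((ys[1] - mean_y)**2 + (ys[2] - mean_y)**2))
    return mean_y, var_y
```

For a linear map the transform is exact; for a nonlinear map it captures the posterior mean and variance without computing Jacobians, which is what lets an unscented Kalman filter estimate states of a nonlinear vehicle model.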

  16. A 2-D process-based model for suspended sediment dynamics: a first step towards ecological modeling

    Science.gov (United States)

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-06-01

    In estuaries suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differs depending on sediment supply and hydrodynamic forcing conditions that vary over space and over time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters and the determination of a yearly sediment budget as well as an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool in assessing sediment dynamics and their related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of process data.

  17. A 2-D process-based model for suspended sediment dynamics: A first step towards ecological modeling

    Science.gov (United States)

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-01-01

    In estuaries suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differs depending on sediment supply and hydrodynamic forcing conditions that vary over space and over time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters and the determination of a yearly sediment budget as well as an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool in assessing sediment dynamics and their related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of process data.

  18. A computational model to predict rat ovarian steroid secretion from in vitro experiments with endocrine disruptors.

    Directory of Open Access Journals (Sweden)

    Nadia Quignot

    Full Text Available A finely tuned balance between estrogens and androgens controls reproductive functions, and the last step of steroidogenesis plays a key role in maintaining that balance. Environmental toxicants are a serious health concern, and numerous studies have been devoted to studying the effects of endocrine disrupting chemicals (EDCs). The effects of EDCs on steroidogenic enzymes may influence steroid secretion and thus lead to reproductive toxicity. To predict hormonal balance disruption on the basis of data on aromatase activity and mRNA level modulation obtained in vitro on granulosa cells, we developed a mathematical model for the last gonadal steps of the sex steroid synthesis pathway. The model can simulate the ovarian synthesis and secretion of estrone, estradiol, androstenedione, and testosterone, and their response to endocrine disruption. The model is able to predict ovarian sex steroid concentrations under the normal estrous cycle in the female rat, and ovarian estradiol concentrations in adult female rats exposed to atrazine, bisphenol A, metabolites of methoxychlor or vinclozolin, and letrozole.

  19. Do pseudo-absence selection strategies influence species distribution models and their predictions? An information-theoretic approach based on simulated data

    Directory of Open Access Journals (Sweden)

    Guisan Antoine

    2009-04-01

    Full Text Available Abstract Background Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions of species. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on the predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using (a) real absences, (b) pseudo-absences selected randomly from the background, and (c) two-step approaches: pseudo-absences selected from low-suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results Models built with true absences had the best predictive power, best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences, and perform model selection based on an information-theoretic approach. However, the resulting models can be expected to have
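
The information-theoretic comparison above rests on the AIC, and strategy (b) is plain random background sampling; both are easy to sketch. The toy labels, probabilities, and parameter counts below are illustrative, not the study's simulated species.

```python
import math
import random

def aic(log_likelihood, n_params):
    # Akaike information criterion: 2k - 2 ln L (lower supports the model)
    return 2 * n_params - 2 * log_likelihood

def bernoulli_loglik(y, p):
    # log-likelihood of presence/absence labels under predicted probabilities
    eps = 1e-12
    return sum(yi * math.log(max(pi, eps))
               + (1 - yi) * math.log(max(1.0 - pi, eps))
               for yi, pi in zip(y, p))

def random_pseudo_absences(background_sites, n, seed=0):
    # strategy (b): pseudo-absences drawn at random from the background
    return random.Random(seed).sample(background_sites, n)

# toy comparison: a model using informative predictors vs an intercept-only one
y = [1] * 10 + [0] * 10
aic_informative = aic(bernoulli_loglik(y, [0.9] * 10 + [0.1] * 10), n_params=2)
aic_null = aic(bernoulli_loglik(y, [0.5] * 20), n_params=1)
```

Despite its extra parameter, the informative model earns the lower AIC, which is the sense in which the "true" model is "supported by the data" above.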

  20. Long-wave model for strongly anisotropic growth of a crystal step.

    Science.gov (United States)

    Khenner, Mikhail

    2013-08-01

    A continuum model for the dynamics of a single step with the strongly anisotropic line energy is formulated and analyzed. The step grows by attachment of adatoms from the lower terrace, onto which atoms adsorb from a vapor phase or from a molecular beam, and the desorption is nonnegligible (the "one-sided" model). Via a multiscale expansion, we derived a long-wave, strongly nonlinear, and strongly anisotropic evolution PDE for the step profile. Written in terms of the step slope, the PDE can be represented in a form similar to a convective Cahn-Hilliard equation. We performed the linear stability analysis and computed the nonlinear dynamics. Linear stability depends on whether the stiffness is minimum or maximum in the direction of the step growth. It also depends nontrivially on the combination of the anisotropy strength parameter and the atomic flux from the terrace to the step. Computations show formation and coarsening of a hill-and-valley structure superimposed onto a long-wavelength profile, which independently coarsens. Coarsening laws for the hill-and-valley structure are computed for two principal orientations of a maximum step stiffness, the increasing anisotropy strength, and the varying atomic flux.
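
For reference, a generic one-dimensional convective Cahn-Hilliard equation of the kind invoked above can be written as (the coefficients and scaling here are illustrative, not the ones derived in the paper):

```latex
\partial_t u + D\,u\,\partial_x u
  = \partial_x^2\!\left(u^3 - u - \varepsilon^2\,\partial_x^2 u\right)
```

where u is the step slope, D measures the strength of the convective (growth-driven) term, and \varepsilon sets the interface width of the hill-and-valley structure; the limit D -> 0 recovers the standard Cahn-Hilliard equation, consistent with the "form similar to a convective Cahn-Hilliard equation" noted above.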

  1. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a ''model supplement term'' when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapon's response. In the validation analysis it is indicated that the model tends to ''exaggerate'' the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model

  2. Theoretical prediction of morphotropic compositions in Na1/2Bi1/2TiO3-based solid solutions from transition pressures

    Science.gov (United States)

    Gröting, Melanie; Albe, Karsten

    2014-02-01

    In this article we present a method based on ab initio calculations to predict compositions at morphotropic phase boundaries in lead-free perovskite solid solutions. This method utilizes the concept of flat free energy surfaces and involves the monitoring of pressure-induced phase transitions as a function of composition. As model systems, solid solutions of Na1/2Bi1/2TiO3 with the alkali-substituted Li1/2Bi1/2TiO3 and K1/2Bi1/2TiO3 and the alkaline-earth-substituted CaTiO3 and BaTiO3 are chosen. The morphotropic compositions are identified by determining the composition at which the phase transition pressure equals zero. In addition, we discuss the different effects of hydrostatic pressure (compression and tension) and chemical substitution on the antiphase tilts about the [111] axis (a⁻a⁻a⁻) present in pure Na1/2Bi1/2TiO3 and how they develop in the two solid solutions Na1/2Bi1/2TiO3-CaTiO3 and Na1/2Bi1/2TiO3-BaTiO3. Finally, we discuss the advantages and shortcomings of this simple computational approach.
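
The identification step, finding the composition at which the transition pressure crosses zero, amounts to a sign-change search with linear interpolation; a minimal sketch follows (the composition/pressure pairs in the usage are made up, not the paper's ab initio values).

```python
def morphotropic_composition(xs, pts):
    # xs: compositions x (increasing); pts: computed transition pressures p_t(x).
    # The morphotropic composition is where p_t crosses zero: locate the sign
    # change and linearly interpolate between the neighbouring compositions.
    for i in range(len(xs) - 1):
        p0, p1 = pts[i], pts[i + 1]
        if p0 == 0.0:
            return xs[i]
        if p0 * p1 < 0.0:
            return xs[i] - p0 * (xs[i + 1] - xs[i]) / (p1 - p0)
    return None  # no sign change: no morphotropic composition in this range
```

In practice the transition pressures would come from the ab initio calculations at each sampled composition, and the interpolated zero-crossing is the predicted morphotropic composition.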

  3. A probabilistic model to predict clinical phenotypic traits from genome sequencing.

    Science.gov (United States)

    Chen, Yun-Ching; Douville, Christopher; Wang, Cheng; Niknafs, Noushin; Yeo, Grace; Beleva-Guthrie, Violeta; Carter, Hannah; Stenson, Peter D; Cooper, David N; Li, Biao; Mooney, Sean; Karchin, Rachel

    2014-09-01

    Genetic screening is becoming possible on an unprecedented scale. However, its utility remains controversial. Although most variant genotypes cannot be easily interpreted, many individuals nevertheless attempt to interpret their genetic information. Initiatives such as the Personal Genome Project (PGP) and Illumina's Understand Your Genome are sequencing thousands of adults, collecting phenotypic information and developing computational pipelines to identify the most important variant genotypes harbored by each individual. These pipelines consider database and allele frequency annotations and bioinformatics classifications. We propose that the next step will be to integrate these different sources of information to estimate the probability that a given individual has specific phenotypes of clinical interest. To this end, we have designed a Bayesian probabilistic model to predict the probability of dichotomous phenotypes. When applied to a cohort from PGP, predictions of Gilbert syndrome, Graves' disease, non-Hodgkin lymphoma, and various blood groups were accurate, as individuals manifesting the phenotype in question exhibited the highest, or among the highest, predicted probabilities. Thirty-eight PGP phenotypes (26%) were predicted with area-under-the-ROC curve (AUC)>0.7, and 23 (15.8%) of these were statistically significant, based on permutation tests. Moreover, in a Critical Assessment of Genome Interpretation (CAGI) blinded prediction experiment, the models were used to match 77 PGP genomes to phenotypic profiles, generating the most accurate prediction of 16 submissions, according to an independent assessor. Although the models are currently insufficiently accurate for diagnostic utility, we expect their performance to improve with growth of publicly available genomics data and model refinement by domain experts.
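
One common way to integrate independent evidence sources into a single phenotype probability, and a plausible reading of such a model, is a Bayes update on the odds scale with naively independent evidence items; the function below is a generic sketch, not the authors' published model.

```python
def posterior_probability(prior, likelihood_ratios):
    # Bayes on the odds scale, assuming independent evidence items:
    #   posterior odds = prior odds * product of per-item likelihood ratios
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)
```

For example, a 1% population prior combined with two evidence items each ten times more likely under the phenotype lifts the posterior probability just above 50%.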

  4. Predictive modelling of complex agronomic and biological systems.

    Science.gov (United States)

    Keurentjes, Joost J B; Molenaar, Jaap; Zwaan, Bas J

    2013-09-01

    Biological systems are tremendously complex in their functioning and regulation. Studying the multifaceted behaviour and describing the performance of such complexity has challenged the scientific community for years. The reduction of real-world intricacy into simple descriptive models has therefore convinced many researchers of the usefulness of introducing mathematics into biological sciences. Predictive modelling takes such an approach another step further in that it takes advantage of existing knowledge to project the performance of a system in alternating scenarios. The ever growing amounts of available data generated by assessing biological systems at increasingly higher detail provide unique opportunities for future modelling and experiment design. Here we aim to provide an overview of the progress made in modelling over time and the currently prevalent approaches for iterative modelling cycles in modern biology. We will further argue for the importance of versatility in modelling approaches, including parameter estimation, model reduction and network reconstruction. Finally, we will discuss the difficulties in overcoming the mathematical interpretation of in vivo complexity and address some of the future challenges lying ahead. © 2013 John Wiley & Sons Ltd.

  5. Models for microtubule cargo transport coupling the Langevin equation to stochastic stepping motor dynamics: Caring about fluctuations.

    Science.gov (United States)

    Bouzat, Sebastián

    2016-01-01

    One-dimensional models coupling a Langevin equation for the cargo position to stochastic stepping dynamics for the motors constitute a relevant framework for analyzing multiple-motor microtubule transport. In this work we explore the consistency of these models focusing on the effects of the thermal noise. We study how to define consistent stepping and detachment rates for the motors as functions of the local forces acting on them in such a way that the cargo velocity and run-time match previously specified functions of the external load, which are set on the basis of experimental results. We show that due to the influence of the thermal fluctuations this is not a trivial problem, even for the single-motor case. As a solution, we propose a motor stepping dynamics which considers memory on the motor force. This model leads to better results for single-motor transport than the approaches previously considered in the literature. Moreover, it gives a much better prediction for the stall force of the two-motor case, highly compatible with the experimental findings. We also analyze the fast fluctuations of the cargo position and the influence of the viscosity, comparing the proposed model to the standard one, and we show how the differences on the single-motor dynamics propagate to the multiple motor situations. Finally, we find that the one-dimensional character of the models impedes an appropriate description of the fast fluctuations of the cargo position at small loads. We show how this problem can be solved by considering two-dimensional models.
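
A minimal member of this model class, an overdamped Langevin cargo tethered to a stochastically stepping motor with a linear force-velocity relation, can be sketched as follows; all parameter values are loosely kinesin-like illustrations, not fitted quantities from the paper.

```python
import math
import random

def simulate_cargo_transport(t_end=1.0, dt=1e-3, seed=0):
    # 1D toy model: overdamped Langevin cargo tethered by a linear spring to a
    # motor that makes 8 nm steps at a force-dependent rate.
    rng = random.Random(seed)
    gamma, k, kBT = 0.01, 1.0, 4.1         # pN s/nm, pN/nm, pN nm
    step, rate0, f_stall = 8.0, 50.0, 6.0  # nm, 1/s, pN
    x_cargo = x_motor = 0.0
    t = 0.0
    while t < t_end:
        f_spring = k * (x_motor - x_cargo)   # pN, pulls the cargo forward
        # Euler-Maruyama update of the cargo's Langevin equation
        x_cargo += (f_spring / gamma) * dt \
                   + math.sqrt(2.0 * kBT * dt / gamma) * rng.gauss(0.0, 1.0)
        # motor stepping: rate falls linearly with opposing load, zero at stall
        rate = rate0 * max(0.0, 1.0 - max(f_spring, 0.0) / f_stall)
        if rng.random() < rate * dt:
            x_motor += step
        t += dt
    return x_cargo, x_motor
```

In this memoryless version the stepping rate responds instantaneously to the fluctuating spring force; the paper's point is precisely that thermal fluctuations make such instantaneous rates inconsistent with load-velocity data, motivating its memory-based stepping dynamics.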

  6. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  7. Protein structure modeling and refinement by global optimization in CASP12.

    Science.gov (United States)

    Hong, Seung Hwan; Joung, InSuk; Flores-Canales, Jose C; Manavalan, Balachandran; Cheng, Qianyi; Heo, Seungryong; Kim, Jong Yun; Lee, Sun Young; Nam, Mikyung; Joo, Keehyoung; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung

    2018-03-01

    For protein structure modeling in the CASP12 experiment, we have developed a new protocol based on our previous CASP11 approach. The global optimization method of conformational space annealing (CSA) was applied to 3 stages of modeling: multiple sequence-structure alignment, three-dimensional (3D) chain building, and side-chain re-modeling. For better template selection and model selection, we updated our model quality assessment (QA) method with the newly developed SVMQA (support vector machine for quality assessment). For 3D chain building, we updated our energy function by including restraints generated from predicted residue-residue contacts. New energy terms for the predicted secondary structure and predicted solvent accessible surface area were also introduced. For difficult targets, we proposed a new method, LEEab, where the template term played a less significant role than it did in LEE, complemented by increased contributions from other terms such as the predicted contact term. For TBM (template-based modeling) targets, LEE performed better than LEEab, but for FM targets, LEEab was better. For model refinement, we modified our CASP11 molecular dynamics (MD) based protocol by using explicit solvents and tuning down restraint weights. Refinement results from MD simulations that used a new augmented statistical energy term in the force field were quite promising. Finally, when using inaccurate information (such as the predicted contacts), it was important to use the Lorentzian function for which the maximal penalty arising from wrong information is always bounded. © 2017 Wiley Periodicals, Inc.
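
The bounded-penalty property mentioned for the Lorentzian function is easy to see in code; the functional form below is one common soft-restraint choice and is an assumption, not copied from the authors' force field.

```python
def lorentzian_restraint(d, d0, sigma=1.0, weight=1.0):
    # Lorentzian-shaped restraint penalty: approximately quadratic near the
    # target value d0, but saturating at `weight`, so a wrong restraint (e.g.
    # from a mispredicted contact) can never dominate the total energy
    dev2 = (d - d0) ** 2
    return weight * dev2 / (dev2 + sigma ** 2)
```

A harmonic restraint would grow without bound for a badly wrong predicted contact; the saturating form caps the damage, which is the rationale given above for using it with inaccurate information.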

  8. Evaluation of hydrodynamic ocean models as a first step in larval dispersal modelling

    Science.gov (United States)

    Vasile, Roxana; Hartmann, Klaas; Hobday, Alistair J.; Oliver, Eric; Tracey, Sean

    2018-01-01

    Larval dispersal modelling, a powerful tool in studying population connectivity and species distribution, requires accurate estimates of the ocean state, on a high-resolution grid in both space (e.g. 0.5-1 km horizontal grid) and time (e.g. hourly outputs), particularly of current velocities and water temperature. These estimates are usually provided by hydrodynamic models based on which larval trajectories and survival are computed. In this study we assessed the accuracy of two hydrodynamic models around Australia - Bluelink ReANalysis (BRAN) and Hybrid Coordinate Ocean Model (HYCOM) - through comparison with empirical data from the Australian National Moorings Network (ANMN). We evaluated the models' predictions of seawater parameters most relevant to larval dispersal - temperature, u and v velocities and current speed and direction - on the continental shelf where spawning and nursery areas for major fishery species are located. The performance of each model in estimating ocean parameters was found to depend on the parameter investigated and to vary from one geographical region to another. Both BRAN and HYCOM models systematically overestimated the mean water temperature, particularly in the top 140 m of water column, with over 2 °C bias at some of the mooring stations. HYCOM model was more accurate than BRAN for water temperature predictions in the Great Australian Bight and along the east coast of Australia. Skill scores between each model and the in situ observations showed lower accuracy in the models' predictions of u and v ocean current velocities compared to water temperature predictions. For both models, the lowest accuracy in predicting ocean current velocities, speed and direction was observed at 200 m depth. Low accuracy of both model predictions was also observed in the top 10 m of the water column. BRAN had more accurate predictions of both u and v velocities in the upper 50 m of water column at all mooring station locations. While HYCOM
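
Model-observation comparisons of this kind typically reduce to a few summary statistics; below is a sketch of bias, RMSE, and the Willmott skill score (the specific skill metric used in the study is not stated here, so that choice is an assumption).

```python
import math

def bias(model, obs):
    # mean model-minus-observation error (systematic over/underestimation)
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    # root-mean-square error
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def willmott_skill(model, obs):
    # Willmott's index of agreement: 1 = perfect, values near 0 = poor skill
    ob = sum(obs) / len(obs)
    num = sum((m - o) ** 2 for m, o in zip(model, obs))
    den = sum((abs(m - ob) + abs(o - ob)) ** 2 for m, o in zip(model, obs))
    return 1.0 - num / den if den else 1.0
```

A model that runs uniformly 2 degrees warm, like the temperature overestimation reported above, shows up as a positive bias equal to the offset while the skill score stays below 1.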

  9. Evaluating the reliability of predictions made using environmental transfer models

    International Nuclear Information System (INIS)

    1989-01-01

    The development and application of mathematical models for predicting the consequences of releases of radionuclides into the environment from normal operations in the nuclear fuel cycle and in hypothetical accident conditions has increased dramatically in the last two decades. This Safety Practice publication has been prepared to provide guidance on the available methods for evaluating the reliability of environmental transfer model predictions. It provides a practical introduction to the subject, with particular emphasis given to worked examples in the text. It is intended to supplement existing IAEA publications on environmental assessment methodology. 60 refs, 17 figs, 12 tabs

  10. Determination of the mass transfer limiting step of dye adsorption onto commercial adsorbent by using mathematical models.

    Science.gov (United States)

    Marin, Pricila; Borba, Carlos Eduardo; Módenes, Aparecido Nivaldo; Espinoza-Quiñones, Fernando R; de Oliveira, Silvia Priscila Dias; Kroumov, Alexander Dimitrov

    2014-01-01

    Reactive blue 5G dye removal in a fixed-bed column packed with Dowex Optipore SD-2 adsorbent was modelled. Three mathematical models were tested in order to determine the limiting step of the mass transfer of the dye adsorption process onto the adsorbent. The mass transfer resistance was considered to be a criterion for the determination of the difference between models. The models contained information about the external, internal, or surface adsorption limiting step. In the model development procedure, two hypotheses were applied to describe the internal mass transfer resistance. First, the mass transfer coefficient was considered constant. Second, the mass transfer coefficient was considered as a function of the dye concentration in the adsorbent. The experimental breakthrough curves were obtained for different particle diameters of the adsorbent, flow rates, and feed dye concentrations in order to evaluate the predictive power of the models. The values of the mass transfer parameters of the mathematical models were estimated by using the downhill simplex optimization method. The results showed that the model that considered internal resistance with a variable mass transfer coefficient was more flexible than the other ones and this model described the dynamics of the adsorption process of the dye in the fixed-bed column better. Hence, this model can be used for optimization and column design purposes for the investigated systems and similar ones.
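
The estimation step, minimizing the sum of squared errors between a breakthrough model and measured data with the downhill simplex method, can be sketched with a compact Nelder-Mead implementation and a logistic (Thomas-type) breakthrough curve; the curve form and all values are illustrative, not the paper's transport models.

```python
import math

def nelder_mead(f, x0, steps, iters=300):
    # compact downhill simplex minimizer: reflect, expand, contract, shrink
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += steps[i]
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        cen = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [2.0 * cen[j] - worst[j] for j in range(n)]
        if f(refl) < f(best):
            expa = [3.0 * cen[j] - 2.0 * worst[j] for j in range(n)]
            simplex[-1] = expa if f(expa) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [0.5 * (cen[j] + worst[j]) for j in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink everything toward the best vertex
                simplex = [best] + [[0.5 * (best[j] + p[j]) for j in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

def breakthrough(t, kt, t50):
    # logistic (Thomas-type) breakthrough curve C/C0, clamped against overflow
    z = max(-60.0, min(60.0, -kt * (t - t50)))
    return 1.0 / (1.0 + math.exp(z))

# synthetic "measured" breakthrough data generated with kt = 0.1, t50 = 50
times = list(range(0, 101, 5))
data = [breakthrough(t, 0.1, 50.0) for t in times]
sse = lambda p: sum((breakthrough(t, p[0], p[1]) - d) ** 2
                    for t, d in zip(times, data))
kt_fit, t50_fit = nelder_mead(sse, [0.05, 30.0], steps=[0.02, 10.0])
```

The simplex is derivative-free, which is why it suits mass transfer models whose residuals come from integrating a column model rather than from a closed-form expression.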

  11. Unitarization of Koerner-Kuroda model of electromagnetic structure of octet 1/2+ baryons

    International Nuclear Information System (INIS)

    Dubnicka, S.; Dubnickova, A.Z.

    1994-10-01

    The Koerner-Kuroda model of the electromagnetic structure of octet 1/2+ baryons is restored on a more topical physical basis. Electromagnetic radii of the baryons under consideration are calculated and compared with other model predictions. By incorporating a two-cut approximation of the correct form factor analytic properties and nonzero vector-meson widths, the Koerner-Kuroda model is unitarized, so that the imaginary parts of the octet 1/2+ baryon form factors become nonzero starting from the branch point corresponding to the lowest threshold. (author). 32 refs, 16 figs, 2 tabs

  12. Low-lying 1/2⁻ hidden-strange pentaquark states in the constituent quark model

    Institute of Scientific and Technical Information of China (English)

    Hui Li; Zong-Xiu Wu; Chun-Sheng An; Hong Chen

    2017-01-01

    We investigate the spectrum of the low-lying 1/2⁻ hidden-strange pentaquark states, employing the constituent quark model and looking at two ways within that model of mediating the hyperfine interaction between quarks: Goldstone boson exchange and one-gluon exchange. Numerical results show that the lowest 1/2⁻ hidden-strange pentaquark state in the Goldstone boson exchange model lies at ~1570 MeV, so this pentaquark configuration may form a notable component in S11(1535) if the Goldstone boson exchange model is applied. This is consistent with the prediction that S11(1535) couples very strongly to strangeness channels.

  13. Model Predictive Control of a Wave Energy Converter with Discrete Fluid Power Power Take-Off System

    Directory of Open Access Journals (Sweden)

    Anders Hedegaard Hansen

    2018-03-01

    Full Text Available Wave power extraction algorithms for wave energy converters are normally designed without taking system losses into account, leading to suboptimal power extraction. In the current work, a model predictive power extraction algorithm is designed for a discrete fluid power power take-off system. It is shown how the quantized nature of a discrete fluid power system may be included in a new model predictive control algorithm, leading to a significant increase in the harvested power. A detailed investigation of the influence of the prediction horizon and the time step is reported. Furthermore, it is shown how the inclusion of a loss model may increase the energy output. Based on the presented results it is concluded that power extraction algorithms based on model predictive control principles are both feasible and favorable for use in a discrete fluid power power take-off system for point absorber wave energy converters.
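
Because the admissible PTO forces form a small discrete set, a model predictive step can simply enumerate all quantized force sequences over the horizon; the toy mass-spring-damper absorber below illustrates the receding-horizon idea and is not the article's wave energy converter model.

```python
import itertools
import math

def best_quantized_sequence(x0, v0, levels, horizon, dt, predict_wave_force):
    # brute-force MPC step for a quantized power take-off: enumerate every
    # discrete PTO-force sequence over the horizon, simulate a toy absorber
    # (unit mass-spring-damper), keep the sequence that maximizes harvested
    # energy, and return its first element (receding-horizon principle).
    m, k, c = 1.0, 1.0, 0.1
    best_energy, best_seq = -math.inf, None
    for seq in itertools.product(levels, repeat=horizon):
        x, v, energy = x0, v0, 0.0
        for i, u in enumerate(seq):
            energy += u * v * dt                # power absorbed by the PTO
            a = (predict_wave_force(i * dt) - k * x - c * v - u) / m
            v += a * dt
            x += v * dt
        if energy > best_energy:
            best_energy, best_seq = energy, seq
    return best_seq[0], best_energy

u0, harvested = best_quantized_sequence(
    x0=0.0, v0=1.0, levels=(0.0, 0.5, 1.0), horizon=4, dt=0.05,
    predict_wave_force=lambda t: math.sin(2.0 * t))
```

The enumeration cost grows as len(levels) ** horizon, which is one reason the influence of the prediction horizon and the time step deserves the detailed investigation reported above.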

  14. Development of Building Thermal Load and Discomfort Degree Hour Prediction Models Using Data Mining Approaches

    Directory of Open Access Journals (Sweden)

    Yaolin Lin

    2018-06-01

    Full Text Available Thermal load and indoor comfort level are two important building performance indicators, rapid predictions of which can help significantly reduce the computation time during design optimization. In this paper, a three-step approach is used to develop and evaluate prediction models. Firstly, the Latin Hypercube Sampling Method (LHSM) is used to generate a representative 19-dimensional design database and DesignBuilder is then used to obtain the thermal load and discomfort degree hours through simulation. Secondly, samples from the database are used to develop and validate seven prediction models, using data mining approaches including multilinear regression (MLR), chi-square automatic interaction detector (CHAID), exhaustive CHAID (ECHAID), back-propagation neural network (BPNN), radial basis function network (RBFN), classification and regression trees (CART), and support vector machines (SVM). It is found that the MLR and BPNN models outperform the others in the prediction of thermal load with average absolute error of less than 1.19%, and the BPNN model is the best at predicting discomfort degree hour with 0.62% average absolute error. Finally, two hybrid models, MLR + BPNN and MLR-BPNN, are developed. The MLR-BPNN models are found to be the best prediction models, with average absolute error of 0.82% in thermal load and 0.59% in discomfort degree hour.
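
The first step, Latin Hypercube Sampling, is compact enough to sketch in full: each dimension is split into n equal strata, one point is drawn per stratum, and the strata are shuffled independently per dimension. The implementation below is a generic LHSM on the unit hypercube, not the authors' code.

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    # split each dimension into n_samples equal strata, draw one point per
    # stratum, then shuffle the strata independently in every dimension so
    # each design point is stratified in all dimensions at once
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            samples[i][d] = (strata[i] + rng.random()) / n_samples
    return samples
```

For a design study like the one above, each unit-interval coordinate would then be rescaled to the range of the corresponding design variable before simulation.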

  15. 12C-12C total and reaction cross-section between 6 and 85 MeV/A from an optical model analysis

    International Nuclear Information System (INIS)

    Brandan, M.E.

    1982-07-01

    Values of σ_R and σ_T are obtained from an optical model analysis of 12C-12C elastic scattering data between 6 and 85 MeV/A. They confirm the general trends predicted by DeVries and collaborators but show discrepancies in the region of the maxima. The optical model analysis indicates a significant decrease of the real potential strength with energy.

  16. Real-time prediction models for output power and efficiency of grid-connected solar photovoltaic systems

    International Nuclear Information System (INIS)

    Su, Yan; Chan, Lai-Cheong; Shu, Lianjie; Tsui, Kwok-Leung

    2012-01-01

    Highlights: ► We develop online prediction models for solar photovoltaic system performance. ► The proposed prediction models are simple but reasonably accurate. ► The maximum monthly average minutely efficiency varies from 10.81% to 12.63%. ► The average efficiency tends to be slightly higher in winter months. - Abstract: This paper develops new real-time prediction models for the output power and energy efficiency of solar photovoltaic (PV) systems. These models were validated using measured data from a grid-connected solar PV system in Macau. Both time frames based on yearly averages and monthly averages are considered. It is shown that the prediction model for the yearly/monthly average of the minutely output power fits the measured data very well, with a high value of R². The online prediction model for system efficiency is based on the ratio of the predicted output power to the predicted solar irradiance. This ratio model is shown to fit the intermediate phase (9 am to 4 pm) very well, but it is not accurate for the growth and decay phases, where the system efficiency is near zero. However, it can still serve a useful purpose for practitioners, as most PV systems work in the most efficient manner over this period. It is shown that the maximum monthly average minutely efficiency varies over a small range of 10.81% to 12.63% in different months, with slightly higher efficiency in winter months.
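The ratio model described above is straightforward to express in code. A minimal sketch, with invented numbers rather than the Macau system's measurements:

```python
def pv_efficiency(p_out_w, irradiance_w_per_m2, area_m2):
    """System efficiency as the ratio of output power to incident solar
    power, mirroring the abstract's ratio model: eta = P_pred / (G_pred * A)."""
    incident_w = irradiance_w_per_m2 * area_m2
    if incident_w <= 0.0:       # growth/decay phases: near-zero irradiance,
        return 0.0              # where the ratio model is unreliable anyway
    return p_out_w / incident_w

# illustrative mid-day values for a hypothetical 10 m^2 array
eta = pv_efficiency(p_out_w=1150.0, irradiance_w_per_m2=900.0, area_m2=10.0)
# eta is about 0.128, i.e. roughly in the 10.81-12.63% band the paper reports
```

The guard for near-zero irradiance reflects the abstract's caveat that the ratio model breaks down in the growth and decay phases of the day.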

  17. A short-range multi-model ensemble weather prediction system for South Africa

    CSIR Research Space (South Africa)

    Landman, S

    2010-09-01

    Full Text Available prediction system (EPS) at the South African Weather Service (SAWS) are examined. The ensemble consists of different forecasts from the 12-km LAM of the UK Met Office Unified Model (UM) and the Conformal-Cubic Atmospheric Model (CCAM) covering the South...

  18. First steps towards modelling high burnup effect in UO2 fuel

    Energy Technology Data Exchange (ETDEWEB)

    O'Carroll, C; Lassmann, K; Laar, J Van De; Walker, C T [CEC Joint Research Centre, Karlsruhe (Germany)

    1997-08-01

    High burnup initiates a process that can lead to major microstructural changes near the edge of the fuel: formation of subgrains, loss of matrix fission gas, and an increase in porosity. A consequence of this is a decrease of thermal conductivity near the edge of the fuel, which may have major implications for the performance of LWR fuels at higher burnup. The mechanism for the changes in grain structure, the apparent depletion of Xe, and the increase in porosity is associated with the high fission density at the fuel periphery. This in turn is due to the preferential capture of epithermal neutrons in the resonances of 238U. The new model TUBRNP predicts the radial burnup profile as a function of time, together with the radial profile of plutonium. The model has been validated with data from LWR UO2 fuels with enrichments in the range 2 to 8.25% and burnups between 21 and 75 GWd/t. It has been reported that at high burnup EPMA measures a sharp decrease in the concentration of Xe near the fuel surface. This loss of Xe is interpreted as a signal that the gas has been swept out of the original grains into pores: this 'missing' Xe has been measured by XRF. It has been noted experimentally that the restructuring (Xe depletion and changes in grain structure) has an onset threshold local burnup in the region of 70 to 80 GWd/t; a specific value was taken for use in the model. For a given fuel, TUBRNP predicts the local burnup profile, and the depth corresponding to the threshold value is taken to be the thickness of the Xe-depleted region. The theoretical predictions have been compared with experimental data. The results are presented and should be seen as a first step in the development of a more detailed model of this phenomenon. (author). 22 refs, 9 figs, 2 tabs.

  19. HESS Opinions: Hydrologic predictions in a changing environment: behavioral modeling

    Directory of Open Access Journals (Sweden)

    S. J. Schymanski

    2011-02-01

    Full Text Available Most hydrological models are valid in at most a few places and cannot reasonably be transferred to other places or to far-distant time periods. Transfer in space is difficult because the models are conditioned on past observations at particular places to define parameter values and unobservable processes that are needed to fully characterize the structure and functioning of the landscape. Transfer in time has to deal with the likely temporal changes to both parameters and processes under future changed conditions. This remains an important obstacle to addressing some of the most urgent prediction questions in hydrology, such as prediction in ungauged basins and prediction under global change. In this paper, we propose a new approach to catchment hydrological modeling, based on universal principles that do not change in time and that remain valid across many places. The key to this framework, which we call behavioral modeling, is to assume that there are universal and time-invariant organizing principles that can be used to identify the most appropriate model structure (including parameter values) and responses for a given ecosystem at a given moment in time. These organizing principles may be derived from fundamental physical or biological laws, or from empirical laws that have been demonstrated to be time-invariant and to hold at many places and scales. Much fundamental research remains to be undertaken to help discover these organizing principles, on the basis of exploration of observed patterns of landscape structure and hydrological behavior and their interpretation as legacy effects of past co-evolution of climate, soils, topography, vegetation and humans. Our hope is that the new behavioral modeling framework will be a step forward towards a new vision for hydrology, where models are capable of more confidently predicting the behavior of catchments beyond what has been observed or experienced before.

  20. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO project "Intelligent wind power prediction systems" (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities with respect to the different numerical weather predictions actually available to the project.

  1. Porosity Prediction of Plain Weft Knitted Fabrics

    Directory of Open Access Journals (Sweden)

    Muhammad Owais Raza Siddiqui

    2014-12-01

    Full Text Available Wearing comfort of clothing depends on the air permeability, moisture absorbency and wicking properties of the fabric, which are related to its porosity. In this work, a plug-in is developed using a Python script and incorporated in Abaqus/CAE for the prediction of the porosity of plain weft knitted fabrics. The plug-in is able to automatically generate 3D solid and multifilament weft knitted fabric models and accurately determine the porosity of fabrics in two steps. In this work, plain weft knitted fabrics made of monofilament, multifilament and spun yarn made of staple fibers were used to evaluate the effectiveness of the developed plug-in. In the case of staple fiber yarn, intra-yarn porosity was considered in the calculation of porosity. The first step is to develop a 3D geometrical model of the plain weft knitted fabric, and the second step is to calculate the porosity of the fabric using the geometrical parameters of the 3D weft knitted fabric model generated in step one. The predicted porosity of the plain weft knitted fabric is extracted in the second step and displayed in the message area. The predicted results obtained from the plug-in have been compared with the experimental results obtained from previously developed models, and they agree well.
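The second step, computing porosity from the 3D geometry, reduces to a void-fraction calculation. A minimal sketch with assumed unit-cell and yarn dimensions (the plug-in itself derives these from the Abaqus model, and the intra-yarn term applies only to spun staple-fibre yarns):

```python
import math

def fabric_porosity(yarn_volume_mm3, cell_volume_mm3, intra_yarn_porosity=0.0):
    """Total porosity = 1 - solid fraction of the unit cell.
    For staple-fibre (spun) yarns, part of the yarn volume is itself pore
    space, so the solid volume is reduced by the intra-yarn porosity."""
    solid_mm3 = yarn_volume_mm3 * (1.0 - intra_yarn_porosity)
    return 1.0 - solid_mm3 / cell_volume_mm3

# hypothetical loop geometry: monofilament yarn, circular cross-section
yarn_len_mm = 4.2                 # yarn length in one unit cell (assumed)
yarn_d_mm = 0.2                   # yarn diameter (assumed)
v_yarn = math.pi * (yarn_d_mm / 2.0) ** 2 * yarn_len_mm
v_cell = 1.0 * 1.0 * 0.4          # cell width x height x thickness, mm^3
porosity = fabric_porosity(v_yarn, v_cell)          # about 0.67 here
```

Accounting for intra-yarn porosity always increases the predicted total porosity, since part of the yarn envelope is no longer counted as solid.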

  2. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
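The Ogata-Banks solution adopted above as the physically-based data model is the classical closed form for 1-D advection-dispersion under steady flow with a continuous source at x = 0. A minimal implementation (velocity, dispersion and source values below are illustrative, not the EIT site's parameters):

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata & Banks (1961) solution of 1-D advective-dispersive transport:
    C(x,t) = (c0/2) * [erfc((x - v t)/(2 sqrt(D t)))
                       + exp(v x / D) * erfc((x + v t)/(2 sqrt(D t)))]
    with steady pore velocity v and dispersion coefficient D."""
    if t <= 0.0:
        return 0.0
    denom = 2.0 * math.sqrt(D * t)
    term1 = math.erfc((x - v * t) / denom)
    arg = v * x / D
    # the second term is a small correction; guard exp() against overflow
    term2 = math.exp(arg) * math.erfc((x + v * t) / denom) if arg < 700.0 else 0.0
    return 0.5 * c0 * (term1 + term2)

# far behind the advancing front, concentration approaches the source value c0
c_behind = ogata_banks(x=1.0, t=1000.0, v=0.05, D=0.01)
```

Fitting v, D and c0 of this curve to early monitoring data is what gives the data-driven ensemble a physically constrained shape to extrapolate with.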

  3. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Cleaning up a salt spill : predictive modelling and monitoring natural attenuation to save remedial costs

    Energy Technology Data Exchange (ETDEWEB)

    Tsang, B.; Shaikh, A.A. [EBA Engineering Consultants Ltd., Calgary, AB (Canada)

    2006-07-01

    Predictive modelling and monitoring of natural attenuation to save remedial costs in cleaning up a salt spill were discussed with reference to a site located in central Alberta, where a pipeline break in 2002 from a corroded pipe resulted in a large spill of produced water and oil. Remedial alternatives and an assessment of the site were presented. This included an electromagnetic survey in 2004, the groundwater flow regime, soil and groundwater quality data, a vegetation survey, and predictive modelling versus observed water quality. Aerial photos and illustrations of the site were provided. A conceptual salt leaching and transport model was proposed as a solution. Model calculation results were also presented. Lastly, the presentation discussed some important considerations for predictive modelling and next steps for the site. These included continued monitoring, implementation of a restoration plan, and engagement of stakeholders such as Alberta Environment and the site landowner. tabs., figs.

  5. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  6. Introducing a Clustering Step in a Consensus Approach for the Scoring of Protein-Protein Docking Models

    KAUST Repository

    Chermak, Edrisse; De Donato, Renato; Lensink, Marc F.; Petta, Andrea; Serra, Luigi; Scarano, Vittorio; Cavallo, Luigi; Oliva, Romina

    2016-01-01

    Correctly scoring protein-protein docking models to single out native-like ones is an open challenge. It is also an object of assessment in CAPRI (Critical Assessment of PRedicted Interactions), the community-wide blind docking experiment. We introduced in the field the first pure consensus method, CONSRANK, which ranks models based on their ability to match the most conserved contacts in the ensemble they belong to. In CAPRI, scorers are asked to evaluate a set of available models and select the top ten ones, based on their own scoring approach. Scorers' performance is ranked based on the number of targets/interfaces for which they could provide at least one correct solution. In such terms, blind testing in CAPRI Round 30 (a joint prediction round with CASP11) has shown that critical cases for CONSRANK are represented by targets showing multiple interfaces or for which only a very small number of correct solutions are available. To address these challenging cases, CONSRANK has now been modified to include a contact-based clustering of the models as a preliminary step of the scoring process. We used an agglomerative hierarchical clustering based on the number of common inter-residue contacts within the models. Two criteria, with different thresholds, were explored in the cluster generation, setting either the number of common contacts or of total clusters. For each clustering approach, after selecting the top (most populated) ten clusters, CONSRANK was run on these clusters and the top-ranked model for each cluster was selected, in the limit of 10 models per target. We have applied our modified scoring approach, Clust-CONSRANK, to SCORE_SET, a set of CAPRI scoring models made recently available by CAPRI assessors, and to the subset of homodimeric targets in CAPRI Round 30 for which CONSRANK failed to include a correct solution within the ten selected models. Results show that, for the challenging cases, the clustering step typically enriches the ten top ranked
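The preliminary clustering step described above, agglomerative and driven by shared inter-residue contacts, can be sketched in miniature. The greedy single-linkage loop and the toy contact sets below are illustrative stand-ins for Clust-CONSRANK's actual implementation and thresholds:

```python
from itertools import combinations

def shared_contacts(a, b):
    """Number of inter-residue contacts two docking models have in common."""
    return len(set(a) & set(b))

def cluster_models(models, min_shared):
    """Greedy single-linkage agglomerative clustering: merge two clusters
    whenever any pair of their models shares >= min_shared contacts
    (one of the two clustering criteria the abstract mentions; the
    threshold here is made up). Returns clusters, most populated first,
    as when selecting the top ten clusters for scoring."""
    clusters = [[i] for i in range(len(models))]
    merged = True
    while merged:
        merged = False
        for (i, ci), (j, cj) in combinations(list(enumerate(clusters)), 2):
            if any(shared_contacts(models[a], models[b]) >= min_shared
                   for a in ci for b in cj):
                clusters[i] = ci + cj
                del clusters[j]
                merged = True
                break
    return sorted(clusters, key=len, reverse=True)

# toy models: contacts as (receptor_residue, ligand_residue) pairs
m0 = [(1, 7), (2, 8), (3, 9)]
m1 = [(1, 7), (2, 8), (4, 9)]    # shares 2 contacts with m0
m2 = [(5, 11), (6, 12)]          # an unrelated interface
print(cluster_models([m0, m1, m2], min_shared=2))   # → [[0, 1], [2]]
```

Running CONSRANK inside each cluster and taking the top-ranked member per cluster is then what spreads the ten selected models across distinct interfaces.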

  7. Introducing a Clustering Step in a Consensus Approach for the Scoring of Protein-Protein Docking Models

    KAUST Repository

    Chermak, Edrisse

    2016-11-15

    Correctly scoring protein-protein docking models to single out native-like ones is an open challenge. It is also an object of assessment in CAPRI (Critical Assessment of PRedicted Interactions), the community-wide blind docking experiment. We introduced in the field the first pure consensus method, CONSRANK, which ranks models based on their ability to match the most conserved contacts in the ensemble they belong to. In CAPRI, scorers are asked to evaluate a set of available models and select the top ten ones, based on their own scoring approach. Scorers' performance is ranked based on the number of targets/interfaces for which they could provide at least one correct solution. In such terms, blind testing in CAPRI Round 30 (a joint prediction round with CASP11) has shown that critical cases for CONSRANK are represented by targets showing multiple interfaces or for which only a very small number of correct solutions are available. To address these challenging cases, CONSRANK has now been modified to include a contact-based clustering of the models as a preliminary step of the scoring process. We used an agglomerative hierarchical clustering based on the number of common inter-residue contacts within the models. Two criteria, with different thresholds, were explored in the cluster generation, setting either the number of common contacts or of total clusters. For each clustering approach, after selecting the top (most populated) ten clusters, CONSRANK was run on these clusters and the top-ranked model for each cluster was selected, in the limit of 10 models per target. We have applied our modified scoring approach, Clust-CONSRANK, to SCORE_SET, a set of CAPRI scoring models made recently available by CAPRI assessors, and to the subset of homodimeric targets in CAPRI Round 30 for which CONSRANK failed to include a correct solution within the ten selected models. Results show that, for the challenging cases, the clustering step typically enriches the ten top ranked

  8. Modeling Seizure Self-Prediction: An E-Diary Study

    Science.gov (United States)

    Haut, Sheryl R.; Hall, Charles B.; Borkowski, Thomas; Tennen, Howard; Lipton, Richard B.

    2013-01-01

    Purpose: A subset of patients with epilepsy successfully self-predicted seizures in a paper diary study. We conducted an e-diary study to ensure that prediction precedes seizures, and to characterize the prodromal features and time windows that underlie self-prediction. Methods: Subjects 18 or older with LRE and ≥3 seizures/month maintained an e-diary, reporting AM/PM data daily, including mood, premonitory symptoms, and all seizures. Self-prediction was rated by asking, "How likely are you to experience a seizure [time frame]?" Five choices ranged from almost certain (>95% chance) to very unlikely. Relative odds of seizure (OR) within time frames were examined using Poisson models with log-normal random effects to adjust for multiple observations. Key Findings: Nineteen subjects reported 244 eligible seizures. The OR for prediction choices within 6 hrs was as high as 9.31 (1.92, 45.23) for "almost certain". Prediction was most robust within 6 hrs of diary entry, and remained significant up to 12 hrs. For the 9 best predictors, average sensitivity was 50%. Older age contributed to successful self-prediction, and self-prediction appeared to be driven by mood and premonitory symptoms. In multivariate modeling of seizure occurrence, self-prediction (2.84; 1.68, 4.81), favorable change in mood (0.82; 0.67, 0.99) and number of premonitory symptoms (1.11; 1.00, 1.24) were significant. Significance: Some persons with epilepsy can self-predict seizures. In these individuals, the odds of a seizure following a positive prediction are high. Predictions were robust, not attributable to recall bias, and were related to self-awareness of mood and premonitory features. The 6-hour prediction window is suitable for the development of pre-emptive therapy. PMID:24111898
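The headline quantity, the odds of a seizure after a positive prediction, can be illustrated with a plain 2x2 odds ratio. The study's actual estimates came from Poisson models with log-normal random effects to handle repeated diary entries per subject; the counts below are invented:

```python
def odds_ratio(n11, n10, n01, n00):
    """Unadjusted odds ratio from a 2x2 table:
    n11 = seizure after a positive prediction, n10 = no seizure after positive,
    n01 = seizure after a negative prediction, n00 = no seizure after negative.
    OR = (n11 * n00) / (n10 * n01)."""
    return (n11 * n00) / (n10 * n01)

# hypothetical diary-entry counts, for illustration only
or_pos = odds_ratio(30, 10, 20, 40)
# or_pos = 6.0: odds of a seizure are 6x higher following a positive prediction
```

The random-effects model in the paper does the same comparison while accounting for the fact that entries from the same subject are correlated.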

  9. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple (one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions) to the complex (multidimensional models that are constrained by several types of data and result in more accurate predictions). While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  10. Fast Simulation of 3-D Surface Flanging and Prediction of the Flanging Lines Based On One-Step Inverse Forming Algorithm

    International Nuclear Information System (INIS)

    Bao Yidong; Hu Sibo; Lang Zhikui; Hu Ping

    2005-01-01

    A fast simulation scheme for 3D curved-binder flanging and blank shape prediction of sheet metal based on the one-step inverse finite element method is proposed, in which total plasticity theory and the proportional loading assumption are used. The scheme can actually be used to simulate 3D flanging with a complex curved binder shape, and is suitable for simulating any type of flanging model by numerically determining the flanging height and flanging lines. Compared with other methods, such as analytic algorithms and the blank sheet-cut return method, the prominent advantage of the present scheme is that it can directly predict the location of the 3D flanging lines when simulating the flanging process. Therefore, the prediction time for flanging lines is obviously decreased. Two typical 3D curved-binder flanging cases, including stretch and shrink characteristics, are simulated with both the present scheme and an incremental FE non-inverse algorithm based on incremental plasticity theory, which shows the validity and high efficiency of the present scheme.

  11. Modeling and control design of a stand alone wind energy conversion system based on functional model predictive control

    Energy Technology Data Exchange (ETDEWEB)

    Kassem, Ahmed M. [Beni-Suef University, Electrical Dept., Beni Suef (Egypt)

    2012-09-15

    This paper investigates the application of the model predictive control (MPC) approach to control the voltage and frequency of a stand-alone wind generation system. This scheme consists of a wind turbine which drives an induction generator feeding an isolated load. A static VAR compensator is connected at the induction generator terminals to regulate the load voltage. The rotor speed, and thereby the load frequency, are controlled by adjusting the mechanical power input using the blade pitch angle. The MPC is used to calculate the optimal control actions, including system constraints. To alleviate the computational effort and to reduce numerical problems, particularly with a large prediction horizon, an exponentially weighted functional model predictive control (FMPC) is employed. Digital simulations have been carried out in order to validate the effectiveness of the proposed scheme. The proposed controller has been tested through step changes in the wind speed and the load impedance. Simulation results show that adequate performance of the proposed wind energy scheme has been achieved. Moreover, this scheme is robust against parameter variations and eliminates the influence of modeling and measurement errors. (orig.)
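The receding-horizon idea with an exponentially weighted cost can be shown on a toy scalar plant. Everything here, the model x+ = a·x + b·u, the constant-input parametrization and the brute-force search, is a schematic stand-in for the paper's FMPC of the wind system, not its actual formulation:

```python
def mpc_step(x0, x_ref, a, b, horizon=10, decay=0.8, u_max=1.0):
    """One receding-horizon move for a scalar linear model x+ = a*x + b*u.
    The tracking cost is weighted by decay**k, echoing the exponentially
    weighted functional MPC idea; the input constraint |u| <= u_max is
    enforced by searching a grid of candidate inputs."""
    best_u, best_cost = 0.0, float("inf")
    for i in range(201):                       # candidates in [-u_max, u_max]
        u = -u_max + (2.0 * u_max) * (i / 200.0)
        x, cost = x0, 0.0
        for k in range(horizon):               # simulate over the horizon
            x = a * x + b * u
            cost += (decay ** k) * (x - x_ref) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u                              # apply first move, then re-plan

# closed loop: drive the state toward the reference under the input bound
a, b, x = 0.9, 0.5, 0.0
for _ in range(20):
    x = a * x + b * mpc_step(x, x_ref=4.0, a=a, b=b)
```

While far from the reference the optimizer saturates at the input bound, then backs off as the state settles, which is exactly the constraint-handling benefit MPC brings over an unconstrained regulator.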

  12. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  13. Multivariate Models for Prediction of Human Skin Sensitization ...

    Science.gov (United States)

    One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays (the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens™ assay), six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine
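The logistic-regression side of the approach above can be sketched in miniature: combine binary assay calls into a single hazard prediction. This toy trainer and the "2 of 3 positive calls" labels are invented; the real models were trained on 72 substances with the assay and physicochemical variable groups the abstract lists:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=3000):
    """Minimal logistic regression fitted by per-sample gradient descent
    on the log-loss. A toy stand-in for the paper's models."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted hazard probability
            g = p - yi                       # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if b + sum(wj * xj for wj, xj in zip(w, x)) >= 0.0 else 0

# toy binary calls: [DPRA positive, h-CLAT positive, read-across positive];
# made-up rule: hazard label = at least two positive calls
X = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
     [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = train_logistic(X, y)
preds = [predict(w, b, x) for x in X]
```

With separable toy data the fitted weights recover the majority rule; the real models additionally weigh continuous inputs such as log P.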

  14. Double-step processes in the 12C(p,d)11C reaction at 45 MeV

    International Nuclear Information System (INIS)

    Couvert, Pierre.

    1974-01-01

    The 12C(p,d)11C pick-up reaction was performed with a 45 MeV proton beam. A 130 keV energy resolution was obtained, and angular distributions of nine of the ten first levels of 11C were extracted over a large angular range. Assuming only direct neutron transfer, the strong relative excitation of high-spin levels cannot be reproduced by a DWBA analysis. The double-step process assumption seems to be verified by a systematic analysis of the (p,d) reaction mechanisms. This analysis is done in the coupled-channel formalism for the five first negative-parity states of 11C. The 3/2- ground state is essentially populated by the direct transfer of a p3/2 neutron. The contribution of a double-step process, via the 2+ inelastic excitation of 12C, is important for the four other states. A mechanism which assumes deuteron inelastic scattering on the 11C final nucleus after the neutron transfer cannot be neglected and improves the fits when it is taken into account. [fr]

  15. A regional neural network model for predicting mean daily river water temperature

    Science.gov (United States)

    Wagner, Tyler; DeWeber, Jefferson Tyrell

    2014-01-01

    Water temperature is a fundamental property of river habitat and often a key aspect of river resource management, but measurements to characterize thermal regimes are not available for most streams and rivers. As such, we developed an artificial neural network (ANN) ensemble model to predict mean daily water temperature in 197,402 individual stream reaches during the warm season (May–October) throughout the native range of brook trout Salvelinus fontinalis in the eastern U.S. We compared four models with different groups of predictors to determine how well water temperature could be predicted by climatic, landform, and land cover attributes, and used the median prediction from an ensemble of 100 ANNs as our final prediction for each model. The final model included air temperature, landform attributes and forested land cover and predicted mean daily water temperatures with moderate accuracy as determined by root mean squared error (RMSE) at 886 training sites with data from 1980 to 2009 (RMSE = 1.91 °C). Based on validation at 96 sites (RMSE = 1.82) and separately for data from 2010 (RMSE = 1.93), a year with relatively warmer conditions, the model was able to generalize to new stream reaches and years. The most important predictors were mean daily air temperature, prior 7 day mean air temperature, and network catchment area according to sensitivity analyses. Forest land cover at both riparian and catchment extents had relatively weak but clear negative effects. Predicted daily water temperature averaged for the month of July matched expected spatial trends with cooler temperatures in headwaters and at higher elevations and latitudes. Our ANN ensemble is unique in predicting daily temperatures throughout a large region, while other regional efforts have predicted at relatively coarse time steps. The model may prove a useful tool for predicting water temperatures in sampled and unsampled rivers under current conditions and future projections of climate
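The ensemble-aggregation step above, taking the median across ANN members as the final prediction, is easy to express directly. The temperatures below are invented, not the study's stream data:

```python
import statistics

def ensemble_median(predictions_per_model):
    """Final prediction per observation = median across ensemble members,
    as the abstract describes for its 100-member ANN ensemble."""
    return [statistics.median(obs) for obs in zip(*predictions_per_model)]

def rmse(pred, obs):
    """Root mean squared error, the accuracy metric the study reports."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

# hypothetical mean daily water temperatures (deg C) from 5 ensemble members
members = [
    [18.2, 20.1, 15.3],
    [18.6, 19.7, 15.9],
    [17.9, 20.4, 15.1],
    [18.4, 19.9, 15.6],
    [21.0, 23.0, 18.0],   # one poorly trained member barely moves the median
]
observed = [18.3, 20.0, 15.5]
final = ensemble_median(members)          # [18.4, 20.1, 15.6]
error = rmse(final, observed)
```

The median's robustness to the outlying member is the practical reason for preferring it over the ensemble mean.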

  16. Validation of water sorption-based clay prediction models for calcareous soils

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Razzaghi, Fatemeh; Moosavi, Ali

    2017-01-01

    Soil particle size distribution (PSD), particularly the active clay fraction, mediates soil engineering, agronomic and environmental functions. The tedious and costly nature of traditional methods of determining PSD prompted the development of water sorption-based models for determining the clay fraction. The applicability of such models to semi-arid soils with significant amounts of calcium carbonate and/or gypsum is unknown. The objective of this study was to validate three water sorption-based clay prediction models for 30 calcareous soils from Iran and identify the effect of CaCO3 on prediction accuracy. The soils had clay content ranging from 9 to 61% and CaCO3 from 24 to 97%. The three water sorption models considered showed a reasonably fair prediction of the clay content from water sorption at 28% relative humidity (RMSE and ME values ranging from 10.6 to 12.1 and −8.1 to −4...

  17. Breast cancer risks and risk prediction models.

    Science.gov (United States)

    Engel, Christoph; Fischer, Christine

    2015-02-01

    BRCA1/2 mutation carriers have a considerably increased risk to develop breast and ovarian cancer. The personalized clinical management of carriers and other at-risk individuals depends on precise knowledge of the cancer risks. In this report, we give an overview of the present literature on empirical cancer risks, and we describe risk prediction models that are currently used for individual risk assessment in clinical practice. Cancer risks show large variability between studies. Breast cancer risks are at 40-87% for BRCA1 mutation carriers and 18-88% for BRCA2 mutation carriers. For ovarian cancer, the risk estimates are in the range of 22-65% for BRCA1 and 10-35% for BRCA2. The contralateral breast cancer risk is high (10-year risk after first cancer 27% for BRCA1 and 19% for BRCA2). Risk prediction models have been proposed to provide more individualized risk prediction, using additional knowledge on family history, mode of inheritance of major genes, and other genetic and non-genetic risk factors. User-friendly software tools have been developed that serve as basis for decision-making in family counseling units. In conclusion, further assessment of cancer risks and model validation is needed, ideally based on prospective cohort studies. To obtain such data, clinical management of carriers and other at-risk individuals should always be accompanied by standardized scientific documentation.

  18. Nudging and predictability in regional climate modelling: investigation in a nested quasi-geostrophic model

    Science.gov (United States)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2010-05-01

    In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should in principle tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields. However, because the driving large-scale fields are generally provided at a much lower frequency than the model time step (e.g., 6-hourly analyses), with a basic interpolation between the fields, the optimum nudging time differs from zero, while remaining smaller than the predictability time.
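    The Newtonian-relaxation (nudging) term discussed above can be illustrated on a toy scalar model; the state, driving value, time step, and relaxation time below are illustrative assumptions, not values from the study:

```python
def nudge_step(u, u_driving, dt, tau):
    """One explicit time step of Newtonian relaxation (indiscriminate nudging):
    the model state u is pulled toward the driving field u_driving with
    relaxation time tau. Smaller tau means stronger nudging."""
    return u + dt * (u_driving - u) / tau

# Relax an initial error toward a constant driving value.
u, u_drv, dt, tau = 0.0, 1.0, 0.1, 0.5
for _ in range(100):
    u = nudge_step(u, u_drv, dt, tau)
print(round(u, 3))  # prints 1.0: the state has converged to the driving field
```

    In a real model the nudging term is added to the physical tendencies, and in spectral nudging it is applied only to the large-scale (low-wavenumber) part of the state.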

  19. Leading Change Step-by-Step: Tactics, Tools, and Tales

    Science.gov (United States)

    Spiro, Jody

    2010-01-01

    "Leading Change Step-by-Step" offers a comprehensive and tactical guide for change leaders. Spiro's approach has been field-tested for more than a decade and proven effective in a wide variety of public sector organizations including K-12 schools, universities, international agencies and non-profits. The book is filled with proven tactics for…

  20. Lower hybrid current drive: an overview of simulation models, benchmarking with experiment, and predictions for future devices

    International Nuclear Information System (INIS)

    Bonoli, P.T.; Barbato, E.; Imbeaux, F.

    2003-01-01

    This paper reviews the status of lower hybrid current drive (LHCD) simulation and modeling. We first discuss modules used for wave propagation, absorption, and current drive, with particular emphasis placed on comparing exact numerical solutions of the Fokker-Planck equation in two dimensions with solution methods that employ one-dimensional and adjoint approaches. We also survey model predictions for LHCD in past and present experiments, showing detailed comparisons between simulated and observed current drive efficiencies and hard X-ray profiles. Finally, we discuss several model predictions for lower hybrid current profile control in proposed next-step reactor options. (authors)

  1. The clinical features of alcohol use disorders in biological and step-fathers that predict risk for alcohol use disorders in offspring.

    Science.gov (United States)

    Kendler, Kenneth S; Ohlsson, Henrik; Edwards, Alexis; Sundquist, Jan; Sundquist, Kristina

    2017-12-01

    Given that Alcohol Use Disorder (AUD) is clinically heterogeneous, can we, in a large epidemiological sample using public registries, identify clinical features of AUD cases in biological and step-fathers that index, respectively, genetic and familial-environmental risk for AUD in their offspring? From all father-offspring pairs where the father had AUD and the offspring was born 1960-1990, we identified not-lived-with (NLW) biological fathers (n = 38,376) and step-father pairs (n = 9,711). The relationship between clinical and historical features of the father's AUD and risk for AUD in offspring was assessed by linear hazard regression. Age at first registration for AUD and recurrence of AUD registration were significantly stronger predictors of risk for AUD in the offspring of NLW fathers than in step-fathers. By contrast, the number of AUD registrations in NLW fathers and step-fathers was equally predictive of risk for AUD in offspring. However, while the number of step-father AUD registrations that occurred when he was living with them significantly predicted risk for AUD in his step-children, the number of registrations that occurred when not residing with his step-children was unassociated with their AUD risk. In an epidemiological sample, we could meaningfully differentiate between features of AUD in fathers that indexed genetic risk transmitted to biological offspring (early age at onset and recurrence) and features that indexed environmental risk (registrations while rearing), which increased risk in step-children. © 2017 Wiley Periodicals, Inc.

  2. The Brand's PREACH Model: Predicting Readiness to Engage African American Churches in Health.

    Science.gov (United States)

    Brand, Dorine J; Alston, Reginald J

    2017-09-01

    Despite many attempts to reduce health disparities, health professionals face obstacles in improving poor health outcomes within the African American (AA) community. To promote change for improved health measures, it is important to implement culturally tailored programming through a trusted institution, such as the AA church. While churches have the potential to play an important role in positively impacting health among AAs, it is unclear what attributes are necessary to predict success or failure for health promotion within these institutions. The purpose of this study was to create a model, the Brand's PREACH (Predicting Readiness to Engage African American Churches in Health) Model, to predict the readiness of AA churches to engage in health promotion programming. Thirty-six semistructured key informant interviews were conducted with 12 pastors, 12 health leaders, and 12 congregants to gain information on the relationship between church infrastructure (physical structure, personnel, funding, and social/cultural support), readiness, and health promotion programming. The findings revealed that church infrastructure is associated with and will predict the readiness of a church to engage in health promotion programming. The ability to identify readiness early on will be useful for developing, implementing, and evaluating faith-based interventions in partnership with churches, which is a key factor for sustainable and effective programs.

  3. Enhancement of a Turbulence Sub-Model for More Accurate Predictions of Vertical Stratifications in 3D Coastal and Estuarine Modeling

    Directory of Open Access Journals (Sweden)

    Wenrui Huang

    2010-03-01

    This paper presents an improvement of the Mellor-Yamada second-order turbulence model in the Princeton Ocean Model (POM) for better predictions of vertical salinity stratification in estuaries. The model was evaluated in a strongly stratified estuary, the Apalachicola River, Florida, USA. The three-dimensional hydrodynamic model was applied to study the stratified flow and salinity intrusion in the estuary in response to tide, wind, and buoyancy forces. Model tests indicate that model predictions overestimate the stratification when using the default turbulent parameters. Analytic studies of density-induced and wind-induced flows indicate that accurate estimation of vertical eddy viscosity plays an important role in describing vertical profiles. Initial model revision experiments showed that the traditional approach of modifying empirical constants in the turbulence model leads to numerical instability. In order to improve the performance of the turbulence model while maintaining numerical stability, a stratification factor was introduced to allow adjustment of the vertical turbulent eddy viscosity and diffusivity. Sensitivity studies indicate that the stratification factor, ranging from 1.0 to 1.2, does not cause numerical instability in the Apalachicola River. Model simulations show that increasing the turbulent eddy viscosity by a stratification factor of 1.12 results in an optimal agreement between model predictions and observations in the case study presented here. Using the proposed stratification factor provides a useful way for coastal modelers to improve turbulence model performance in predicting vertical turbulent mixing in stratified estuaries and coastal waters.

  4. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  5. Intermediate surface structure between step bunching and step flow in SrRuO3 thin film growth

    Science.gov (United States)

    Bertino, Giulia; Gura, Anna; Dawber, Matthew

    We performed a systematic study of SrRuO3 thin films grown on TiO2-terminated SrTiO3 substrates using off-axis magnetron sputtering. We investigated step bunching formation and the evolution of the SRO film morphology by varying the step size of the substrate, the growth temperature, and the film thickness. The thin films were characterized using Atomic Force Microscopy and X-Ray Diffraction. We identified single and multiple step bunching and step flow growth regimes as a function of the growth parameters. We also clearly observe a stronger influence of the step size of the substrate on the evolution of the SRO film surface with respect to the other growth parameters. Remarkably, we observe the formation of a smooth, regular and uniform "fish skin" structure at the transition between one regime and another. We believe that the fish skin structure results from the merging of 2D flat islands predicted by previous models. The direct observation of this transition structure allows us to better understand how and when step bunching develops in the growth of SrRuO3 thin films.

  6. Spatial prediction of landslide susceptibility using an adaptive neuro-fuzzy inference system combined with frequency ratio, generalized additive model, and support vector machine techniques

    Science.gov (United States)

    Chen, Wei; Pourghasemi, Hamid Reza; Panahi, Mahdi; Kornejady, Aiding; Wang, Jiale; Xie, Xiaoshen; Cao, Shubo

    2017-11-01

    The spatial prediction of landslide susceptibility is an important prerequisite for the analysis of landslide hazards and risks in any area. This research uses three data mining techniques, namely, an adaptive neuro-fuzzy inference system combined with frequency ratio (ANFIS-FR), a generalized additive model (GAM), and a support vector machine (SVM), for landslide susceptibility mapping in Hanyuan County, China. In the first step, in accordance with a review of the previous literature, twelve conditioning factors, including slope aspect, altitude, slope angle, topographic wetness index (TWI), plan curvature, profile curvature, distance to rivers, distance to faults, distance to roads, land use, normalized difference vegetation index (NDVI), and lithology, were selected. In the second step, a collinearity test and correlation analysis between the conditioning factors and landslides were applied. In the third step, we used the three advanced methods, ANFIS-FR, GAM, and SVM, for landslide susceptibility modeling. Subsequently, their accuracy was validated using a receiver operating characteristic curve. The results showed that all three models have good prediction capabilities, with the SVM model having the highest prediction rate of 0.875, followed by the ANFIS-FR and GAM models with prediction rates of 0.851 and 0.846, respectively. Thus, the landslide susceptibility maps produced in the study area can be applied for the management of hazards and risks in landslide-prone Hanyuan County.
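    The frequency ratio (FR) component of the ANFIS-FR approach can be sketched in a few lines; the slope classes and toy raster below are hypothetical, not data from Hanyuan County:

```python
from collections import Counter

def frequency_ratio(classes, landslide):
    """Frequency ratio per conditioning-factor class:
    FR = (share of landslide cells in the class) / (share of all cells
    in the class). FR > 1 marks classes over-represented in landslides."""
    n_cells = len(classes)
    n_slides = sum(landslide)
    cells_per_class = Counter(classes)
    slides_per_class = Counter(c for c, s in zip(classes, landslide) if s)
    return {c: (slides_per_class.get(c, 0) / n_slides)
               / (cells_per_class[c] / n_cells)
            for c in cells_per_class}

# Toy raster: slope class for 8 cells, 1 = landslide observed in that cell.
classes   = ["steep", "steep", "steep", "flat", "flat", "flat", "flat", "flat"]
landslide = [1, 1, 0, 0, 0, 1, 0, 0]
fr = frequency_ratio(classes, landslide)
print(fr)  # the "steep" class comes out over-represented (FR > 1)
```

    In ANFIS-FR, such per-class FR values replace the raw factor classes as inputs to the fuzzy inference system.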

  7. Analysis of deep learning methods for blind protein contact prediction in CASP12.

    Science.gov (United States)

    Wang, Sheng; Sun, Siqi; Xu, Jinbo

    2018-03-01

    Here we present the results of protein contact prediction achieved in CASP12 by our RaptorX-Contact server, which is an early implementation of our deep learning method for contact prediction. On a set of 38 free-modeling target domains with a median family size of around 58 effective sequences, our server obtained an average top L/5 long- and medium-range contact accuracy of 47% and 44%, respectively (L = length). A complete implementation has an average accuracy of 59% and 57%, respectively. Our deep learning method formulates contact prediction as a pixel-level image labeling problem and simultaneously predicts all residue pairs of a protein using a combination of two deep residual neural networks, taking as input the residue conservation information, predicted secondary structure and solvent accessibility, contact potential, and coevolution information. Our approach differs from existing methods mainly in (1) formulating contact prediction as a pixel-level image labeling problem instead of an image-level classification problem; (2) simultaneously predicting all contacts of an individual protein to make effective use of contact occurrence patterns; and (3) integrating both one-dimensional and two-dimensional deep convolutional neural networks to effectively learn complex sequence-structure relationship including high-order residue correlation. This paper discusses the RaptorX-Contact pipeline, both contact prediction and contact-based folding results, and finally the strength and weakness of our method. © 2017 Wiley Periodicals, Inc.

  8. Evaluating prediction uncertainty

    International Nuclear Information System (INIS)

    McKay, M.D.

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables that are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented.
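    The variance-ratio idea can be sketched with plain Monte Carlo sampling (the report uses replicated Latin hypercube sampling; the toy two-input model below is an assumption for illustration): the importance of an input is the fraction of the prediction variance explained by the variance of the conditional mean given that input.

```python
import random
random.seed(0)

def model(x1, x2):
    # Toy prediction model; x1 dominates the output variance.
    return 3.0 * x1 + 0.5 * x2

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

# Unconditional prediction distribution.
N = 2000
samples = [model(random.random(), random.random()) for _ in range(N)]
var_total = variance(samples)

def importance(fix_first):
    """Variance ratio Var(E[Y | Xi]) / Var(Y), estimated by fixing one
    input at sampled values and averaging the model over the other."""
    cond_means = []
    for _ in range(200):
        fixed = random.random()
        runs = [model(fixed, random.random()) if fix_first
                else model(random.random(), fixed) for _ in range(200)]
        cond_means.append(sum(runs) / len(runs))
    return variance(cond_means) / var_total

print(importance(True), importance(False))  # x1's ratio is much larger
```

    No linearity assumption enters the indicator itself; the linear toy model is used here only so the expected ordering is obvious.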

  9. Critical behavior of the quantum spin-1/2 anisotropic Heisenberg model

    Science.gov (United States)

    Sousa, J. Ricardo de

    A two-step renormalization group approach - a decimation followed by an effective field renormalization group (EFRG) - is proposed in this work to study the critical behavior of the quantum spin-1/2 anisotropic Heisenberg model. The new method is illustrated by employing approximations in which clusters with one, two and three spins are used. The values of the critical parameter and critical exponent, in two- and three-dimensional lattices, for the Ising and isotropic Heisenberg limits are calculated and compared with other renormalization group approaches and exact (or series) results.

  10. Model for the prediction of subsurface strata movement due to underground mining

    Science.gov (United States)

    Cheng, Jianwei; Liu, Fangyuan; Li, Siyuan

    2017-12-01

    The problem of ground control stability due to large underground mining operations is often associated with large movements and deformations of strata. It is a complicated problem, and can induce severe safety or environmental hazards either at the surface or in strata. Hence, knowing the subsurface strata movement characteristics, and making any subsidence predictions in advance, are desirable for mining engineers to estimate any damage likely to affect the ground surface or subsurface strata. Based on previous research findings, this paper broadly applies a surface subsidence prediction model based on the influence function method to subsurface strata, in order to predict subsurface stratum movement. A step-wise prediction model is proposed to investigate the movement of underground strata. The model involves a dynamic iteration calculation process to derive the movements and deformations for each stratum layer; modifications to the influence function method are also made for more precise calculations. The critical subsidence parameters, incorporating stratum mechanical properties and the spatial relationship of interest at the mining level, are thoroughly considered, with the purpose of improving the reliability of input parameters. Such research efforts can be very helpful to mining engineers' understanding of the moving behavior of all strata over underground excavations, and assist in making any damage mitigation plan. In order to check the reliability of the model, two methods are carried out and cross-validation applied. One is to use a borehole TV monitor recording to identify the progress of subsurface stratum bedding and caving in a coal mine; the other is to conduct physical modelling of the subsidence in underground strata. The results of these two methods are used to compare with theoretical results calculated by the proposed mathematical model. The testing results agree well with each other, and the acceptable accuracy and reliability of the
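    The influence function method named above can be sketched for a single surface point: total subsidence is the superposition of the influences of discrete extraction elements. The Gaussian (Knothe-type) influence function and all parameter values below are illustrative assumptions, not the paper's calibrated model:

```python
import math

def surface_subsidence(x, panel, s_max, r):
    """Predicted subsidence at surface point x above a fully extracted 1D
    panel [x0, x1], using a Gaussian (Knothe-type) influence function with
    radius of major influence r. s_max is the maximum possible subsidence."""
    x0, x1 = panel
    n = 400  # discretise the panel into extraction elements
    dx = (x1 - x0) / n
    total = 0.0
    for i in range(n):
        xi = x0 + (i + 0.5) * dx
        # influence of one extraction element at xi on the surface point x
        total += (s_max / r) * math.exp(-math.pi * ((x - xi) / r) ** 2) * dx
    return total

# Centre of a wide panel approaches full subsidence; far away it vanishes.
print(round(surface_subsidence(0.0, (-200, 200), 1.0, 50.0), 3))   # ≈ 1.0
print(round(surface_subsidence(500.0, (-200, 200), 1.0, 50.0), 6))  # ≈ 0.0
```

    The paper's step-wise scheme applies this kind of superposition layer by layer, updating the parameters for each stratum; that iteration is omitted here.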

  11. Genetic demixing and evolution in linear stepping stone models

    Science.gov (United States)

    Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.

    2010-04-01

    Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. We also review how the observed patterns of genetic diversity can be used for statistical inference, and we highlight the differences between the well-mixed and one-dimensional models. Although the focus is on two alleles or variants, q-allele Potts-like models of gene segregation are considered as well. Most of the analytical results are checked with simulations and could be tested against recent spatial

  12. Deep Belief Network Based Hybrid Model for Building Energy Consumption Prediction

    Directory of Open Access Journals (Sweden)

    Chengdong Li

    2018-01-01

    To enhance the prediction performance for building energy consumption, this paper presents a modified deep belief network (DBN)-based hybrid model. The proposed hybrid model combines the outputs from the DBN model with the energy-consuming pattern to yield the final prediction results. The energy-consuming pattern in this study represents the periodicity property of building energy consumption and can be extracted from the observed historical energy consumption data. The residual data generated by removing the energy-consuming pattern from the original data are utilized to train the modified DBN model. The training of the modified DBN includes two steps, the first of which adopts the contrastive divergence (CD) algorithm to optimize the hidden parameters in a pre-training manner, while the second determines the output weighting vector by the least squares method. The proposed hybrid model is applied to two kinds of building energy consumption data sets that have different energy-consuming patterns (daily periodicity and weekly periodicity). In order to examine the advantages of the proposed model, four popular artificial intelligence methods—the backward propagation neural network (BPNN), the generalized radial basis function neural network (GRBFNN), the extreme learning machine (ELM), and the support vector regressor (SVR)—are chosen as the comparative approaches. Experimental results demonstrate that the proposed DBN-based hybrid model has the best performance compared with the comparative techniques. Notably, all the predictors constructed by utilizing the energy-consuming patterns perform better than those designed only with the original data. This verifies the usefulness of the incorporation of the energy-consuming patterns.
The proposed approach can also be extended and applied to some other similar prediction problems that have periodicity patterns, e.g., the traffic flow forecasting and the electricity consumption
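    The pattern/residual decomposition step can be sketched as below; the hourly load series is synthetic, and the per-slot mean across cycles is one simple way to extract the periodic pattern (the paper's exact extraction procedure may differ):

```python
def extract_pattern(series, period):
    """Split a periodic energy-consumption series into its repeating
    pattern (per-slot mean across cycles) and the residual that a
    data-driven model such as the DBN would then be trained on."""
    pattern = [0.0] * period
    counts = [0] * period
    for i, v in enumerate(series):
        pattern[i % period] += v
        counts[i % period] += 1
    pattern = [p / c for p, c in zip(pattern, counts)]
    residual = [v - pattern[i % period] for i, v in enumerate(series)]
    return pattern, residual

# Hourly load with a daily (24-slot) cycle plus one small disturbance.
load = [10 + (h % 24) + (0.5 if h == 30 else 0.0) for h in range(96)]
pattern, residual = extract_pattern(load, 24)
# Final prediction = model(residual features) + pattern[hour % 24]
```

    The residual carries only the non-periodic variation, which is exactly the part the learned model has to explain.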

  13. In Silico Modeling of Gastrointestinal Drug Absorption: Predictive Performance of Three Physiologically Based Absorption Models.

    Science.gov (United States)

    Sjögren, Erik; Thörn, Helena; Tannergren, Christer

    2016-06-06

    Gastrointestinal (GI) drug absorption is a complex process determined by formulation, physicochemical and biopharmaceutical factors, and GI physiology. Physiologically based in silico absorption models have emerged as a widely used and promising supplement to traditional in vitro assays and preclinical in vivo studies. However, there remains a lack of comparative studies between different models. The aim of this study was to explore the strengths and limitations of the in silico absorption models Simcyp 13.1, GastroPlus 8.0, and GI-Sim 4.1, with respect to their performance in predicting human intestinal drug absorption. This was achieved by adopting an a priori modeling approach and using well-defined input data for 12 drugs associated with incomplete GI absorption and related challenges in predicting the extent of absorption. This approach better mimics the real situation during formulation development where predictive in silico models would be beneficial. Plasma concentration-time profiles for 44 oral drug administrations were calculated by convolution of model-predicted absorption-time profiles and reported pharmacokinetic parameters. Model performance was evaluated by comparing the predicted plasma concentration-time profiles, Cmax, tmax, and exposure (AUC) with observations from clinical studies. The overall prediction accuracies for AUC, given as the absolute average fold error (AAFE) values, were 2.2, 1.6, and 1.3 for Simcyp, GastroPlus, and GI-Sim, respectively. The corresponding AAFE values for Cmax were 2.2, 1.6, and 1.3, respectively, and those for tmax were 1.7, 1.5, and 1.4, respectively. Simcyp was associated with underprediction of AUC and Cmax; the accuracy decreased with decreasing predicted fabs. A tendency for underprediction was also observed for GastroPlus, but there was no correlation with predicted fabs. There were no obvious trends for over- or underprediction for GI-Sim. 
The models performed similarly in capturing dependencies on dose and

  14. Automotive exhaust gas conversion: from elementary step kinetics to prediction of emission dynamics

    NARCIS (Netherlands)

    Hoebink, J.H.B.J.; Harmsen, J.M.A.; Balenovic, M.; Backx, A.C.P.M.; Schouten, J.C.

    2001-01-01

    Elementary-step-based kinetics show high added value in describing the performance of catalytic exhaust gas converters under dynamic conditions, as demonstrated with a Euro test cycle. Combination of such kinetic models for individual global reactions covers the mutual interactions via common

  15. Improvement of NO and CO predictions for a homogeneous combustion SI engine using a novel emissions model

    International Nuclear Information System (INIS)

    Karvountzis-Kontakiotis, Apostolos; Ntziachristos, Leonidas

    2016-01-01

    Highlights: • Presentation of a novel emissions model to predict pollutant formation in engines. • Model based on detailed chemistry; requires no application-specific calibration. • Combined with 0D and 1D combustion models at low additional computational cost. • Demonstrates accurate prediction of cyclic variability of pollutant emissions. - Abstract: This study proposes a novel emissions model for the prediction of spark ignition (SI) engine emissions at homogeneous combustion conditions, using post-combustion analysis and a detailed chemistry mechanism. The novel emissions model considers an unburned and a burned zone, where the latter is treated as a homogeneous reactor and is modeled using a detailed chemical kinetics mechanism. This allows detailed emission predictions at high speed based practically only on combustion pressure and temperature profiles, without the need for calibration of the model parameters. The predictability of the emissions model is compared against the extended Zeldovich mechanism for NO and a simplified two-step reaction kinetic model for CO, which together constitute the most widespread existing approaches in the literature. Under the various engine load and speed conditions examined, the mean error in NO prediction was 28% for the existing models and less than 1.3% for the new model proposed. The novel emissions model was also used to predict emissions variation due to cyclic combustion variability and demonstrated mean prediction errors of 6% and 3.6% for NO and CO respectively, compared to 36% (NO) and 67% (CO) for the simplified model. The results show that the proposed emissions model offers substantial improvements in prediction without a significant increase in calculation time.

  16. Explicit model predictive control applications in power systems: an AGC study for an isolated industrial system

    DEFF Research Database (Denmark)

    Jiang, Hao; Lin, Jin; Song, Yonghua

    2016-01-01

    Model predictive control (MPC), which can take system constraints into account, is one of the most advanced control technologies in use today. In power systems, MPC is applied in such a way that an optimal control sequence is given at every step by an online MPC controller. The main drawback is that the control law...

  17. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have brought new integrated operations and methods to all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care have likewise been influenced by new technologies for predicting different disease outcomes. However, existing predictive models still suffer from some limitations in the performance of their predictive outcomes. In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To achieve this model performance, this paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its seriousness and its severe impacts on human life. The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed and approved by neurologists to set its features. The experimental results show that the proposed model achieved significant results in accuracy, sensitivity, and specificity.

  18. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Mingheng

    2013-01-01

    Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement of intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multi-step prediction can forecast traffic state trends over a certain period in the future. From the perspective of dynamic decision-making, this is far more important than the current traffic condition alone. Thus, in this paper, an accurate multi-step traffic flow prediction model based on SVM is proposed, in which the input vectors comprise actual traffic volume, and four different types of input vectors were compared to verify their prediction performance against each other. Finally, the model was verified with actual data in the empirical analysis phase, and the test results showed that the proposed SVM model had a good ability for traffic flow prediction and that the SVM-HPT model outperformed the other three models.
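    The recursive flavour of multi-step prediction can be sketched with a simple least-squares AR(1) regressor standing in for the SVM (the data and model below are illustrative assumptions, not the paper's): each one-step prediction is fed back as the input for the next step.

```python
def fit_ar1(series):
    """Least-squares fit of x[t+1] ≈ a * x[t] + b (a stand-in for the
    SVM regressor in the abstract; the multi-step logic is the same)."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def predict_multistep(series, steps):
    """Recursive multi-step prediction: feed each one-step prediction
    back in as the input for the next step."""
    a, b = fit_ar1(series)
    preds, x = [], series[-1]
    for _ in range(steps):
        x = a * x + b
        preds.append(x)
    return preds

# Synthetic traffic volume relaxing toward 100 vehicles/interval.
flow = [50.0]
for _ in range(30):
    flow.append(0.8 * flow[-1] + 20.0)
print([round(p, 1) for p in predict_multistep(flow, 3)])
```

    The alternative "direct" strategy trains a separate model per horizon; the recursive strategy shown here reuses one model but accumulates its errors over the horizon.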

  19. Modeling Stepped Leaders Using a Time Dependent Multi-dipole Model and High-speed Video Data

    Science.gov (United States)

    Karunarathne, S.; Marshall, T.; Stolzenburg, M.; Warner, T. A.; Orville, R. E.

    2012-12-01

    In summer of 2011, we collected lightning data with 10 stations of electric field change meters (bandwidth of 0.16 Hz - 2.6 MHz) on and around NASA/Kennedy Space Center (KSC), covering nearly a 70 km × 100 km area. We also had a high-speed video (HSV) camera recording 50,000 images per second, collocated with one of the electric field change meters. In this presentation we describe our use of these data to model the electric field change caused by stepped leaders. Stepped leaders of a cloud-to-ground lightning flash typically create the initial path for the first return stroke (RS). Most of the time, stepped leaders have multiple complex branches, and one of these branches will create the ground connection for the RS to start. HSV data acquired with a short focal length lens at ranges of 5-25 km from the flash are useful for obtaining the 2-D location of these multiple branches developing at the same time. Using HSV data along with data from the KSC Lightning Detection and Ranging (LDAR2) system and the Cloud to Ground Lightning Surveillance System (CGLSS), the 3D path of a leader may be estimated. Once the path of a stepped leader is obtained, the time-dependent multi-dipole model [Lu, Winn, and Sonnenfeld, JGR 2011] can be used to match the electric field change at various sensor locations. Based on this model, we will present the time-dependent charge distribution along a leader channel and the total charge transfer during the stepped leader phase.

  20. Performance of a Predictive Model for Calculating Ascent Time to a Target Temperature

    Directory of Open Access Journals (Sweden)

    Jin Woo Moon

    2016-12-01

    Full Text Available The aim of this study was to develop an artificial neural network (ANN) prediction model for controlling building heating systems. This model was used to calculate the ascent time of the indoor temperature from the setback period (when a building was not occupied) to a target setpoint temperature (when a building was occupied). The calculated ascent time was applied to determine the proper moment to start raising the temperature from the setback temperature so as to reach the target temperature at the appropriate time. Three major steps were conducted: (1) model development; (2) model optimization; and (3) performance evaluation. Two software programs, Matrix Laboratory (MATLAB) and Transient Systems Simulation (TRNSYS), were used for model development, performance tests, and numerical simulation methods. Correlation analysis between the input variables and the output variable of the ANN model revealed that two input variables (the current indoor air temperature and the temperature difference from the target setpoint temperature) presented relatively strong relationships with the ascent time to the target setpoint temperature. These two variables were used as input neurons. Analyzing the difference between the simulated and predicted values from the ANN model provided the optimal number of hidden neurons (9), hidden layers (3), momentum (0.9), and learning rate (0.9). At the study's conclusion, the optimized model proved its prediction accuracy with acceptable errors.
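
    The structure described above (two input neurons feeding a small hidden layer, trained to regress ascent time) can be sketched as follows. This is a hand-rolled one-hidden-layer network on synthetic, hypothetical data, not the MATLAB/TRNSYS setup of the study; only the hidden-layer width of 9 is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: ascent time (min) grows with the difference
# to the setpoint and shrinks as the current indoor temperature rises.
X = rng.uniform([10.0, 0.0], [22.0, 12.0], size=(200, 2))
y = 5.0 * X[:, 1] - 1.5 * (X[:, 0] - 10.0) + rng.normal(0, 0.5, 200)

# standardise inputs and target for stable training
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()

# one hidden layer of 9 tanh units (the optimised width reported above)
W1 = rng.normal(0, 0.5, (2, 9)); b1 = np.zeros(9)
W2 = rng.normal(0, 0.5, 9);      b2 = 0.0

losses, lr = [], 0.1
for _ in range(1000):
    H = np.tanh(Xs @ W1 + b1)                        # forward pass
    err = H @ W2 + b2 - ys
    losses.append(float(np.mean(err ** 2)))
    gW2 = H.T @ err / len(ys); gb2 = err.mean()      # backpropagation
    dH = np.outer(err, W2) * (1 - H ** 2)
    gW1 = Xs.T @ dH / len(ys); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

    In practice the controller would invert the prediction: given the current temperature and setpoint difference, the predicted ascent time fixes how early preheating must begin.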

  1. Large-scale ligand-based predictive modelling using support vector machines.

    Science.gov (United States)

    Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola

    2016-01-01

    The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse.
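
    The scaling advantage reported above comes from solving the linear SVM in the primal, where one pass over the data is O(n·d). A minimal, hypothetical illustration of that idea is a Pegasos-style stochastic subgradient solver for the hinge loss; this is not the LIBLINEAR algorithm itself, and the data below are random stand-ins for molecular signature descriptors.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=20, seed=0):
    """Pegasos-style stochastic subgradient descent on the primal
    objective  lam/2 ||w||^2 + mean(max(0, 1 - y * w.x)).
    Each epoch costs O(n * d); this linear scaling is what makes linear
    solvers such as LIBLINEAR practical for millions of compounds,
    whereas kernel solvers scale superlinearly in n."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # standard Pegasos step size
            margin = y[i] * (X[i] @ w)
            w *= (1 - eta * lam)           # shrink from the regulariser
            if margin < 1:                 # hinge subgradient active
                w += eta * y[i] * X[i]
    return w

# hypothetical linearly separable data
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 10))
true_w = rng.normal(size=10)
y = np.sign(X @ true_w)
w = train_linear_svm(X, y)
accuracy = float(np.mean(np.sign(X @ w) == y))
```
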

  2. Evaluating Bank Profitability in Ghana: A five step Du-Pont Model Approach

    Directory of Open Access Journals (Sweden)

    Baah Aye Kusi

    2015-09-01

    Full Text Available We investigate bank profitability in Ghana using periods before, during and after the global financial crisis with the five-step DuPont model for the first time. We adapt the variables of the five-step DuPont model to explain bank profitability with panel data on twenty-five banks in Ghana from 2006 to 2012. To ensure meaningful generalization, robust-error fixed and random effects models are used. Our empirical results suggest that bank operating activities (operating profit margin), bank efficiency (asset turnover), bank leverage (asset to equity) and financing cost (interest burden) were positive and significant determinants of bank profitability (ROE) during the period of study, implying that banks in Ghana can boost the return to equity holders through the above-mentioned variables. We further report that the five-step DuPont model better explains the total variation (94%) in bank profitability in Ghana as compared to earlier findings, suggesting that bank-specific variables are key to explaining ROE of banks in Ghana. We cited no empirical study that has employed the five-step DuPont model, making our study unique and different from earlier studies, as we assert that bank-specific variables are core to explaining bank profitability.
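
    The five-step DuPont decomposition named above multiplies five ratios (tax burden, interest burden, operating margin, asset turnover, leverage) back into ROE. A small sketch with hypothetical figures:

```python
def five_step_dupont(net_income, pretax_income, ebit, revenue,
                     total_assets, equity):
    """Five-step DuPont decomposition of return on equity (ROE)."""
    tax_burden       = net_income / pretax_income    # effect of taxes
    interest_burden  = pretax_income / ebit          # financing cost
    operating_margin = ebit / revenue                # operating activities
    asset_turnover   = revenue / total_assets        # efficiency
    leverage         = total_assets / equity         # asset to equity
    roe = (tax_burden * interest_burden * operating_margin
           * asset_turnover * leverage)
    return roe, (tax_burden, interest_burden, operating_margin,
                 asset_turnover, leverage)

# hypothetical bank income statement and balance sheet (GHS millions)
roe, factors = five_step_dupont(net_income=60, pretax_income=80, ebit=100,
                                revenue=400, total_assets=2000, equity=250)
```

    By construction the intermediate terms cancel, so the product of the five factors equals net income over equity; the decomposition's value is in attributing ROE changes to the individual factors.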

  3. Kinetics of protein–ligand unbinding: Predicting pathways, rates, and rate-limiting steps

    Science.gov (United States)

    Tiwary, Pratyush; Limongelli, Vittorio; Salvalaglio, Matteo; Parrinello, Michele

    2015-01-01

    The ability to predict the mechanisms and the associated rate constants of protein–ligand unbinding is of great practical importance in drug design. In this work we demonstrate how a recently introduced metadynamics-based approach allows exploration of the unbinding pathways, estimation of the rates, and determination of the rate-limiting steps in the paradigmatic case of the trypsin–benzamidine system. Protein, ligand, and solvent are described with full atomic resolution. Using metadynamics, multiple unbinding trajectories that start with the ligand in the crystallographic binding pose and end with the ligand in the fully solvated state are generated. The unbinding rate koff is computed from the mean residence time of the ligand. Using our previously computed binding affinity we also obtain the binding rate kon. Both rates are in agreement with reported experimental values. We uncover the complex pathways of unbinding trajectories and describe the critical rate-limiting steps with unprecedented detail. Our findings illuminate the role played by the coupling between subtle protein backbone fluctuations and the solvation by water molecules that enter the binding pocket and assist in the breaking of the shielded hydrogen bonds. We expect our approach to be useful in calculating rates for general protein–ligand systems and a valid support for drug design. PMID:25605901

  4. Lack of motor prediction, rather than perceptual conflict, evokes an odd sensation upon stepping onto a stopped escalator

    Science.gov (United States)

    Gomi, Hiroaki; Sakurada, Takeshi; Fukui, Takao

    2014-01-01

    When stepping onto a stopped escalator, we often perceive an “odd sensation” that is never felt when stepping onto stairs. The sight of an escalator provides a strong contextual cue that, in expectation of the backward acceleration when stepping on, triggers an anticipatory forward postural adjustment driven by a habitual and implicit motor process. Here we contrast two theories about why this postural change leads to an odd sensation. The first theory links the odd sensation to a lack of sensorimotor prediction from all low-level implicit motor processes. The second theory links the odd sensation to the high-level conflict between the conscious awareness that the escalator is stopped and the implicit perception that evokes an endogenous motor program specific to a moving escalator. We show very similar postural changes can also arise from reflexive responses to visual stimuli, such as contracting/expanding optic flow fields, and that these reflexive responses produce similar odd sensations to the stopped escalator. We conclude that the high-level conflict is not necessary for such sensations. In contrast, the implicitly driven behavioral change itself essentially leads to the odd sensation in motor perception since the unintentional change may be less attributable to self-generated action because of a lack of motor predictions. PMID:24688460

  5. Extended Kalman filter (EKF) application in vitamin C two-step fermentation process.

    Science.gov (United States)

    Wei, D; Yuan, W; Yuan, Z; Yin, G; Chen, M

    1993-01-01

    Based on a kinetic model study of vitamin C two-step fermentation, extended Kalman filter (EKF) theory is applied to study the process, which is disturbed to some extent by white noise arising from the model, the fermentation system and operation fluctuations. EKF shows that results calculated from estimated process parameters agree with the experimental results considerably better than model predictions that do not use estimated parameters. Parameter analysis gives a better understanding of the kinetics and provides a basis for state estimation and state prediction.
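
    The predict/update cycle of an EKF on a noisy nonlinear process can be sketched in a few lines. This scalar example uses a logistic-growth state as a hypothetical stand-in for the fermentation kinetics; r, K and the noise variances are illustrative, not from the paper.

```python
import numpy as np

r, K = 0.2, 10.0
Q, R = 0.01, 0.25          # process / measurement noise variances

def f(x):                   # nonlinear state transition (logistic growth)
    return x + r * x * (1 - x / K)

def F_jac(x):               # its Jacobian d f / d x, linearised by the EKF
    return 1 + r * (1 - 2 * x / K)

rng = np.random.default_rng(0)
x_true, x_est, P = 0.5, 0.2, 1.0
errs = []
for _ in range(60):
    # simulate the true process and a noisy measurement
    x_true = f(x_true) + rng.normal(0, np.sqrt(Q))
    z = x_true + rng.normal(0, np.sqrt(R))
    # EKF predict step
    x_pred = f(x_est)
    P_pred = F_jac(x_est) * P * F_jac(x_est) + Q
    # EKF update step (measurement model H = 1)
    K_gain = P_pred / (P_pred + R)
    x_est = x_pred + K_gain * (z - x_pred)
    P = (1 - K_gain) * P_pred
    errs.append(abs(x_est - x_true))
```

    In the fermentation application the state vector would collect concentrations and kinetic parameters, and the Jacobians would come from the kinetic model.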

  6. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling, by nature, is a simplification of the real-world hydrologic system. Ensemble hydrological predictions thus obtained do not present the full range of possible streamflow outcomes, producing ensembles with errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model; however, all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities with reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally generate a more accurate hydrograph than the best of the individual models in simulation mode. This new predictive combined hydrograph is added to the ensemble, thus creating a large ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods (2 weeks, 1 month, 3 months and 6 months) using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been
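
    A common multi-model averaging technique of the kind mentioned above is the Granger-Ramanathan approach: least-squares weights (plus an intercept) map the member simulations onto the observations. The sketch below uses synthetic, hypothetical flows, not the study's models; by construction the combined series cannot have a larger calibration RMSE than any single member.

```python
import numpy as np

# Hypothetical streamflows: three imperfect "lumped models" and the
# observed series they try to reproduce.
rng = np.random.default_rng(2)
obs = 50 + 20 * np.sin(np.linspace(0, 6, 120))
sims = np.column_stack([
    obs + rng.normal(0, 8, 120),          # noisy model
    0.8 * obs + rng.normal(0, 3, 120),    # biased model
    obs + 5 + rng.normal(0, 4, 120),      # offset model
])

# Granger-Ramanathan averaging: least-squares weights and intercept
A = np.hstack([sims, np.ones((len(obs), 1))])
w, *_ = np.linalg.lstsq(A, obs, rcond=None)
combined = A @ w

rmse = lambda q: float(np.sqrt(np.mean((q - obs) ** 2)))
member_rmse = [rmse(sims[:, j]) for j in range(3)]
combined_rmse = rmse(combined)
```
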

  7. A system identification approach for developing model predictive controllers of antibody quality attributes in cell culture processes.

    Science.gov (United States)

    Downey, Brandon; Schmitt, John; Beller, Justin; Russell, Brian; Quach, Anthony; Hermann, Elizabeth; Lyon, David; Breit, Jeffrey

    2017-11-01

    As the biopharmaceutical industry evolves to include more diverse protein formats and processes, more robust control of Critical Quality Attributes (CQAs) is needed to maintain processing flexibility without compromising quality. Active control of CQAs has been demonstrated using model predictive control techniques, which allow development of processes which are robust against disturbances associated with raw material variability and other potentially flexible operating conditions. Wide adoption of model predictive control in biopharmaceutical cell culture processes has been hampered, however, in part due to the large amount of data and expertise required to make a predictive model of controlled CQAs, a requirement for model predictive control. Here we developed a highly automated perfusion apparatus to systematically and efficiently generate predictive models by applying system identification approaches. We successfully created a predictive model of %galactosylation using data obtained by manipulating galactose concentration in the perfusion apparatus in serialized step change experiments. We then demonstrated the use of the model in a model predictive controller in a simulated control scenario to successfully achieve a %galactosylation set point in a simulated fed-batch culture. The automated model identification approach demonstrated here can potentially be generalized to many CQAs, and could be a more efficient, faster, and highly automated alternative to batch experiments for developing predictive models in cell culture processes, and allow the wider adoption of model predictive control in biopharmaceutical processes. © 2017 The Authors. Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers. Biotechnol. Prog., 33:1647-1661, 2017.
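
    The system identification step described above (serialized step changes in an input, then fitting a dynamic model for the controller) can be sketched with a discrete first-order model fitted by least squares. All plant values and noise levels here are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

# Simulate a hypothetical first-order response of %galactosylation to
# serialized step changes in galactose feed (a_true, b_true = "plant").
a_true, b_true = 0.9, 0.5
u = np.concatenate([np.zeros(30), np.ones(40), 2 * np.ones(40),
                    0.5 * np.ones(40)])          # serialized step inputs
rng = np.random.default_rng(3)
y = np.zeros(len(u))
for k in range(len(u) - 1):
    y[k + 1] = a_true * y[k] + b_true * u[k] + rng.normal(0, 0.02)

# System identification: least-squares fit of y[k+1] = a y[k] + b u[k],
# the kind of model a predictive controller would then use.
Phi = np.column_stack([y[:-1], u[:-1]])
(a_hat, b_hat), *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
```

    The serialized steps excite the dynamics at several operating points, which is what makes the regression well-posed; a single step would identify the same two parameters but with less confidence.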

  8. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    Science.gov (United States)

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
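
    At its core, the dependency handling that Luigi-style workflow systems provide is a topological ordering of tasks. The toy graph below (hypothetical task names) shows only that core idea; SciLuigi and Luigi add file targets, parameters, scheduling and fault tolerance on top of it.

```python
from graphlib import TopologicalSorter

# Minimal sketch of workflow dependency resolution: each task maps to
# the set of tasks it depends on.
workflow = {
    "train_model":         {"compute_descriptors", "split_data"},
    "evaluate":            {"train_model"},
    "split_data":          {"load_compounds"},
    "compute_descriptors": {"load_compounds"},
    "load_compounds":      set(),
}

# A valid execution order runs every task after all of its dependencies.
order = list(TopologicalSorter(workflow).static_order())
```
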

  9. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models

    Science.gov (United States)

    Field, Scott E.; Galley, Chad R.; Hesthaven, Jan S.; Kaye, Jason; Tiglio, Manuel

    2014-07-01

    We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + m c_fit) online operations, where c_fit denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^5 M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in generating new waveforms with a
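
    The first offline step, greedy construction of a reduced basis, can be illustrated on a toy family: repeatedly add the worst-approximated family member to the basis until the maximum projection error is small. Damped sinusoids stand in for the waveform family here; this is a sketch of the greedy idea, not the paper's implementation.

```python
import numpy as np

# hypothetical one-parameter "waveform" family: damped sinusoids
t = np.linspace(0, 10, 400)
params = np.linspace(1.0, 3.0, 60)
waveforms = np.array([np.exp(-0.1 * p * t) * np.sin(p * t) for p in params])
W = waveforms / np.linalg.norm(waveforms, axis=1, keepdims=True)

def project_residual(W, basis):
    """Residual of each waveform after projecting onto the (orthonormal)
    basis built so far."""
    if not basis:
        return W.copy()
    B = np.array(basis)               # rows are orthonormal basis vectors
    return W - (W @ B.T) @ B

basis, errors = [], []
for _ in range(8):
    R = project_residual(W, basis)
    norms = np.linalg.norm(R, axis=1)
    worst = int(np.argmax(norms))     # greedy pick: worst-approximated
    errors.append(float(norms[worst]))
    basis.append(R[worst] / norms[worst])   # orthonormalise and add
```

    The recorded maximum projection error is non-increasing by construction; in the paper the greedy sweep is over the physical parameter space of the fiducial waveform family, and the empirical interpolant is then built on the selected basis.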

  10. Regression Model to Predict Global Solar Irradiance in Malaysia

    Directory of Open Access Journals (Sweden)

    Hairuniza Ahmed Kutty

    2015-01-01

    Full Text Available A novel regression model is developed to estimate the monthly global solar irradiance in Malaysia. The model is developed based on different available meteorological parameters, including temperature, cloud cover, rain precipitate, relative humidity, wind speed, pressure, and gust speed, by implementing regression analysis. This paper reports on the details of the analysis of the effect of each prediction parameter to identify the parameters that are relevant to estimating global solar irradiance. In addition, the proposed model is compared in terms of the root mean square error (RMSE), mean bias error (MBE), and the coefficient of determination (R2) with other models available from literature studies. Seven models based on single parameters (PM1 to PM7) and five multiple-parameter models (PM8 to PM12) are proposed. The new models perform well, with RMSE ranging from 0.429% to 1.774%, R2 ranging from 0.942 to 0.992, and MBE ranging from −0.1571% to 0.6025%. In general, cloud cover significantly affects the estimation of global solar irradiance. However, cloud cover in Malaysia lacks sufficient influence when included into multiple-parameter models, although it performs fairly well in single-parameter prediction models.
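
    The three comparison metrics named above are straightforward to compute. The sketch below uses one common convention (normalising RMSE and MBE by the mean observation to obtain percentages; the paper's exact normalisation is not stated in the abstract) and hypothetical irradiance values.

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error, as a percentage of the mean observation."""
    return 100 * np.sqrt(np.mean((pred - obs) ** 2)) / np.mean(obs)

def mbe(pred, obs):
    """Mean bias error (positive = overestimation), in percent."""
    return 100 * np.mean(pred - obs) / np.mean(obs)

def r2(pred, obs):
    """Coefficient of determination."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return 1 - ss_res / ss_tot

# hypothetical monthly global solar irradiance (MJ/m^2/day)
obs  = np.array([16.2, 17.1, 17.8, 17.5, 16.9, 16.0])
pred = np.array([16.0, 17.3, 17.6, 17.7, 16.8, 16.1])
```
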

  11. Improved Finite-Control-Set Model Predictive Control for Cascaded H-Bridge Inverters

    Directory of Open Access Journals (Sweden)

    Roh Chan

    2018-02-01

    Full Text Available In multilevel cascaded H-bridge (CHB) inverters, the number of voltage vectors generated by the inverter quickly increases with increasing voltage level. However, because the sampling period is short, it is difficult to consider all the vectors as the voltage level increases. This paper proposes a model predictive control algorithm with reduced computational complexity and fast dynamic response for CHB inverters. The proposed method presents a robust approach to interpret the next step as a steady or transient state by comparing the optimal voltage vector at the present step with the reference voltage vector at the next step. During steady state, only the optimal vector at the present step and its adjacent vectors are considered as the candidate-vector subset. For the transient state, this paper defines a new candidate-vector subset which consists of more vectors than the steady-state subset, for fast dynamic response, yet fewer than all the possible vectors generated by the CHB inverter, for calculation simplicity. In conclusion, the proposed method reduces the computational complexity without significantly deteriorating the dynamic response.
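
    The candidate-subset idea can be illustrated on a single-phase toy version of finite-control-set MPC: predict one step ahead for each candidate output level and pick the one minimising a tracking cost. All circuit values are hypothetical, and the transient subset is simplified to the full set here (the paper uses an intermediate-sized subset).

```python
import numpy as np

Ts, L, R = 1e-4, 10e-3, 1.0               # sample time, inductance, resistance
levels = np.arange(-4, 5) * 50.0          # 9-level CHB output voltages (V)

def predict_current(i_now, v):
    """One-step Euler prediction of the load current for an RL load."""
    return i_now + Ts / L * (v - R * i_now)

def choose_vector(i_now, i_ref, prev_idx, steady):
    """Pick the output level minimising |i_ref - i_pred| over a subset:
    in steady state only the previous optimum and its neighbours are
    evaluated; in a transient a wider set is searched."""
    if steady:
        cand = range(max(0, prev_idx - 1), min(len(levels), prev_idx + 2))
    else:
        cand = range(len(levels))         # simplification: full set
    costs = {k: abs(i_ref - predict_current(i_now, levels[k])) for k in cand}
    return min(costs, key=costs.get)

# steady state: reference close to the operating point -> 3 evaluations
k_steady = choose_vector(i_now=2.0, i_ref=2.1, prev_idx=4, steady=True)
# transient: large reference step -> wider search needed
k_trans = choose_vector(i_now=2.0, i_ref=8.0, prev_idx=4, steady=False)
```
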

  12. Thermal Phase Variations of WASP-12b: Defying Predictions

    Science.gov (United States)

    Cowan, Nicolas B.; Machalek, Pavel; Croll, Bryce; Shekhtman, Louis M.; Burrows, Adam; Deming, Drake; Greene, Tom; Hora, Joseph L.

    2012-01-01

    We report Warm Spitzer full-orbit phase observations of WASP-12b at 3.6 and 4.5 micrometers. This extremely inflated hot Jupiter is thought to be overflowing its Roche lobe, undergoing mass loss and accretion onto its host star, and has been claimed to have a C/O ratio in excess of unity. We are able to measure the transit depths, eclipse depths, thermal and ellipsoidal phase variations at both wavelengths. The large-amplitude phase variations, combined with the planet's previously measured dayside spectral energy distribution, are indicative of non-zero Bond albedo and very poor day-night heat redistribution. The transit depths in the mid-infrared, (R_p/R_*)^2 = 0.0123(3) and 0.0111(3) at 3.6 and 4.5 micrometers, respectively, indicate that the atmospheric opacity is greater at 3.6 than at 4.5 micrometers, in disagreement with model predictions, irrespective of C/O ratio. The secondary eclipse depths are consistent with previous studies: F_day/F_* = 0.0038(4) and 0.0039(3) at 3.6 and 4.5 micrometers, respectively. We do not detect ellipsoidal variations at 3.6 micrometers, but our parameter uncertainties, estimated via prayer-bead Monte Carlo, keep this non-detection consistent with model predictions. At 4.5 micrometers, on the other hand, we detect ellipsoidal variations that are much stronger than predicted. If interpreted as a geometric effect due to the planet's elongated shape, these variations imply a 3:2 ratio for the planet's longest:shortest axes and a relatively bright day-night terminator. If we instead presume that the 4.5 micrometer ellipsoidal variations are due to uncorrected systematic noise and we fix the amplitude of the variations to zero, the best-fit 4.5 micrometer transit depth becomes commensurate with the 3.6 micrometer depth, within the uncertainties. The relative transit depths are then consistent with a solar composition and short scale height at the terminator. Assuming zero ellipsoidal variations also yields a much

  13. THERMAL PHASE VARIATIONS OF WASP-12b: DEFYING PREDICTIONS

    International Nuclear Information System (INIS)

    Cowan, Nicolas B.; Shekhtman, Louis M.; Machalek, Pavel; Croll, Bryce; Burrows, Adam; Deming, Drake; Greene, Tom; Hora, Joseph L.

    2012-01-01

    We report Warm Spitzer full-orbit phase observations of WASP-12b at 3.6 and 4.5 μm. This extremely inflated hot Jupiter is thought to be overflowing its Roche lobe, undergoing mass loss and accretion onto its host star, and has been claimed to have a C/O ratio in excess of unity. We are able to measure the transit depths, eclipse depths, thermal and ellipsoidal phase variations at both wavelengths. The large-amplitude phase variations, combined with the planet's previously measured dayside spectral energy distribution, are indicative of non-zero Bond albedo and very poor day-night heat redistribution. The transit depths in the mid-infrared—(R_p/R_*)² = 0.0123(3) and 0.0111(3) at 3.6 and 4.5 μm, respectively—indicate that the atmospheric opacity is greater at 3.6 than at 4.5 μm, in disagreement with model predictions, irrespective of C/O ratio. The secondary eclipse depths are consistent with previous studies: F_day/F_* = 0.0038(4) and 0.0039(3) at 3.6 and 4.5 μm, respectively. We do not detect ellipsoidal variations at 3.6 μm, but our parameter uncertainties—estimated via prayer-bead Monte Carlo—keep this non-detection consistent with model predictions. At 4.5 μm, on the other hand, we detect ellipsoidal variations that are much stronger than predicted. If interpreted as a geometric effect due to the planet's elongated shape, these variations imply a 3:2 ratio for the planet's longest:shortest axes and a relatively bright day-night terminator. If we instead presume that the 4.5 μm ellipsoidal variations are due to uncorrected systematic noise and we fix the amplitude of the variations to zero, the best-fit 4.5 μm transit depth becomes commensurate with the 3.6 μm depth, within the uncertainties. The relative transit depths are then consistent with a solar composition and short scale height at the terminator. Assuming zero ellipsoidal variations also yields a much deeper 4.5 μm eclipse depth, consistent with a solar composition and modest

  14. A Step-indexed Semantic Model of Types for the Call-by-Name Lambda Calculus

    OpenAIRE

    Meurer, Benedikt

    2011-01-01

    Step-indexed semantic models of types were proposed as an alternative to purely syntactic safety proofs using subject-reduction. Building upon the work by Appel and others, we introduce a generalized step-indexed model for the call-by-name lambda calculus. We also show how to prove type safety of general recursion in our call-by-name model.

  15. Stochastic models for predicting environmental impact in aquatic ecosystems

    International Nuclear Information System (INIS)

    Stewart-Oaten, A.

    1986-01-01

    The purposes of stochastic predictions are discussed in relation to the environmental impacts of nuclear power plants on aquatic ecosystems. One purpose is to aid in making rational decisions about whether a power plant should be built, where, and how it should be designed. The other purpose is to check on the models themselves in the light of what eventually happens. The author discusses the role of statistical decision theory in the decision-making problem. Various types of stochastic models and their problems are presented. In addition, some suggestions are made for generating usable stochastic models and for checking and improving them. 12 references

  16. Changes in step-width during dual-task walking predicts falls.

    Science.gov (United States)

    Nordin, E; Moe-Nilssen, R; Ramnemark, A; Lundin-Olsson, L

    2010-05-01

    The aim was to evaluate whether gait pattern changes between single- and dual-task conditions were associated with the risk of falling in older people. The dual-task cost (DTC) of 230 community-living, physically independent people, 75 years or older, was determined with an electronic walkway. Participants were followed up each month for 1 year to record falls. Mean and variability measures of gait characteristics for 5 dual-task conditions were compared to single-task walking for each participant. Almost half (48%) of the participants fell at least once during follow-up. The risk of falling increased in individuals whose DTC for performing a subtraction task showed a change in mean step-width compared to single-task walking. The risk of falling decreased in individuals whose DTC for carrying a cup and saucer showed changes compared to single-task walking in mean step-width, mean step-time, and step-length variability. The degree of change in gait characteristics related to a change in the risk of falling differed between measures. Prognostic guidance for fall risk was found for the above DTCs in mean step-width, with a negative likelihood ratio of 0.5 and a positive likelihood ratio of 2.3, respectively. The findings suggest that changes in step-width, step-time, and step-length with dual tasking may be related to future risk of falling. Depending on the nature of the second task, DTC may indicate either an increased risk of falling or a protective strategy to avoid falling. Copyright 2010. Published by Elsevier B.V.
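
    The likelihood ratios reported above are derived from a test's sensitivity and specificity. The sensitivity and specificity values below are hypothetical, chosen only so that the resulting ratios land near the paper's reported LR+ of 2.3 and LR− of 0.5; the study's actual 2×2 counts are not given in the abstract.

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a prognostic test.
    LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# hypothetical screening performance for a dual-task-cost criterion
lr_pos, lr_neg = likelihood_ratios(sensitivity=0.60, specificity=0.74)
```

    An LR+ above 1 raises the post-test probability of falling when the criterion is positive; an LR− below 1 lowers it when the criterion is negative, which is how these DTC measures provide prognostic guidance.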

  17. The response of human thermal sensation and its prediction to temperature step-change (cool-neutral-cool).

    Directory of Open Access Journals (Sweden)

    Xiuyuan Du

    Full Text Available This paper reports on studies of the effect of temperature step-change (between a cool and a neutral environment) on human thermal sensation and skin temperature. Experiments with three temperature conditions were carried out in a climate chamber during the period in winter. Twelve subjects participated in the experiments simulating moving inside and outside of rooms or cabins with air conditioning. Skin temperatures and thermal sensation were recorded. Results showed overshoot and asymmetry of TSV due to the step-change. Skin temperature changed immediately when subjects entered a new environment. When moving into a neutral environment from cool, dynamic thermal sensation was in the thermal comfort zone and overshoot was not obvious. Air-conditioning in a transitional area should be considered to limit temperature difference to not more than 5°C to decrease the unacceptability of temperature step-change. The linear relationship between thermal sensation and skin temperature or gradient of skin temperature does not apply in a step-change environment. There is a significant linear correlation between TSV and Qloss in the transient environment. Heat loss from the human skin surface can be used to predict dynamic thermal sensation instead of the heat transfer of the whole human body.

  18. The Response of Human Thermal Sensation and Its Prediction to Temperature Step-Change (Cool-Neutral-Cool)

    Science.gov (United States)

    Du, Xiuyuan; Li, Baizhan; Liu, Hong; Yang, Dong; Yu, Wei; Liao, Jianke; Huang, Zhichao; Xia, Kechao

    2014-01-01

    This paper reports on studies of the effect of temperature step-change (between a cool and a neutral environment) on human thermal sensation and skin temperature. Experiments with three temperature conditions were carried out in a climate chamber during the period in winter. Twelve subjects participated in the experiments simulating moving inside and outside of rooms or cabins with air conditioning. Skin temperatures and thermal sensation were recorded. Results showed overshoot and asymmetry of TSV due to the step-change. Skin temperature changed immediately when subjects entered a new environment. When moving into a neutral environment from cool, dynamic thermal sensation was in the thermal comfort zone and overshoot was not obvious. Air-conditioning in a transitional area should be considered to limit temperature difference to not more than 5°C to decrease the unacceptability of temperature step-change. The linear relationship between thermal sensation and skin temperature or gradient of skin temperature does not apply in a step-change environment. There is a significant linear correlation between TSV and Qloss in the transient environment. Heat loss from the human skin surface can be used to predict dynamic thermal sensation instead of the heat transfer of the whole human body. PMID:25136808

  19. A multiobjective interval programming model for wind-hydrothermal power system dispatching using 2-step optimization algorithm.

    Science.gov (United States)

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be predicted accurately, and the complex multiobjective scheduling model is inherently nonlinear; therefore, achieving an accurate solution to such a complex problem is a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision.
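
    The second optimization step named above, simulated annealing, can be sketched in a few lines: accept worse moves with probability exp(−Δ/T) and cool the temperature geometrically. The scalar objective below is a hypothetical stand-in for the dispatching cost, not the paper's model.

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, alpha=0.98, iters=2000):
    """Minimal simulated annealing on a scalar decision variable."""
    random.seed(0)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    T = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)     # random neighbour
        cc = cost(cand)
        # always accept improvements; accept worse moves with prob exp(-d/T)
        if cc < c or random.random() < math.exp(-(cc - c) / T):
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = x, c
        T *= alpha                                 # geometric cooling
    return best_x, best_c

# hypothetical scalar surrogate of the dispatching objective
objective = lambda x: (x - 3.0) ** 2 + 2.0
best_x, best_c = simulated_annealing(objective, x0=-5.0)
```

    In the paper this search runs over the feasible region delimited by the first, linear-programming step, with interval-valued data propagated through the objective.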

  20. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    Science.gov (United States)

    Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be predicted accurately, and the complex multiobjective scheduling model is inherently nonlinear; therefore, achieving an accurate solution to such a complex problem is a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663

  1. How to Use the Actor-Partner Interdependence Model (APIM) To Estimate Different Dyadic Patterns in MPLUS: A Step-by-Step Tutorial

    Directory of Open Access Journals (Sweden)

    Fitzpatrick, Josée

    2016-01-01

    Full Text Available Dyadic data analysis with distinguishable dyads assesses the variance not only between dyads, but also within the dyad when members are distinguishable on a known variable. In past research, the Actor-Partner Interdependence Model (APIM) has been the statistical model of choice in order to take into account this interdependence. Although this method has received considerable interest in the past decade, to our knowledge, no specific guide or tutorial exists to describe how to test an APIM model. In order to close this gap, this article will provide researchers with a step-by-step tutorial for assessing the most recent advancements of the APIM with the use of structural equation modeling (SEM). The present tutorial will also utilize the statistical program MPLUS.

  2. Prediction of motivational impairment: 12-month follow-up of the randomized-controlled trial on extended early intervention for first-episode psychosis.

    Science.gov (United States)

    Chang, W C; Kwong, V W Y; Chan, G H K; Jim, O T T; Lau, E S K; Hui, C L M; Chan, S K W; Lee, E H M; Chen, E Y H

    2017-03-01

    Amotivation is prevalent in first-episode psychosis (FEP) patients and is a major determinant of functional outcome. Prediction of amotivation in the early stage of psychosis, however, is under-studied. We aimed to prospectively examine predictors of amotivation in FEP patients in a randomized-controlled trial comparing a 1-year extension of early intervention (Extended EI, 3-year EI) with step-down psychiatric care (SC, 2-year EI). One hundred sixty Chinese patients were recruited from a specialized EI program for FEP in Hong Kong after they had completed this 2-year EI service, randomly allocated to Extended EI or SC, and followed up for 12 months. Assessments on premorbid adjustment, onset profiles, baseline symptom severity and treatment characteristics were conducted. Data analysis was based on 156 subjects who completed follow-up assessments. Amotivation at 12-month follow-up was associated with premorbid adjustment, allocated treatment condition, and levels of positive symptoms, disorganization, amotivation, diminished expression (DE) and depression at study intake. Hierarchical multiple regression analysis revealed that Extended EI and lower levels of DE independently predicted better outcome on 12-month amotivation. Our findings indicate a potentially critical therapeutic role of an extended specialized EI in alleviating motivational impairment in FEP patients. The longer-term effect of Extended EI on amotivation merits further investigation. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  3. Robust human body model injury prediction in simulated side impact crashes.

    Science.gov (United States)

    Golman, Adam J; Danelson, Kerry A; Stitzel, Joel D

    2016-01-01

    This study developed a parametric methodology to robustly predict occupant injuries sustained in real-world crashes using a finite element (FE) human body model (HBM). One hundred and twenty near-side impact motor vehicle crashes were simulated over a range of parameters using Toyota RAV4 (bullet vehicle) and Ford Taurus (struck vehicle) FE models and a validated HBM, the Total HUman Model for Safety (THUMS). Three bullet vehicle crash parameters (speed, location and angle) and two occupant parameters (seat position and age) were varied using a Latin hypercube design of experiments. Four injury metrics (head injury criterion, half deflection, thoracic trauma index and pelvic force) were used to calculate injury risk. Rib fracture prediction and lung strain metrics were also analysed. As hypothesized, bullet speed had the greatest effect on each injury measure. Injury risk was reduced when the bullet location was further from the B-pillar or when the bullet angle was more oblique. Age was strongly correlated with rib fracture frequency and lung strain severity. The injuries from a real-world crash were predicted using two different methods: (1) subsampling the injury predictors from the 12 simulations best matching the crush profile and (2) using regression models. Both injury prediction methods successfully predicted the case occupant's low risk for pelvic injury and high risk for thoracic injury, rib fractures and high lung strains, with tight confidence intervals. This parametric methodology was successfully used to explore crash parameter interactions and to robustly predict real-world injuries.
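
    The study's sampling plan is a Latin hypercube design: each parameter's range is split into as many equal strata as there are runs, and each stratum is sampled exactly once. A stdlib-Python sketch follows; the parameter ranges are assumptions for illustration, not the study's actual bounds.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube design: one random point per stratum per parameter,
    with strata paired randomly across parameters by shuffling."""
    rng = random.Random(seed)
    design = []
    for lo, hi in bounds:
        pts = [lo + (hi - lo) * (k + rng.random()) / n_samples
               for k in range(n_samples)]
        rng.shuffle(pts)
        design.append(pts)
    # Transpose to a list of n_samples parameter vectors.
    return list(zip(*design))

# Hypothetical ranges for the five crash parameters varied in the study.
bounds = [(20, 60),    # bullet speed, km/h (assumed range)
          (-0.3, 0.3), # impact location offset, m (assumed)
          (60, 120),   # impact angle, degrees (assumed)
          (-0.1, 0.1), # seat position offset, m (assumed)
          (20, 80)]    # occupant age, years (assumed)
runs = latin_hypercube(120, bounds)
```

    Each of the 120 rows then defines one FE simulation, guaranteeing the full range of every parameter is covered without a full factorial grid.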

  4. Neural Network Modeling to Predict Shelf Life of Greenhouse Lettuce

    Directory of Open Access Journals (Sweden)

    Wei-Chin Lin

    2009-04-01

    Full Text Available Greenhouse-grown butter lettuce (Lactuca sativa L.) can potentially be stored for 21 days at a constant 0°C. When the storage temperature was increased to 5°C or 10°C, shelf life was shortened to 14 or 10 days, respectively, in our previous observations. Also, a commercial shelf life of 7 to 10 days is common, due to postharvest temperature fluctuations. The objective of this study was to establish neural network (NN) models to predict the remaining shelf life (RSL) under fluctuating postharvest temperatures. A box of 12-24 lettuce heads constituted a sample unit. The end of the shelf life of each head was determined when it showed initial signs of decay or yellowing. Air temperatures inside a shipping box were recorded. Daily average temperatures in storage and the averaged shelf life of each box were used as inputs, and the RSL was modeled as an output. An R2 of 0.57 could be observed when a simple NN structure was employed. Since the "future" (or remaining) storage temperatures were unavailable at the time of making a prediction, a second NN model was introduced to accommodate a range of future temperatures and associated shelf lives. Using such 2-stage NN models, an R2 of 0.61 could be achieved for predicting RSL. This study indicated that NN modeling has potential for cold chain quality control and shelf life prediction.
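
    A toy version of the simple NN structure described above can be written in stdlib Python. The architecture (one tanh hidden layer), encoding, and synthetic training data below are assumptions; the targets only loosely follow the abstract's 21/14/10-day observations at 0/5/10°C.

```python
import math
import random

def train_mlp(data, hidden=4, lr=0.05, epochs=500, seed=3):
    """Minimal one-hidden-layer tanh network trained by stochastic gradient
    descent; a toy stand-in for the paper's NN, not its actual model."""
    rng = random.Random(seed)
    n_in = len(data[0][0])
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
             for row, b in zip(w1, b1)]
        return h, sum(wo * hi for wo, hi in zip(w2, h)) + b2

    def mse():
        return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

    loss0 = mse()
    for _ in range(epochs):
        for x, y in data:
            h, out = forward(x)
            err = out - y
            for j in range(hidden):
                grad_h = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * err * h[j]
                for k in range(n_in):
                    w1[j][k] -= lr * grad_h * x[k]
                b1[j] -= lr * grad_h
            b2 -= lr * err
    return forward, loss0, mse()

# Synthetic samples: (storage temp degC/10, days already stored/10) -> RSL/10.
data = [([0.0, d / 10], (21 - d) / 10) for d in range(0, 21, 3)] + \
       [([0.5, d / 10], (14 - d) / 10) for d in range(0, 15, 3)] + \
       [([1.0, d / 10], (10 - d) / 10) for d in range(0, 10, 3)]
forward, loss_before, loss_after = train_mlp(data)
```

    The real study fed measured daily box temperatures and observed shelf lives; the point here is only the input/output structure of such a model.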

  5. Ehrenfest's theorem and the validity of the two-step model for strong-field ionization

    DEFF Research Database (Denmark)

    Shvetsov-Shilovskiy, Nikolay; Dimitrovski, Darko; Madsen, Lars Bojer

    By comparison with the solution of the time-dependent Schrödinger equation we explore the validity of the two-step semiclassical model for strong-field ionization in elliptically polarized laser pulses. We find that the discrepancy between the two-step model and the quantum theory correlates...

  6. Predicting growth of the healthy infant using a genome scale metabolic model.

    Science.gov (United States)

    Nilsson, Avlant; Mardinoglu, Adil; Nielsen, Jens

    2017-01-01

    An estimated 165 million children globally have stunted growth, and extensive growth data are available. Genome scale metabolic models allow the simulation of molecular flux over each metabolic enzyme, and are well adapted to analyze biological systems. We used a human genome scale metabolic model to simulate the mechanisms of growth and integrate data about breast-milk intake and composition with the infant's biomass and energy expenditure of major organs. The model predicted daily metabolic fluxes from birth to age 6 months, and accurately reproduced standard growth curves and changes in body composition. The model corroborates the finding that essential amino and fatty acids do not limit growth, but that energy is the main growth-limiting factor. Disruptions to the supply and demand of energy markedly affected the predicted growth, indicating that elevated energy expenditure may be detrimental. The model was used to simulate the metabolic effect of mineral deficiencies, and showed the greatest growth reduction for deficiencies in copper, iron, and magnesium ions, which affect energy production through oxidative phosphorylation. The model and simulation method were integrated into a platform and shared with the research community. The growth model constitutes another step towards the complete representation of human metabolism, and may further help improve the understanding of the mechanisms underlying stunting.

  7. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers’ training events and are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications for linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure obtained by eliminating some of the predictors.
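
    The core idea above, appending quadratic terms and letting an L1 penalty prune predictors, can be sketched in stdlib Python with plain coordinate descent. The toy data, penalty value, and lack of the paper's leave-one-out tuning are all simplifying assumptions.

```python
def quadratic_expand(X):
    """Append squared terms to each predictor row (the nonlinear part)."""
    return [row + [v * v for v in row] for row in X]

def soft_threshold(z, g):
    return (z - g) if z > g else (z + g) if z < -g else 0.0

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent LASSO for (1/2)||y - Xw||^2 + lam*||w||_1
    (no intercept; assumes suitably scaled data)."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Partial residual excluding feature j.
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            norm = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / norm if norm else 0.0
    return w

# Toy example: the response depends on x0 and x1**2; the quadratic
# expansion lets the LASSO recover the squared term.
X = [[1.0, 0.5], [2.0, -1.0], [3.0, 1.5], [4.0, -2.0], [5.0, 0.0], [6.0, 2.5]]
y = [x0 + 2 * x1 ** 2 for x0, x1 in X]
Xq = quadratic_expand(X)          # columns: x0, x1, x0**2, x1**2
w = lasso_cd(Xq, y, lam=0.01)
```

    With a larger penalty, coefficients of irrelevant columns are driven exactly to zero, which is the structure simplification the paper exploits.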

  8. NRFixer: Sentiment Based Model for Predicting the Fixability of Non-Reproducible Bugs

    Directory of Open Access Journals (Sweden)

    Anjali Goyal

    2017-08-01

    Full Text Available Software maintenance is an essential step in the software development life cycle. Nowadays, software companies spend approximately 45% of the total cost on maintenance activities. Large software projects maintain bug repositories to collect, organize and resolve bug reports. Sometimes it is difficult to reproduce the reported bug with the information present in a bug report, and such a bug is marked with the resolution non-reproducible (NR). When NR bugs are reconsidered, a few of them might get fixed (NR-to-fix), leaving the others with the same resolution (NR). To analyse the behaviour of developers towards NR-to-fix and NR bugs, a sentiment analysis of NR bug report textual contents was conducted. The sentiment analysis of bug reports shows that NR bugs' sentiments incline towards more negativity than those of reproducible bugs. Also, there is a noticeable opinion drift in the sentiments of NR-to-fix bug reports. Observations drawn from this analysis were an inspiration to develop a model that can judge the fixability of NR bugs. Thus a framework, NRFixer, which predicts the probability of NR bug fixation, is proposed. NRFixer was evaluated in two dimensions. The first dimension considers meta-fields of bug reports (model-1) and the other dimension additionally incorporates the sentiments (model-2) of developers for prediction. Both models were compared using various machine learning classifiers (Zero-R, naive Bayes, J48, random tree and random forest). The bug reports of the Firefox and Eclipse projects were used to test NRFixer. In the Firefox and Eclipse projects, the J48 and naive Bayes classifiers achieve the best prediction accuracy, respectively. It was observed that the inclusion of sentiments in the prediction model yields a rise in prediction accuracy ranging from 2% to 5% for the various classifiers.

  9. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
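
    Two of the three performance measures used above have simple definitions that can be computed directly. A stdlib-Python sketch follows; the outcomes and the two sets of predicted probabilities are invented toy values, not the study's data.

```python
def concordance_index(y, p):
    """Fraction of all (event, non-event) pairs in which the event received
    the higher predicted probability; ties count one half."""
    pairs = conc = 0.0
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] == 1 and y[j] == 0:
                pairs += 1
                conc += 1.0 if p[i] > p[j] else 0.5 if p[i] == p[j] else 0.0
    return conc / pairs

def brier_score(y, p):
    """Mean squared difference between binary outcome and predicted probability."""
    return sum((yi - pi) ** 2 for yi, pi in zip(y, p)) / len(y)

# Toy outcomes and predictions from two hypothetical models.
y      = [1, 1, 0, 0, 1, 0]
full   = [0.9, 0.7, 0.2, 0.3, 0.8, 0.1]   # full covariate adjustment
pscore = [0.6, 0.5, 0.4, 0.5, 0.7, 0.3]   # propensity-score based
```

    Higher concordance and lower Brier score indicate better discrimination and accuracy, which is the direction of the comparison reported in the abstract.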

  10. Stochastic rainfall-runoff forecasting: parameter estimation, multi-step prediction, and evaluation of overflow risk

    DEFF Research Database (Denmark)

    Löwe, Roland; Mikkelsen, Peter Steen; Madsen, Henrik

    2014-01-01

    Probabilistic runoff forecasts generated by stochastic greybox models can be notably useful for the improvement of the decision-making process in real-time control setups for urban drainage systems because the prediction risk relationships in these systems are often highly nonlinear. To date...... the identification of models for cases with noisy in-sewer observations. For the prediction of the overflow risk, no improvement was demonstrated through the application of stochastic forecasts instead of point predictions, although this result is thought to be caused by the notably simplified setup used...

  11. A global high-resolution model experiment on the predictability of the atmosphere

    Science.gov (United States)

    Judt, F.

    2016-12-01

    Forecasting high-impact weather phenomena is one of the most important aspects of numerical weather prediction (NWP). Over the last couple of years, a tremendous increase in computing power has facilitated the advent of global convection-resolving NWP models, which allow for the seamless prediction of weather from local to planetary scales. Unfortunately, the predictability of specific meteorological phenomena in these models is not very well known. This raises questions about which forecast problems are potentially tractable, and what value global convection-resolving model predictions hold for the end user. To address this issue, we use the Yellowstone supercomputer to conduct a global high-resolution predictability experiment with the recently developed Model for Prediction Across Scales (MPAS). The computing power of Yellowstone enables the model to run at a globally uniform resolution of 4 km with 55 vertical levels (>2 billion grid cells). These simulations, which require 3 million core-hours for the entire experiment, allow for the explicit treatment of organized deep moist convection (i.e., thunderstorm systems). Resolving organized deep moist convection alleviates grave limitations of previous predictability studies, which either used high-resolution limited-area models or global simulations with coarser grids and cumulus parameterization. By computing the error growth characteristics in a set of "identical twin" model runs, the experiment will clarify the intrinsic predictability limits of atmospheric phenomena on a wide range of scales, from severe thunderstorms to global-scale wind patterns that affect the distribution of tropical rainfall. Although a major task by itself, this study is intended to be exploratory work for a future predictability experiment going beyond what has so far been feasible. We hope to use CISL's new Cheyenne supercomputer to conduct a similar predictability experiment on a global mesh with 1-2 km resolution.

  12. The Predictive Effect of Big Five Factor Model on Social Reactivity ...

    African Journals Online (AJOL)

    The study tested a model of providing a predictive explanation of Big Five Factor on social reactivity among secondary school adolescents of Cross River State, Nigeria. A sample of 200 students randomly selected across 12 public secondary schools in the State participated in the study (120 male and 80 female). Data ...

  13. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
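
    The model-free reward prediction error described above is the delta term of temporal-difference learning. A minimal stdlib-Python sketch under deliberately simple assumptions (a single state, constant reward, learning rate 0.1; this is not the authors' EEG analysis):

```python
def td_prediction_errors(rewards, alpha=0.1, v0=0.0):
    """Model-free value learning: V is updated only by the reward prediction
    error delta = r - V, with no knowledge of task structure."""
    v, deltas = v0, []
    for r in rewards:
        delta = r - v          # reward prediction error
        deltas.append(delta)
        v += alpha * delta     # value update
    return v, deltas

# A block of constant reward 1.0: prediction errors shrink as V converges.
v, deltas = td_prediction_errors([1.0] * 50)
```

    A model-based learner would instead compute expected value from an explicit model of state transitions; the EEG contrast in the study is between error signals tied to these two value estimates.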

  14. Low-lying 1/2- hidden strange pentaquark states in the constituent quark model

    Science.gov (United States)

    Li, Hui; Wu, Zong-Xiu; An, Chun-Sheng; Chen, Hong

    2017-12-01

    We investigate the spectrum of the low-lying 1/2- hidden strange pentaquark states, employing the constituent quark model, and looking at two ways within that model of mediating the hyperfine interaction between quarks - Goldstone boson exchange and one gluon exchange. Numerical results show that the lowest 1/2- hidden strange pentaquark state in the Goldstone boson exchange model lies at ~1570 MeV, so this pentaquark configuration may form a notable component in S11(1535) if the Goldstone boson exchange model is applied. This is consistent with the prediction that S11(1535) couples very strongly to strangeness channels. Supported by National Natural Science Foundation of China (11675131, 11645002), Chongqing Natural Science Foundation (cstc2015jcyjA00032) and Fundamental Research Funds for the Central Universities (SWU115020)

  15. Performance prediction method for a multi-stage Knudsen pump

    Science.gov (United States)

    Kugimoto, K.; Hirota, Y.; Kizaki, Y.; Yamaguchi, H.; Niimi, T.

    2017-12-01

    In this study, the novel method to predict the performance of a multi-stage Knudsen pump is proposed. The performance prediction method is carried out in two steps numerically with the assistance of a simple experimental result. In the first step, the performance of a single-stage Knudsen pump was measured experimentally under various pressure conditions, and the relationship of the mass flow rate was obtained with respect to the average pressure between the inlet and outlet of the pump and the pressure difference between them. In the second step, the performance of a multi-stage pump was analyzed by a one-dimensional model derived from the mass conservation law. The performances predicted by the 1D-model of 1-stage, 2-stage, 3-stage, and 4-stage pumps were validated by the experimental results for the corresponding number of stages. It was concluded that the proposed prediction method works properly.
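
    The two-step method above can be sketched in stdlib Python: step one fits a single-stage flow model mdot(p_avg, dp) from measurements, and step two marches that model through the stages, since mass conservation forces the same flow through each one. The linear flow model and its coefficients below are made-up placeholders for the measured relationship.

```python
def stage_flow(p_in, p_out, c=(1.0, 0.01, 0.5)):
    """Hypothetical single-stage model fitted to measurements:
    mdot = c0 + c1*p_avg - c2*dp (all coefficients are invented)."""
    c0, c1, c2 = c
    return c0 + c1 * (p_in + p_out) / 2.0 - c2 * (p_out - p_in)

def propagate(mdot, p_in, n_stages, c=(1.0, 0.01, 0.5)):
    """March stage by stage: a common mdot fixes each intermediate pressure."""
    c0, c1, c2 = c
    p = p_in
    for _ in range(n_stages):
        # Solve stage_flow(p, p_next) = mdot for p_next.
        p = (mdot - c0 - p * (c1 / 2.0 + c2)) / (c1 / 2.0 - c2)
    return p  # outlet pressure reached with this mdot

def multistage_flow(p_in, p_out, n_stages, lo=0.0, hi=2.0):
    """Bisect on mdot until the propagated outlet pressure matches p_out
    (assumes the bracket [lo, hi] straddles the solution)."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if propagate(mid, p_in, n_stages) > p_out:
            lo = mid   # pump delivers more rise than needed: flow can grow
        else:
            hi = mid
    return (lo + hi) / 2.0

# 4-stage pump working from 100 kPa up to 104 kPa (illustrative numbers).
mdot_4 = multistage_flow(100.0, 104.0, 4)
```

    The paper validated exactly this kind of 1-stage-to-N-stage extrapolation against measurements for 1 to 4 stages; here the "measurement" is the invented linear model.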

  16. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    Science.gov (United States)

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercepts of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe) including the random intercepts of the lines with the GK method had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. PMID:29476023
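
    The two kernels being compared can be constructed directly from a marker matrix. A stdlib-Python sketch, using the median-scaled Gaussian kernel common in this literature; the tiny marker matrix and bandwidth h are illustrative, not the study's data.

```python
import math

def linear_kernel(X):
    """GBLUP-style genomic relationship: K = X X' / p for centered markers."""
    p = len(X[0])
    return [[sum(a * b for a, b in zip(xi, xj)) / p for xj in X] for xi in X]

def gaussian_kernel(X, h=1.0):
    """GK: K_ij = exp(-h * d_ij^2 / median(d^2)), with d_ij the Euclidean
    distance between marker profiles of lines i and j."""
    d2 = [[sum((a - b) ** 2 for a, b in zip(xi, xj)) for xj in X] for xi in X]
    flat = sorted(v for i, row in enumerate(d2)
                  for j, v in enumerate(row) if i < j)
    med = flat[len(flat) // 2]
    return [[math.exp(-h * v / med) for v in row] for row in d2]

# Toy centered marker matrix for three lines.
X = [[1.0, -1.0, 0.0], [0.0, 1.0, -1.0], [1.0, 0.0, 1.0]]
G = linear_kernel(X)
K = gaussian_kernel(X)
```

    The linear kernel captures additive relationships, while the Gaussian kernel's exponential decay can pick up more complex similarity patterns, which is one reason GK outperformed GB in parts of the study.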

  17. Predicting the 6-month risk of severe hypoglycemia among adults with diabetes: Development and external validation of a prediction model.

    Science.gov (United States)

    Schroeder, Emily B; Xu, Stan; Goodrich, Glenn K; Nichols, Gregory A; O'Connor, Patrick J; Steiner, John F

    2017-07-01

    To develop and externally validate a prediction model for the 6-month risk of a severe hypoglycemic event among individuals with pharmacologically treated diabetes. The development cohort consisted of 31,674 Kaiser Permanente Colorado members with pharmacologically treated diabetes (2007-2015). The validation cohorts consisted of 38,764 Kaiser Permanente Northwest members and 12,035 HealthPartners members. Variables were chosen that would be available in electronic health records. We developed 16-variable and 6-variable models, using a Cox counting model process that allows for the inclusion of multiple 6-month observation periods per person. Across the three cohorts, there were 850,992 6-month observation periods, and 10,448 periods with at least one severe hypoglycemic event. The six-variable model contained age, diabetes type, HgbA1c, eGFR, history of a hypoglycemic event in the prior year, and insulin use. Both prediction models performed well, with good calibration and c-statistics of 0.84 and 0.81 for the 16-variable and 6-variable models, respectively. In the external validation cohorts, the c-statistics were 0.80-0.84. We developed and validated two models for predicting the 6-month risk of severe hypoglycemia. The 16-variable model had slightly better performance than the 6-variable model, but in some practice settings, use of the simpler model may be preferred. Copyright © 2017 Elsevier Inc. All rights reserved.
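
    A Cox model like the six-variable one above converts a patient's covariates into a horizon risk via 1 - S0(t) ** exp(linear predictor). The sketch below uses that standard formula, but every coefficient and the baseline survival value are invented placeholders; the published model's actual numbers are not reproduced here.

```python
import math

# Hypothetical coefficients for a six-variable model of this shape.
BETA = {
    "age_per_10y": 0.10,
    "type1_diabetes": 0.60,
    "hba1c_per_pct": -0.05,
    "egfr_under_60": 0.40,
    "prior_hypoglycemia": 1.50,
    "insulin_use": 0.90,
}
BASELINE_SURVIVAL_6MO = 0.995  # assumed S0 at 6 months

def six_month_risk(x):
    """Cox-model risk at the 6-month horizon: 1 - S0 ** exp(lp)."""
    lp = sum(BETA[k] * x[k] for k in BETA)
    return 1.0 - BASELINE_SURVIVAL_6MO ** math.exp(lp)

low  = {"age_per_10y": 5, "type1_diabetes": 0, "hba1c_per_pct": 7,
        "egfr_under_60": 0, "prior_hypoglycemia": 0, "insulin_use": 0}
high = {"age_per_10y": 8, "type1_diabetes": 1, "hba1c_per_pct": 7,
        "egfr_under_60": 1, "prior_hypoglycemia": 1, "insulin_use": 1}
```

    With any reasonable coefficients, a prior hypoglycemic event and insulin use dominate the score, which matches their inclusion in the published short model.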

  18. Updating the CHAOS series of field models using Swarm data and resulting candidate models for IGRF-12

    DEFF Research Database (Denmark)

    Finlay, Chris; Olsen, Nils; Tøffner-Clausen, Lars

    th order spline representation with knot points spaced at 0.5 year intervals. The resulting field model is able to consistently fit data from six independent low Earth orbit satellites: Oersted, CHAMP, SAC-C and the three Swarm satellites. As an example, we present comparisons of the excellent model...... therefore conclude that Swarm data is suitable for building high-resolution models of the large-scale internal field, and proceed to extract IGRF-12 candidate models for the main field in epochs 2010 and 2015, as well as the predicted linear secular variarion for 2015-2020. The properties of these IGRF...... candidate models are briefly presented....

  19. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    Science.gov (United States)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and
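
    The distribution-function idea above has a compact closed form under one convenient assumption: if within-day rain intensity (while raining) follows an exponential distribution with mean equal to the day's mean intensity, then the expected infiltration-excess rate is E[max(i - f_c, 0)] = mean * exp(-f_c / mean). The stdlib-Python sketch below contrasts this with a lumped daily model; the exponential cdf is an illustrative choice, not necessarily the authors'.

```python
import math

def runoff_daily_lumped(p_daily, rain_hours, f_c):
    """Daily time-step model with no sub-daily variability: runoff occurs
    only if the day's mean intensity exceeds the infiltration capacity f_c."""
    mean_i = p_daily / rain_hours
    return max(mean_i - f_c, 0.0) * rain_hours

def runoff_cdf_scaled(p_daily, rain_hours, f_c):
    """Same daily model, but sub-daily intensity represented by an assumed
    exponential cdf; E[max(i - f_c, 0)] = mean * exp(-f_c / mean)."""
    mean_i = p_daily / rain_hours
    return mean_i * math.exp(-f_c / mean_i) * rain_hours

# 24 mm falling over 12 h (mean 2 mm/h) against a 4 mm/h infiltration
# capacity: the lumped model produces no runoff at all, while the
# cdf-scaled model captures the runoff from short high-intensity bursts.
```

    This is exactly the nonlinearity the abstract describes: peak intensities, not daily means, drive infiltration-excess runoff, so temporal lumping without a distribution function systematically underpredicts it.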

  20. Identifying elderly people at risk for cognitive decline by using the 2-step test.

    Science.gov (United States)

    Maruya, Kohei; Fujita, Hiroaki; Arai, Tomoyuki; Hosoi, Toshiki; Ogiwara, Kennichi; Moriyama, Shunnichiro; Ishibashi, Hideaki

    2018-01-01

    [Purpose] The purpose was to verify the effectiveness of the 2-step test in predicting cognitive decline in elderly individuals. [Subjects and Methods] One hundred eighty-two participants aged over 65 years underwent the 2-step test, cognitive function tests and higher-level competence testing. Participants were classified as Robust or into 2-step test risk stages, and variables were compared between the groups. In addition, ordered logistic analysis was used to analyze cognitive functions as independent variables in the three groups, using the 2-step test results as the dependent variable, with age, gender, etc. as adjustment factors. [Results] In the crude data, the 2-step test was related to the Stroop test (β: 0.06, 95% confidence interval: 0.01-0.12). [Conclusion] The finding is that the risk stage of the 2-step test is related to cognitive functions, even at an initial risk stage. The 2-step test may help with earlier detection and implementation of prevention measures for locomotive syndrome and mild cognitive impairment.

  1. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  2. Comparison of Different Turbulence Models for Numerical Simulation of Pressure Distribution in V-Shaped Stepped Spillway

    Directory of Open Access Journals (Sweden)

    Zhaoliang Bai

    2017-01-01

    Full Text Available The V-shaped stepped spillway is a newly shaped stepped spillway, and its pressure distribution is quite different from that of the traditional stepped spillway. In this paper, five turbulence models were used to simulate the pressure distribution in the skimming flow regime. Through comparison with the physical model values, the realizable k-ε model showed better precision in simulating the pressure distribution. The flow patterns of the V-shaped and traditional stepped spillways, computed with the realizable k-ε turbulence model, are then presented to illustrate the unique pressure distribution.

  3. Motivational intervention to enhance post-detoxification 12-Step group affiliation: a randomized controlled trial.

    Science.gov (United States)

    Vederhus, John-Kåre; Timko, Christine; Kristensen, Oistein; Hjemdahl, Bente; Clausen, Thomas

    2014-05-01

    To compare a motivational intervention (MI) focused on increasing involvement in 12-Step groups (TSGs; e.g. Alcoholics Anonymous) versus brief advice (BA) to attend TSGs. Patients were assigned randomly to either the MI or BA condition, and followed-up at 6 months after discharge. One hundred and forty substance use disorder (SUD) patients undergoing in-patient detoxification (detox) in Norway. The primary outcome was TSG affiliation measured with the Alcoholics Anonymous Affiliation Scale (AAAS), which combines meeting attendance and TSG involvement. Substance use and problem severity were also measured. At 6 months after treatment, compared with the BA group, the MI group had higher TSG affiliation [0.91 point higher AAAS score; 95% confidence interval (CI) = 0.04 to 1.78; P = 0.041]. The MI group reported 3.5 fewer days of alcohol use (2.1 versus 5.6 days; 95% CI = -6.5 to -0.6; P = 0.020) and 4.0 fewer days of drug use (3.8 versus 7.8 days; 95% CI = -7.5 to -0.4; P = 0.028); however, abstinence rates and severity scores did not differ between conditions. Analyses controlling for duration of in-patient treatment did not alter the results. A motivational intervention in an in-patient detox ward was more successful than brief advice in terms of patient engagement in 12-Step groups and reduced substance use at 6 months after discharge. There is a potential benefit of adding a maintenance-focused element to standard detox. © 2014 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.

  4. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
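
The contrast between the model classes can be made concrete with a toy calculation. The sketch below (Python; the catalog values and loading rate are entirely hypothetical, not data from the study) computes the next-event time predicted by the time-predictable rule and by the fixed-recurrence rule:

```python
import numpy as np

# Hypothetical repeating-earthquake catalog: event times (years) and slips (cm)
times = np.array([0.0, 2.1, 4.0, 6.2, 8.1, 10.2])
slips = np.array([1.0, 1.1, 0.9, 1.2, 1.0, 1.1])
loading_rate = 0.5  # assumed slip-deficit accumulation rate (cm/year)

def time_predictable(t_last, slip_last, rate):
    # Elastic rebound: a larger last slip implies a longer wait for reloading.
    return t_last + slip_last / rate

def fixed_recurrence(times):
    # Next inter-event time equals the mean of the observed intervals.
    return times[-1] + np.diff(times).mean()

print(round(time_predictable(times[-1], slips[-1], loading_rate), 2))  # → 12.4
print(round(fixed_recurrence(times), 2))  # → 12.24
```

The finding above is that rules of the second kind, which ignore the preceding slip, fit the observed sequences better than the elastic-rebound-based rules of the first kind.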

  5. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  6. Predicting musically induced emotions from physiological inputs: linear and neural network models.

    Science.gov (United States)

    Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

    2013-01-01

Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants: heart rate (HR), respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
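
The linear-regression step can be sketched in a few lines. The data below are synthetic stand-ins (random "features" and an arousal-like target built to be mostly linear in them), not the study's measurements, and plain least squares replaces the full pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 12 excerpts x 5 physiological features (HR, respiration,
# GSR, corrugator, zygomaticus); arousal is constructed as linear plus noise.
X = rng.normal(size=(12, 5))
true_w = np.array([0.8, 0.2, 0.5, -0.4, 0.6])
arousal = X @ true_w + 0.05 * rng.normal(size=12)

train, test = slice(0, 8), slice(8, 12)   # train on 8 excerpts, test on 4

def fit_linear(X, y):
    Xb = np.column_stack([X, np.ones(len(X))])   # add an intercept column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    return np.column_stack([X, np.ones(len(X))]) @ w

w = fit_linear(X[train], arousal[train])
r = np.corrcoef(predict(w, X[test]), arousal[test])[0, 1]
```

A valence-like target that depends non-linearly on the features is exactly the case where this linear step fails and the study's neural network model helps.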

  7. Predicting musically induced emotions from physiological inputs: Linear and neural network models

    Directory of Open Access Journals (Sweden)

    Frank A. Russo

    2013-08-01

Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

  8. Minimal see-saw model predicting best fit lepton mixing angles

    International Nuclear Information System (INIS)

    King, Stephen F.

    2013-01-01

We discuss a minimal predictive see-saw model in which the right-handed neutrino mainly responsible for the atmospheric neutrino mass has couplings to (νe, νμ, ντ) proportional to (0, 1, 1) and the right-handed neutrino mainly responsible for the solar neutrino mass has couplings to (νe, νμ, ντ) proportional to (1, 4, 2), with a relative phase η = −2π/5. We show how these patterns of couplings could arise from an A4 family symmetry model of leptons, together with Z3 and Z5 symmetries which fix η = −2π/5 up to a discrete phase choice. The PMNS matrix is then completely determined by one remaining parameter, which is used to fix the neutrino mass ratio m2/m3. The model predicts the lepton mixing angles θ12 ≈ 34°, θ23 ≈ 41°, θ13 ≈ 9.5°, which exactly coincide with the current best-fit values for a normal neutrino mass hierarchy, together with the distinctive prediction for the CP-violating oscillation phase δ ≈ 106°.
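
Schematically, and up to sign and normalization conventions (the placement of the phase on the solar column is an illustrative choice, not a statement of the paper's exact basis), the two-right-handed-neutrino see-saw structure described above is:

```latex
m_D =
\begin{pmatrix}
  b\,e^{i\eta}  & 0 \\
  4b\,e^{i\eta} & a \\
  2b\,e^{i\eta} & a
\end{pmatrix},
\qquad
M_R = \operatorname{diag}\!\left(M_{\mathrm{sol}},\, M_{\mathrm{atm}}\right),
\qquad
m_\nu = - m_D\, M_R^{-1}\, m_D^{\mathsf{T}},
\qquad
\eta = -\frac{2\pi}{5},
```

with real constants a and b; the one remaining free combination of these parameters is what the abstract uses to fix the mass ratio m2/m3.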

  9. Development of ANN Model for Wind Speed Prediction as a Support for Early Warning System

    Directory of Open Access Journals (Sweden)

    Ivan Marović

    2017-01-01

Full Text Available The impact of natural disasters increases every year, with more casualties and damage to property and the environment. It is therefore important to mitigate consequences by implementing early warning systems (EWS) that announce the possible occurrence of harmful phenomena. In this paper, focus is placed on implementing an EWS at the micro-location in order to announce the possible occurrence of harmful phenomena caused by wind. To predict such phenomena (wind speed), an artificial neural network (ANN) prediction model is developed. The model is built from input data obtained by the local meteorological station on the University of Rijeka campus in the Republic of Croatia. The prediction model is validated and evaluated both visually and by standard numerical criteria, showing that very good wind speed predictions can be obtained for time steps Δt=1 h, Δt=3 h, and Δt=8 h. The developed model is implemented in the EWS as decision support for improving the existing “procedure plan in a case of the emergency caused by stormy wind or hurricane, snow and occurrence of the ice on the University of Rijeka campus.”

  10. Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction

    Science.gov (United States)

    Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.

    2012-12-01

    The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which is comprised of over 20 reservoirs and supplies over 1 billion gallons of water per day to more than 9 million customers. DEP's "West of Hudson" reservoirs located in the Catskill Mountains are unfiltered per a renewable filtration avoidance determination granted by the EPA. While water quality is usually pristine, high volume storm events occasionally cause the reservoirs to become highly turbid. A logical strategy for turbidity control is to temporarily remove the turbid reservoirs from service. While effective in limiting delivery of turbid water and reducing the need for in-reservoir alum flocculation, this strategy runs the risk of negatively impacting water supply reliability. Thus, it is advantageous for DEP to understand how long a particular turbidity event will affect their system. In order to understand the duration, intensity and total load of a turbidity event, predictions of future in-stream turbidity values are important. Traditionally, turbidity predictions have been carried out by applying streamflow observations/forecasts to a flow-turbidity rating curve. However, predictions from rating curves are often inaccurate due to inter- and intra-event variability in flow-turbidity relationships. Predictions can be improved by applying an autoregressive moving average (ARMA) time series model in combination with a traditional rating curve. Since 2003, DEP and the Upstate Freshwater Institute have compiled a relatively consistent set of 15-minute turbidity observations at various locations on Esopus Creek above Ashokan Reservoir. Using daily averages of this data and streamflow observations at nearby USGS gauges, flow-turbidity rating curves were developed via linear regression. Time series analysis revealed that the linear regression residuals may be represented using an ARMA(1,2) process. Based on this information, flow-turbidity regressions with
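
The rating-curve-plus-residual-model strategy described above can be sketched with synthetic data. The AR(1) correction below is a simplified, numpy-only stand-in for the ARMA(1,2) residual model identified in the study; all numbers are illustrative, not Esopus Creek observations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily data: log flow and log turbidity linked by a rating curve
# plus a persistent AR(1) deviation (the inter-event variability).
n = 200
log_q = rng.normal(3.0, 0.5, size=n)             # log streamflow
dev = np.zeros(n)
for t in range(1, n):
    dev[t] = 0.7 * dev[t - 1] + rng.normal(0.0, 0.2)
log_turb = -1.0 + 1.2 * log_q + dev              # rating curve + persistence

# Step 1: fit the rating curve by linear regression in log-log space
A = np.column_stack([log_q, np.ones(n)])
(slope, intercept), *_ = np.linalg.lstsq(A, log_turb, rcond=None)
resid = log_turb - (slope * log_q + intercept)   # rating-curve residuals

# Step 2: fit an AR(1) to the residuals (stand-in for the ARMA(1,2) model)
phi = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])

# One-day-ahead forecast = rating curve (given a flow forecast) + correction
flow_forecast = 3.2                              # hypothetical log flow
forecast = slope * flow_forecast + intercept + phi * resid[-1]
```

The persistence term is what lets the combined model track an individual turbidity event instead of snapping back to the average flow-turbidity relationship.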

  11. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant: traditional statistical prediction methods suffer from low precision and poor interpretability, so they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, combining theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that generate large cargo volumes and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, the study establishes a logistics requirements potential model based on spatial economic principles, expanding logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  12. QSAR classification models for the prediction of endocrine disrupting activity of brominated flame retardants.

    Science.gov (United States)

    Kovarich, Simona; Papa, Ester; Gramatica, Paola

    2011-06-15

    The identification of potential endocrine disrupting (ED) chemicals is an important task for the scientific community due to their diffusion in the environment; the production and use of such compounds will be strictly regulated through the authorization process of the REACH regulation. To overcome the problem of insufficient experimental data, the quantitative structure-activity relationship (QSAR) approach is applied to predict the ED activity of new chemicals. In the present study QSAR classification models are developed, according to the OECD principles, to predict the ED potency for a class of emerging ubiquitary pollutants, viz. brominated flame retardants (BFRs). Different endpoints related to ED activity (i.e. aryl hydrocarbon receptor agonism and antagonism, estrogen receptor agonism and antagonism, androgen and progesterone receptor antagonism, T4-TTR competition, E2SULT inhibition) are modeled using the k-NN classification method. The best models are selected by maximizing the sensitivity and external predictive ability. We propose simple QSARs (based on few descriptors) characterized by internal stability, good predictive power and with a verified applicability domain. These models are simple tools that are applicable to screen BFRs in relation to their ED activity, and also to design safer alternatives, in agreement with the requirements of REACH regulation at the authorization step. Copyright © 2011 Elsevier B.V. All rights reserved.
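
The k-NN classification method named above can be sketched in a few lines; the descriptor values and labels below are purely illustrative toy data, not the BFR data set:

```python
import numpy as np

def knn_predict(X_train, y_train, X_new, k=3):
    """Majority-vote k-NN, the classification scheme named in the abstract."""
    preds = []
    for x in X_new:
        d = np.linalg.norm(X_train - x, axis=1)      # Euclidean distances
        nearest = y_train[np.argsort(d)[:k]]         # labels of k neighbours
        preds.append(np.bincount(nearest).argmax())  # majority vote
    return np.array(preds)

# Toy data: two hypothetical molecular descriptors per compound;
# label 1 = active for some ED endpoint, 0 = inactive.
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.25],
              [0.90, 0.80], [0.85, 0.90], [0.80, 0.85]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([[0.2, 0.2], [0.9, 0.9]])))  # → [0 1]
```

An applicability-domain check of the kind required by the OECD principles can be added on top, for example by refusing to predict when the distance to the nearest training compound exceeds a threshold.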

  13. A TRAINING MODEL FOR PENCAK SILAT MOVEMENT SKILLS IN CHILDREN AGED 9-12 YEARS

    Directory of Open Access Journals (Sweden)

    Bayu Iswana

    2013-04-01

Abstract This study aims to produce a training model for pencak silat (self-defence) movement skills of children aged 9-12 years. This research and development (R & D) study was conducted by adapting the R & D steps of Borg & Gall (1983, p.775), i.e., (1) information collection, (2) analysis of the collected information, (3) preliminary product development, (4) expert validation and stage 1 revision, (5) a small-scale tryout and a revision, (6) a large-scale tryout and stage 2 revision, and (7) the final product. The small-scale tryout was conducted by involving participants of Tapak Suci SD N I Padokan and Tapak Suci SD Muhamadiyah Demangan. The large-scale tryout was conducted by involving participants of Pagar Nusa Sleman and Pagar Nusa Yogyakarta City carrying out training in SD N Demangan and Persatuan Hati Bantul. The data collecting instruments included (1) interviews, (2) a score scale, (3) a model observation guide, and (4) a model effectiveness guide. The data were analyzed using quantitative and qualitative descriptive techniques. The product consists of six training models, i.e., (1) kucing dan tikus, (2) bentengan, (3) gobak sodor, (4) jala ikan, (5) berburu burung, and (6) elang dan anak ayam. The experts conclude that the model covers cognitive, affective, and psychomotor aspects, so that it is appropriate and effective to use. Keywords: model, training, pencak silat, children aged 9-12 years

  14. Evaluation on the model of performance predictions for on-line monitoring system for combined-cycle power plant

    International Nuclear Information System (INIS)

    Kim, Si Moon

    2002-01-01

This paper presents a simulation model developed to predict the design and off-design performance of an actual combined cycle power plant (S-Station in Korea), running in combination with an on-line performance monitoring system in real time. The first step in thermal performance analysis is to build an accurate performance model of the power plant; to achieve this goal, the GateCycle program was employed in developing the model. The developed model predicts design and off-design performance to within one percent over a wide range of operating conditions, so that on-line real-time performance monitoring can accurately establish both current and expected performance and help the operator identify problems before they would otherwise be noticed.

  15. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Science.gov (United States)

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.
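
The linear-versus-RKHS contrast above can be caricatured with numpy alone; the sketch below uses synthetic genotypes and ridge regression in place of the Bayesian implementations used in the study, and a Gaussian kernel as the reproducing kernel:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "genotype" matrix: 100 lines x 6 markers coded 0/1/2. The phenotype has
# a centered marker-by-marker interaction that no model linear in the markers
# can capture. All numbers are illustrative, not CIMMYT data.
X = rng.choice([0.0, 1.0, 2.0], size=(100, 6))
y = (X[:, 0] - X[:, 1]
     + 2.0 * (X[:, 2] - 1.0) * (X[:, 3] - 1.0)
     + 0.3 * rng.normal(size=100))

tr, te = slice(0, 80), slice(80, 100)

def ridge_weights(X, y, lam=1.0):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def rbf_kernel(A, B, h=8.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / h)

# Model linear in marker effects (ridge regression)
w = ridge_weights(X[tr], y[tr])
r_linear = np.corrcoef(X[te] @ w, y[te])[0, 1]

# RKHS regression: Gaussian-kernel ridge, non-linear in the markers
alpha = np.linalg.solve(rbf_kernel(X[tr], X[tr]) + np.eye(80), y[tr])
r_rkhs = np.corrcoef(rbf_kernel(X[te], X[tr]) @ alpha, y[te])[0, 1]
```

Epistatic (interaction) effects of this kind are one proposed explanation for why the non-linear RKHS and neural-network models outperformed the linear specifications in the study.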

  16. On various metrics used for validation of predictive QSAR models with applications in virtual screening and focused library design.

    Science.gov (United States)

    Roy, Kunal; Mitra, Indrani

    2011-07-01

    Quantitative structure-activity relationships (QSARs) have important applications in drug discovery research, environmental fate modeling, property prediction, etc. Validation has been recognized as a very important step for QSAR model development. As one of the important objectives of QSAR modeling is to predict activity/property/toxicity of new chemicals falling within the domain of applicability of the developed models and QSARs are being used for regulatory decisions, checking reliability of the models and confidence of their predictions is a very important aspect, which can be judged during the validation process. One prime application of a statistically significant QSAR model is virtual screening for molecules with improved potency based on the pharmacophoric features and the descriptors appearing in the QSAR model. Validated QSAR models may also be utilized for design of focused libraries which may be subsequently screened for the selection of hits. The present review focuses on various metrics used for validation of predictive QSAR models together with an overview of the application of QSAR models in the fields of virtual screening and focused library design for diverse series of compounds with citation of some recent examples.
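
One common external-validation metric of the kind reviewed here compares prediction errors on the external set against deviations from the training-set mean; a minimal sketch of the Q²F1-style statistic, with purely illustrative numbers:

```python
import numpy as np

def q2_f1(y_obs, y_pred, y_train_mean):
    """External predictive squared correlation:
    1 - PRESS / sum over the test set of (obs - training mean)^2."""
    press = ((y_obs - y_pred) ** 2).sum()
    ss = ((y_obs - y_train_mean) ** 2).sum()
    return 1.0 - press / ss

# Illustrative activity values (e.g. pIC50), not from any real data set
y_train = np.array([5.1, 6.0, 4.8, 5.5, 6.2])   # training-set observations
y_obs = np.array([5.0, 5.8, 6.1])               # external-set observations
y_pred = np.array([5.2, 5.6, 6.0])              # external-set predictions
print(round(q2_f1(y_obs, y_pred, y_train.mean()), 3))  # → 0.869
```

Metrics of this family differ mainly in which mean appears in the denominator, which is one of the points of debate the review covers.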

  17. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Science.gov (United States)

    Chen, Chi-Kan

    2017-07-26

The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of the RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using an RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. When the networks derived by RE_RMLP-RNN with different numbers of latent nodes in step one are combined by a weighted majority voting rule to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series.
The framework of two-step

  18. Metallographic assessment of Al-12Si high-pressure die casting escalator steps.

    Science.gov (United States)

    Vander Voort, George Frederic; Suárez-Peña, Beatriz; Asensio-Lozano, Juan

    2014-10-01

A microstructural characterization study was performed on high-pressure die cast specimens extracted from escalator steps manufactured from an Al-12 wt.% Si alloy designed for structural applications. Black-and-white and color light optical imaging and scanning electron microscopy were used to conduct the microstructural analysis. Most regions in the samples studied contained globular-rosette primary α-Al grains surrounded by an Al-Si eutectic aggregate, while primary dendritic α-Al grains were present in the surface layer. This dendritic microstructure was observed in the regions where the melt did not impinge directly on the die surface during cavity filling. Consequently, microstructures in the surface layer were nonuniform. Utilizing physical metallurgy principles, these results were analyzed in terms of the applied pressure and filling velocity during high-pressure die casting. The effects of these parameters on solidification at different locations of the casting are discussed.

  19. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; the second part is the “neural fuzzy inference system”, which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering its benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting models, which occupy large computation resources, are time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  20. Predictive analytics technology review: Similarity-based modeling and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, James; Doan, Don; Gandhi, Devang; Nieman, Bill

    2010-09-15

Over 11 years ago, SmartSignal introduced predictive analytics for eliminating equipment failures, using its patented SBM technology. SmartSignal continues to lead the market and, in 2010, went one step further and introduced Predictive Diagnostics. Now, SmartSignal is combining Predictive Diagnostics with RCM methodology and industry expertise. FMEA logic re-engineers maintenance work management, eliminates unneeded inspections, and focuses effort on the real issues. This integrated solution significantly lowers maintenance costs, protects against critical asset failures, improves commercial availability, and reduces work orders by 20-40%.

  1. A Novel Risk prediction Model for Patients with Combined Hepatocellular-Cholangiocarcinoma.

    Science.gov (United States)

    Tian, Meng-Xin; He, Wen-Jun; Liu, Wei-Ren; Yin, Jia-Cheng; Jin, Lei; Tang, Zheng; Jiang, Xi-Fei; Wang, Han; Zhou, Pei-Yun; Tao, Chen-Yang; Ding, Zhen-Bin; Peng, Yuan-Fei; Dai, Zhi; Qiu, Shuang-Jian; Zhou, Jian; Fan, Jia; Shi, Ying-Hong

    2018-01-01

Background: Given the difficulty of CHC diagnosis and the potential for adverse outcomes or misuse of clinical therapies, an increasing number of patients have undergone liver transplantation, transcatheter arterial chemoembolization (TACE) or other treatments. Objective: To construct a convenient and reliable risk prediction model for identifying high-risk individuals with combined hepatocellular-cholangiocarcinoma (CHC). Methods: 3369 patients who underwent surgical resection for liver cancer at Zhongshan Hospital were enrolled in this study. The epidemiological and clinical characteristics of the patients were collected at the time of tumor diagnosis. Candidate variables were entered into a multivariable model, and the area under the receiver operating characteristic curve was used to assess model discrimination. Calibration was assessed using the Hosmer-Lemeshow test and a calibration curve. Internal validation was performed using a bootstrapping approach. Results: Among the entire study population, 250 patients (7.42%) were pathologically defined as having CHC. Age, HBcAb, red blood cells (RBC), blood urea nitrogen (BUN), AFP, CEA and portal vein tumor thrombus (PVTT) were included in the final risk prediction model (area under the curve, 0.69; 95% confidence interval, 0.51-0.77). Bootstrapping validation presented negligible optimism. When the risk threshold of the prediction model was set at 20%, 2.73% of the patients diagnosed with liver cancer would receive a definite diagnosis, and the model could identify CHC patients with 12.40% sensitivity, 98.04% specificity, and a positive predictive value of 33.70%. Conclusions: This study established a risk prediction model incorporating clinical risk predictors and CT/MRI-detected PVTT status that could be adopted to facilitate the preoperative diagnosis of CHC.
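
The reported operating characteristics can be cross-checked with Bayes' rule; the small sketch below uses the figures quoted above (the third-decimal difference from the published 33.70% is explained by rounding of the published inputs):

```python
# Bayes'-rule cross-check of the reported operating characteristics at the
# 20% risk threshold: prevalence 7.42%, sensitivity 12.40%, specificity 98.04%.
prev, sens, spec = 0.0742, 0.1240, 0.9804

tp = sens * prev                   # true-positive probability mass
fp = (1.0 - spec) * (1.0 - prev)   # false-positive probability mass
ppv = tp / (tp + fp)
print(round(ppv, 3))  # → 0.336, consistent with the reported PPV of 33.70%
```

The same identity shows why the PPV stays modest despite the high specificity: CHC is rare (7.42%), so even a small false-positive rate produces many false positives.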

  2. Short-term and long-term thermal prediction of a walking beam furnace using neuro-fuzzy techniques

    Directory of Open Access Journals (Sweden)

    Banadaki Hamed Dehghan

    2015-01-01

Full Text Available The walking beam furnace (WBF) is one of the most prominent process plants in an alloy steel production factory, characterized by high non-linearity, strong coupling, time delay, a large time constant, and time variation in its parameter set and structure. From another viewpoint, the WBF is a distributed-parameter process in which the distribution of temperature is not uniform. Hence, this process plant has complicated non-linear dynamic equations that have not yet been worked out. In this paper, we propose a one-step non-linear predictive model for a real WBF using non-linear black-box sub-system identification based on a locally linear neuro-fuzzy (LLNF) model. Furthermore, a multi-step predictive model with a long prediction horizon (ninety seconds ahead), developed by sequential application of the one-step predictive models, is also presented for the first time. These models are trained by the locally linear model tree (LOLIMOT), an incremental tree-based algorithm. Comparing the performance of the one-step LLNF predictive models with the corresponding models obtained through a least-squares-error (LSE) solution proves that all operating zones of the WBF are non-linear sub-systems. Recorded data from the Iran Alloy Steel factory are utilized for identification and evaluation of the proposed neuro-fuzzy predictive models of the WBF process.
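
The "sequential application of the one-step model" idea is simple to illustrate. The sketch below uses an assumed scalar local-linear map standing in for one fitted LLNF operating zone (the coefficients, step length, and units are hypothetical, not furnace data):

```python
# One-step model x[t+1] = f(x[t], u[t]); the coefficients below are assumed
# values standing in for one fitted local-linear zone of the furnace model.
a, b = 0.95, 0.4   # hypothetical temperature persistence and heat-input gain

def one_step(x, u):
    return a * x + b * u

def multi_step(x0, inputs):
    """Chain the one-step predictor over the horizon (sequential application,
    as described for the ninety-second-ahead model)."""
    x, traj = x0, []
    for u in inputs:
        x = one_step(x, u)
        traj.append(x)
    return traj

# Predict 9 steps ahead (e.g. 10 s per step = 90 s) from temperature 1000
# with constant control input 50; the state relaxes toward b*u/(1-a) = 400.
print(round(multi_step(1000.0, [50.0] * 9)[-1], 2))  # → 778.15
```

Chaining a one-step model this way compounds its errors, which is why a long-horizon model validated as a whole, as in the abstract, is a stronger result than the one-step fit alone.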

  3. Leveraging electronic health records for predictive modeling of post-surgical complications.

    Science.gov (United States)

    Weller, Grant B; Lovely, Jenna; Larson, David W; Earnshaw, Berton A; Huebner, Marianne

    2017-01-01

    Hospital-specific electronic health record systems are used to inform clinical practice about best practices and quality improvements. Many surgical centers have developed deterministic clinical decision rules to discover adverse events (e.g. postoperative complications) using electronic health record data. However, these data provide opportunities to use probabilistic methods for early prediction of adverse health events, which may be more informative than deterministic algorithms. Electronic health record data from a set of 9598 colorectal surgery cases from 2010 to 2014 were used to predict the occurrence of selected complications including surgical site infection, ileus, and bleeding. Consistent with previous studies, we find a high rate of missing values for both covariates and complication information (4-90%). Several machine learning classification methods are trained on an 80% random sample of cases and tested on a remaining holdout set. Predictive performance varies by complication, although an area under the receiver operating characteristic curve as high as 0.86 on testing data was achieved for bleeding complications, and accuracy for all complications compares favorably to existing clinical decision rules. Our results confirm that electronic health records provide opportunities for improved risk prediction of surgical complications; however, consideration of data quality and consistency standards is an important step in predictive modeling with such data.

  4. Stepped approach for prediction of syndrome Z in patients attending sleep clinic: a north Indian hospital-based study.

    Science.gov (United States)

    Agrawal, Swastik; Sharma, Surendra Kumar; Sreenivas, Vishnubhatla; Lakshmy, Ramakrishnan; Mishra, Hemant K

    2012-09-01

Syndrome Z is the occurrence of metabolic syndrome (MS) with obstructive sleep apnea. Knowledge of its risk factors is useful to screen patients requiring further evaluation for syndrome Z. Consecutive patients referred from sleep clinic undergoing polysomnography in the Sleep Laboratory of AIIMS Hospital, New Delhi were screened between June 2008 and May 2010, and 227 patients were recruited. Anthropometry, body composition analysis, blood pressure, fasting blood sugar, and lipid profile were measured. MS was defined using the National Cholesterol Education Program (adult treatment panel III) criteria, with Asian cutoff values for abdominal obesity. Prevalence of MS and syndrome Z was 74% and 65%, respectively. Age, percent body fat, excessive daytime sleepiness (EDS), and ΔSaO2 (defined as the difference between baseline and minimum SaO2 during polysomnography) were independently associated with syndrome Z. Using a cutoff of 15% for level of desaturation, the stepped predictive score using these risk factors had sensitivity, specificity, positive predictive value, and negative predictive value of 75%, 73%, 84%, and 61%, respectively for the diagnosis of syndrome Z. It correctly characterized presence of syndrome Z 75% of the time and obviated need for detailed evaluation in 42% of the screened subjects. A large proportion of patients presenting to sleep clinics have MS and syndrome Z. Age, percent body fat, EDS, and ΔSaO2 are independent risk factors for syndrome Z. A stepped predictive score using these parameters is cost-effective and useful in diagnosing syndrome Z in resource-limited settings.

  5. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  6. Using synchronization in multi-model ensembles to improve prediction

    Science.gov (United States)

    Hiemstra, P.; Selten, F.

    2012-04-01

    In recent decades, many climate models have been developed to understand and predict the behavior of the Earth's climate system. Although these models are all based on the same basic physical principles, they still show different behavior, caused for example by the choice of how to parametrize sub-grid scale processes. One method to combine these imperfect models is to run a multi-model ensemble: the models are given identical initial conditions and are integrated forward in time, and a multi-model estimate can, for example, be a weighted mean of the ensemble members. We propose to go a step further and try to obtain synchronization between the imperfect models by connecting the members of the multi-model ensemble and exchanging information between them. The combined multi-model ensemble is also known as a supermodel. The supermodel has learned from observations how to optimally exchange information between the ensemble members. In this study we focused on the density and formulation of the connections within the supermodel. The main question was whether we could obtain synchronization between two climate models when connecting only a subset of their state spaces. Limiting the connected subspace has two advantages: 1) it limits the transfer of data (bytes) between the ensemble members, which can be a limiting factor in large-scale climate models, and 2) learning the optimal connection strategy from observations is easier. To answer the research question, we connected two identical quasi-geostrophic (QG) atmospheric models to each other, where the models have different initial conditions. The QG model is a qualitatively realistic simulation of the winter flow on the Northern hemisphere, has three layers and uses a spectral implementation. We connected the models in the original spherical harmonic state space, and in linear combinations of these spherical harmonics, i.e. Empirical Orthogonal Functions (EOFs). 
We show that when connecting through spherical harmonics, we only need to connect 28% of
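The synchronization-through-a-subspace idea can be illustrated on a far smaller chaotic system. The sketch below couples two identical Lorenz-63 models (a stand-in for the QG models; all parameter values are illustrative, not the study's) through the x component only, and shows the synchronization error collapsing:

```python
import math

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def coupled_run(k=20.0, dt=0.002, n=20000):
    """Two identical chaotic models from different initial states,
    exchanging information only through the x component."""
    a, b = (1.0, 1.0, 1.0), (6.0, -3.0, 15.0)
    err0 = dist(a, b)
    for _ in range(n):
        da, db = lorenz_rhs(a), lorenz_rhs(b)
        # diffusive coupling restricted to a subset of the state space (x only)
        da = (da[0] + k * (b[0] - a[0]), da[1], da[2])
        db = (db[0] + k * (a[0] - b[0]), db[1], db[2])
        a = tuple(v + d * dt for v, d in zip(a, da))
        b = tuple(v + d * dt for v, d in zip(b, db))
    return err0, dist(a, b)

e0, e1 = coupled_run()
print(e0, e1)  # the synchronization error shrinks by many orders of magnitude
```

Even though y and z are never exchanged, coupling x alone is enough to synchronize the full state, which is the spirit of connecting only a subset of the state space.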

  7. Color Shift Modeling of Light-Emitting Diode Lamps in Step-Loaded Stress Testing

    OpenAIRE

    Cai, Miao; Yang, Daoguo; Huang, J.; Zhang, Maofen; Chen, Xianping; Liang, Caihang; Koh, S.W.; Zhang, G.Q.

    2017-01-01

    The color coordinate shift of light-emitting diode (LED) lamps is investigated by running three stress-loaded testing methods, namely step-up stress accelerated degradation testing, step-down stress accelerated degradation testing, and constant stress accelerated degradation testing. A power model is proposed as the statistical model of the color shift (CS) process of LED products. Consequently, a CS mechanism constant is obtained for detecting the consistency of CS mechanisms among various s...

  8. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  9. Model-on-Demand Predictive Control for Nonlinear Hybrid Systems With Application to Adaptive Behavioral Interventions

    Science.gov (United States)

    Nandola, Naresh N.; Rivera, Daniel E.

    2011-01-01

    This paper presents a data-centric modeling and predictive control approach for nonlinear hybrid systems. System identification of hybrid systems represents a challenging problem because model parameters depend on the mode or operating point of the system. The proposed algorithm applies Model-on-Demand (MoD) estimation to generate a local linear approximation of the nonlinear hybrid system at each time step, using a small subset of data selected by an adaptive bandwidth selector. The appeal of the MoD approach lies in the fact that model parameters are estimated based on a current operating point; hence estimation of locations or modes governed by autonomous discrete events is achieved automatically. The local MoD model is then converted into a mixed logical dynamical (MLD) system representation which can be used directly in a model predictive control (MPC) law for hybrid systems using multiple-degree-of-freedom tuning. The effectiveness of the proposed MoD predictive control algorithm for nonlinear hybrid systems is demonstrated on a hypothetical adaptive behavioral intervention problem inspired by Fast Track, a real-life preventive intervention for improving parental function and reducing conduct disorder in at-risk children. Simulation results demonstrate that the proposed algorithm can be useful for adaptive intervention problems exhibiting both nonlinear and hybrid character. PMID:21874087
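The Model-on-Demand step, estimating a local linear model from data near the current operating point, can be sketched in one dimension. The tricube weighting and fixed bandwidth below are illustrative assumptions, not the paper's adaptive estimator:

```python
def local_linear_predict(xs, ys, x0, bandwidth=0.15):
    """Weighted least-squares fit of y ~ a + b*(x - x0) on points near x0;
    the intercept a is the local model's prediction at x0."""
    pts = [(x, y) for x, y in zip(xs, ys) if abs(x - x0) < bandwidth]
    w = [(1 - (abs(x - x0) / bandwidth) ** 3) ** 3 for x, _ in pts]
    sw = sum(w)
    sx = sum(wi * (x - x0) for wi, (x, _) in zip(w, pts))
    sy = sum(wi * y for wi, (_, y) in zip(w, pts))
    sxx = sum(wi * (x - x0) ** 2 for wi, (x, _) in zip(w, pts))
    sxy = sum(wi * (x - x0) * y for wi, (x, y) in zip(w, pts))
    det = sw * sxx - sx * sx
    return (sxx * sy - sx * sxy) / det   # intercept = prediction at x0

# Data from a nonlinear system y = x^2; local linear model requested at x0 = 0.5
xs = [i / 50 for i in range(51)]
ys = [x * x for x in xs]
print(local_linear_predict(xs, ys, 0.5))  # close to 0.25
```

At each time step a controller in this spirit would refit such a local model at the new operating point and hand it to the MPC law.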

  10. A hypothetical model for predicting the toxicity of high aspect ratio nanoparticles (HARN)

    Science.gov (United States)

    Tran, C. L.; Tantra, R.; Donaldson, K.; Stone, V.; Hankin, S. M.; Ross, B.; Aitken, R. J.; Jones, A. D.

    2011-12-01

The ability to predict nanoparticle (dimensional structures which are less than 100 nm in size) toxicity through the use of a suitable model is an important goal if nanoparticles are to be regulated in terms of exposures and toxicological effects. Recently, a model to predict toxicity of nanoparticles with high aspect ratio has been put forward by a consortium of scientists. The high aspect ratio nanoparticles (HARN) model is a platform that relates the physical dimensions of HARN (specifically length and diameter ratio) and biopersistence to their toxicity in biological environments. Potentially, this model is of great public health and economic importance, as it can be used as a tool not only to predict toxicological activity but to classify the toxicity of various fibrous nanoparticles, without the need to carry out time-consuming and expensive toxicology studies. However, this model of toxicity is currently hypothetical in nature and is based solely on drawing similarities in its dimensional geometry with that of asbestos and synthetic vitreous fibres. The aim of this review is two-fold: (a) to present findings from past literature on the physicochemical property and pathogenicity bioassay testing of HARN, and (b) to identify some of the challenges and future research steps crucial before the HARN model can be accepted as a predictive model. By presenting what has been done, we are able to identify scientific challenges and research directions that are needed for the HARN model to gain public acceptance. Our recommendations for future research include the need to: (a) accurately link physicochemical data with corresponding pathogenicity assay data, through the use of suitable reference standards and standardised protocols, (b) develop better tools/techniques for physicochemical characterisation, (c) develop better ways of monitoring HARN in the workplace, and (d) reliably measure dose exposure levels, in order to support future epidemiological

  11. A hypothetical model for predicting the toxicity of high aspect ratio nanoparticles (HARN)

    International Nuclear Information System (INIS)

    Tran, C. L.; Tantra, R.; Donaldson, K.; Stone, V.; Hankin, S. M.; Ross, B.; Aitken, R. J.; Jones, A. D.

    2011-01-01

The ability to predict nanoparticle (dimensional structures which are less than 100 nm in size) toxicity through the use of a suitable model is an important goal if nanoparticles are to be regulated in terms of exposures and toxicological effects. Recently, a model to predict toxicity of nanoparticles with high aspect ratio has been put forward by a consortium of scientists. The high aspect ratio nanoparticles (HARN) model is a platform that relates the physical dimensions of HARN (specifically length and diameter ratio) and biopersistence to their toxicity in biological environments. Potentially, this model is of great public health and economic importance, as it can be used as a tool not only to predict toxicological activity but to classify the toxicity of various fibrous nanoparticles, without the need to carry out time-consuming and expensive toxicology studies. However, this model of toxicity is currently hypothetical in nature and is based solely on drawing similarities in its dimensional geometry with that of asbestos and synthetic vitreous fibres. The aim of this review is two-fold: (a) to present findings from past literature on the physicochemical property and pathogenicity bioassay testing of HARN, and (b) to identify some of the challenges and future research steps crucial before the HARN model can be accepted as a predictive model. By presenting what has been done, we are able to identify scientific challenges and research directions that are needed for the HARN model to gain public acceptance. Our recommendations for future research include the need to: (a) accurately link physicochemical data with corresponding pathogenicity assay data, through the use of suitable reference standards and standardised protocols, (b) develop better tools/techniques for physicochemical characterisation, (c) develop better ways of monitoring HARN in the workplace, and (d) reliably measure dose exposure levels, in order to support future epidemiological

  12. Neural and hybrid modeling: an alternative route to efficiently predict the behavior of biotechnological processes aimed at biofuels obtainment.

    Science.gov (United States)

    Curcio, Stefano; Saraceno, Alessandra; Calabrò, Vincenza; Iorio, Gabriele

    2014-01-01

The present paper was aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, can efficiently predict the behavior of two biotechnological processes designed for obtaining second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step in biodiesel production, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was shown that the proposed modeling approaches provided very accurate predictions of system behavior. Both neural network and hybrid modeling represent a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, of the true kinetic mechanisms, and of the transport phenomena involved in biotechnological processes is difficult to achieve.
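A minimal sketch of the hybrid (serial) idea, assuming Monod growth kinetics and purely illustrative parameter values: a data-driven kinetic term is first identified from observations, then embedded in a first-principles mass balance:

```python
# Step 1 (data-driven): identify growth kinetics mu(S) from (S, mu) observations
# via a linear least-squares fit on the Lineweaver-Burk transform
# 1/mu = (Ks/mu_max) * (1/S) + 1/mu_max.
mu_max_true, Ks_true = 0.40, 2.0          # illustrative "unknown" kinetics
S_obs = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
mu_obs = [mu_max_true * S / (Ks_true + S) for S in S_obs]

xs = [1.0 / S for S in S_obs]
ys = [1.0 / mu for mu in mu_obs]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
mu_max_hat = 1.0 / intercept
Ks_hat = slope * mu_max_hat

# Step 2 (first-principles): embed the identified kinetics in a
# biomass/substrate mass balance and integrate with explicit Euler.
X, S, Y, dt = 0.1, 10.0, 0.5, 0.01        # biomass, substrate, yield, time step
for _ in range(1000):                      # 10 h of simulated growth
    mu = mu_max_hat * S / (Ks_hat + S)
    X, S = X + mu * X * dt, max(S - mu * X / Y * dt, 0.0)
print(round(mu_max_hat, 3), round(Ks_hat, 3), round(X, 3))
```

In a full hybrid model the algebraic kinetic term would typically be a trained neural network rather than a two-parameter fit, but the serial structure is the same.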

  13. Preliminary Empirical Models for Predicting Shrinkage, Part Geometry and Metallurgical Aspects of Ti-6Al-4V Shaped Metal Deposition Builds

    Science.gov (United States)

    Escobar-Palafox, Gustavo; Gault, Rosemary; Ridgway, Keith

    2011-12-01

Shaped Metal Deposition (SMD) is an additive manufacturing process which creates parts layer by layer by weld depositions. In this work, empirical models that predict part geometry (wall thickness and outer diameter) and some metallurgical aspects (i.e. surface texture, portion of finer Widmanstätten microstructure) for the SMD process were developed. The models are based on an orthogonal fractional factorial design of experiments with four factors at two levels. The factors considered were energy level (a relationship between heat source power and the rate of raw material input), step size, programmed diameter and travel speed. The models were validated using previous builds; the prediction error for part geometry was under 11%. Several relationships between the factors and responses were identified. Current had a significant effect on wall thickness; thickness increases with increasing current. Programmed diameter had a significant effect on the percentage of shrinkage; this decreased with increasing component size. Surface finish decreased with decreasing step size and current.

  14. Preliminary Empirical Models for Predicting Shrinkage, Part Geometry and Metallurgical Aspects of Ti-6Al-4V Shaped Metal Deposition Builds

    International Nuclear Information System (INIS)

    Escobar-Palafox, Gustavo; Gault, Rosemary; Ridgway, Keith

    2011-01-01

Shaped Metal Deposition (SMD) is an additive manufacturing process which creates parts layer by layer by weld depositions. In this work, empirical models that predict part geometry (wall thickness and outer diameter) and some metallurgical aspects (i.e. surface texture, portion of finer Widmanstätten microstructure) for the SMD process were developed. The models are based on an orthogonal fractional factorial design of experiments with four factors at two levels. The factors considered were energy level (a relationship between heat source power and the rate of raw material input), step size, programmed diameter and travel speed. The models were validated using previous builds; the prediction error for part geometry was under 11%. Several relationships between the factors and responses were identified. Current had a significant effect on wall thickness; thickness increases with increasing current. Programmed diameter had a significant effect on the percentage of shrinkage; this decreased with increasing component size. Surface finish decreased with decreasing step size and current.

  15. Problem Resolution through Electronic Mail: A Five-Step Model.

    Science.gov (United States)

    Grandgenett, Neal; Grandgenett, Don

    2001-01-01

    Discusses the use of electronic mail within the general resolution and management of administrative problems and emphasizes the need for careful attention to problem definition and clarity of language. Presents a research-based five-step model for the effective use of electronic mail based on experiences at the University of Nebraska at Omaha.…

  16. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models

    Directory of Open Access Journals (Sweden)

    Scott E. Field

    2014-07-01

We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + mc_{fit}) online operations, where c_{fit} denotes the fitting function operation count and, typically, m≪L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^{5}M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in
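The first offline step, greedy construction of a reduced basis, can be sketched on a toy family of signals (plain sinusoids here; real waveform families and their inner products are far richer than this stand-in):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def greedy_basis(family, n_basis):
    """Greedy selection: repeatedly add the family member worst represented
    by the current orthonormal basis (classic reduced-basis construction)."""
    basis, hist = [], []
    residuals = {k: list(w) for k, w in family.items()}
    for _ in range(n_basis):
        worst = max(residuals, key=lambda k: norm(residuals[k]))
        hist.append(norm(residuals[worst]))       # max residual before this step
        e = residuals[worst]
        e = [x / norm(e) for x in e]              # normalize (Gram-Schmidt step)
        basis.append(e)
        for k, r in residuals.items():            # deflate every residual
            c = dot(r, e)
            residuals[k] = [x - c * ei for x, ei in zip(r, e)]
    return basis, hist

# Toy "waveform" family: h(t; f) = sin(2*pi*f*t) for frequencies f in [1, 2]
ts = [i / 100 for i in range(101)]
family = {f: [math.sin(2 * math.pi * f * t) for t in ts]
          for f in [1 + j / 20 for j in range(21)]}
basis, hist = greedy_basis(family, 10)
print(hist[0], hist[-1])   # the worst-case residual collapses rapidly
```

The rapid decay of the worst-case residual is the property that lets a handful of basis elements, and later an empirical interpolant at m times, stand in for the whole family.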

  17. Prediction of Critical Power and W' in Hypoxia: Application to Work-Balance Modelling.

    Science.gov (United States)

    Townsend, Nathan E; Nichols, David S; Skiba, Philip F; Racinais, Sebastien; Périard, Julien D

    2017-01-01

Purpose: Develop a prediction equation for critical power (CP) and work above CP (W') in hypoxia for use in the work-balance ([Formula: see text]) model. Methods: Nine trained male cyclists completed cycling time trials (TT; 12, 7, and 3 min) to determine CP and W' at five altitudes (250, 1,250, 2,250, 3,250, and 4,250 m). Least squares regression was used to predict CP and W' at altitude. A high-intensity intermittent test (HIIT) was performed at 250 and 2,250 m. Actual and predicted CP and W' were used to compute W' during HIIT using differential ([Formula: see text]) and integral ([Formula: see text]) forms of the [Formula: see text] model. Results: CP decreased at altitude. Conclusion: The prediction equations for CP and W' developed in this study are suitable for use with the [Formula: see text] model in acute hypoxia. This enables the application of [Formula: see text] modelling to training prescription and competition analysis at altitude.
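The [Formula: see text] placeholders elide the W'-balance equations. A common differential form from the published W'bal literature (an assumption here, since the abstract does not print its equations) depletes W' linearly above CP and recovers it in proportion to the remaining deficit below CP:

```python
def wbal_diff(power, cp, w_prime, dt=1.0):
    """Differential W' balance: returns the W'bal time series (joules)."""
    wbal, out = w_prime, []
    for p in power:
        if p >= cp:
            wbal -= (p - cp) * dt                               # linear depletion
        else:
            wbal += (cp - p) * (w_prime - wbal) / w_prime * dt  # recovery
        out.append(wbal)
    return out

# Illustrative: CP = 250 W, W' = 20 kJ; 60 s at 350 W, then 120 s at 150 W
trace = wbal_diff([350] * 60 + [150] * 120, cp=250, w_prime=20000)
print(trace[59], trace[-1])  # 14000.0 after the work bout, then partial recovery
```

Altitude-adjusted CP and W' from the study's prediction equations would simply replace the illustrative `cp` and `w_prime` values here.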

  18. THERMAL PHASE VARIATIONS OF WASP-12b: DEFYING PREDICTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Cowan, Nicolas B.; Shekhtman, Louis M. [Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) and Department of Physics and Astronomy, Northwestern University, 2131 Tech Dr, Evanston, IL 60208 (United States); Machalek, Pavel [SETI Institute, 189 Bernardo Ave., Suite 100, Mountain View, CA 94043 (United States); Croll, Bryce [Department of Astronomy and Astrophysics, University of Toronto, 50 George St., Toronto, ON, M5S 3H4 (Canada); Burrows, Adam [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 05844 (United States); Deming, Drake [Planetary Systems Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Greene, Tom [NASA Ames Research Center, Moffett Field, CA 94035 (United States); Hora, Joseph L., E-mail: n-cowan@northwestern.edu [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States)

    2012-03-01

We report Warm Spitzer full-orbit phase observations of WASP-12b at 3.6 and 4.5 μm. This extremely inflated hot Jupiter is thought to be overflowing its Roche lobe, undergoing mass loss and accretion onto its host star, and has been claimed to have a C/O ratio in excess of unity. We are able to measure the transit depths, eclipse depths, thermal and ellipsoidal phase variations at both wavelengths. The large-amplitude phase variations, combined with the planet's previously measured dayside spectral energy distribution, are indicative of non-zero Bond albedo and very poor day-night heat redistribution. The transit depths in the mid-infrared, (R_p/R_*)^2 = 0.0123(3) and 0.0111(3) at 3.6 and 4.5 μm, respectively, indicate that the atmospheric opacity is greater at 3.6 than at 4.5 μm, in disagreement with model predictions, irrespective of C/O ratio. The secondary eclipse depths are consistent with previous studies: F_day/F_* = 0.0038(4) and 0.0039(3) at 3.6 and 4.5 μm, respectively. We do not detect ellipsoidal variations at 3.6 μm, but our parameter uncertainties, estimated via prayer-bead Monte Carlo, keep this non-detection consistent with model predictions. At 4.5 μm, on the other hand, we detect ellipsoidal variations that are much stronger than predicted. If interpreted as a geometric effect due to the planet's elongated shape, these variations imply a 3:2 ratio for the planet's longest:shortest axes and a relatively bright day-night terminator. If we instead presume that the 4.5 μm ellipsoidal variations are due to uncorrected systematic noise and we fix the amplitude of the variations to zero, the best-fit 4.5 μm transit depth becomes commensurate with the 3.6 μm depth, within the uncertainties. The relative transit depths are then consistent with a solar composition and short scale height at the terminator. Assuming zero ellipsoidal variations also yields a much deeper 4.5 μm eclipse depth

  19. [Application of R-based multiple seasonal ARIMA model, in predicting the incidence of hand, foot and mouth disease in Shaanxi province].

    Science.gov (United States)

    Liu, F; Zhu, N; Qiu, L; Wang, J J; Wang, W H

    2016-08-10

To apply the auto-regressive integrated moving average (ARIMA) product seasonal model to predicting the number of hand, foot and mouth disease cases in Shaanxi province. The trend of hand, foot and mouth disease in Shaanxi province was analyzed and tested, using R software, between January 2009 and June 2015. A multiple seasonal ARIMA model was then fitted to the time series to predict the number of hand, foot and mouth disease cases in 2016 and 2017. A seasonal effect was seen in hand, foot and mouth disease in Shaanxi province. A multiple seasonal ARIMA (2,1,0)×(1,1,0)12 model was established, with the equation (1−B)(1−B^12)ln(X_t) = [(1−1.000B)/((1−0.532B−0.363B^2)(1−0.644B^12−0.454B^24))]ε_t. The mean absolute error and the mean relative error were 531.535 and 0.114, respectively, when comparing simulated and actual numbers of patients from June to December 2015. Prediction with the multiple seasonal ARIMA model showed that the numbers of patients in both 2016 and 2017 were similar to that of 2015 in Shaanxi province. The multiple seasonal ARIMA (2,1,0)×(1,1,0)12 model can be used to successfully predict the incidence of hand, foot and mouth disease in Shaanxi province.
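The (1−B)(1−B^12) operator in the fitted model removes a linear trend and a period-12 seasonal pattern before the AR terms act, which is what makes the differenced series stationary. A sketch on synthetic monthly counts (illustrative values, not Shaanxi data):

```python
def diff(series, lag=1):
    """Apply the (1 - B^lag) differencing operator to a series."""
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

# Monthly counts with a linear trend plus a fixed period-12 seasonal profile
season = [5, 3, 2, 4, 9, 15, 22, 18, 12, 8, 6, 4]   # illustrative seasonality
x = [100 + 2.5 * t + season[t % 12] for t in range(72)]

d = diff(diff(x, lag=12), lag=1)   # (1 - B)(1 - B^12) x_t
print(max(abs(v) for v in d))      # 0.0: trend and seasonality fully removed
```

Real counts would leave a stationary noise series here, to which the non-seasonal and seasonal AR terms are then fitted (e.g. with R's `arima` or Python's statsmodels SARIMAX).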

  20. Improved ensemble-mean forecast skills of ENSO events by a zero-mean stochastic model-error model of an intermediate coupled model

    Science.gov (United States)

    Zheng, F.; Zhu, J.

    2015-12-01

To perform an ensemble-based ENSO probabilistic forecast, the crucial issue is to design a reliable ensemble prediction strategy that includes the major uncertainties of the forecast system. In this study, we developed a new general ensemble perturbation technique to improve the ensemble-mean predictive skill of forecasting ENSO using an intermediate coupled model (ICM). The model uncertainties are first estimated and analyzed from EnKF analysis results obtained by assimilating observed SST. Then, based on the pre-analyzed properties of the model errors, a zero-mean stochastic model-error model is developed to represent the model uncertainties induced by important physical processes missing in the coupled model (i.e., stochastic atmospheric forcing/MJO, extra-tropical cooling and warming, the Indian Ocean Dipole mode, etc.). Each member of an ensemble forecast is perturbed by the stochastic model-error model at each step of the 12-month forecast process, and the stochastic perturbations are added to the modeled physical fields to mimic the presence of these high-frequency stochastic noises and model biases and their effect on the predictability of the coupled system. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr retrospective forecast experiments. The two forecast schemes are differentiated by whether they consider the model stochastic perturbations, with both initialized by the ensemble-mean analysis states from EnKF. The comparison results suggest that the stochastic model-error perturbations have significant and positive impacts on improving the ensemble-mean prediction skills during the entire 12-month forecast process. 
Because the nonlinear feature of the coupled model can induce the nonlinear growth of the added stochastic model errors with model integration, especially through the nonlinear heating mechanism with the vertical advection term of the model, the
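The zero-mean property of the perturbations matters: in a linear regime they leave the ensemble mean near the control run, and only the model's nonlinearity lets them shift it, as the abstract notes. A toy scalar forecast model (not the ICM; all values illustrative) shows the linear case:

```python
import random

def forecast(x0, steps, perturb=0.0, rng=None):
    """Toy damped-persistence forecast model; optional zero-mean noise
    injected at every step stands in for the stochastic model-error model."""
    x = x0
    for _ in range(steps):
        x = 0.9 * x + 0.5                 # deterministic model step
        if rng is not None:
            x += rng.gauss(0.0, perturb)  # zero-mean stochastic model error
    return x

rng = random.Random(42)
control = forecast(2.0, steps=12)         # unperturbed run
members = [forecast(2.0, steps=12, perturb=0.1, rng=rng) for _ in range(2000)]
ens_mean = sum(members) / len(members)
print(control, round(ens_mean, 3))        # ensemble mean stays near the control
```

In a nonlinear coupled model the perturbed ensemble mean would differ systematically from the control, which is the mechanism the study exploits to improve ensemble-mean skill.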

  1. Cell-model prediction of the melting of a Lennard-Jones solid

    International Nuclear Information System (INIS)

    Holian, B.L.

    1980-01-01

    The classical free energy of the Lennard-Jones 6-12 solid is computed from a single-particle anharmonic cell model with a correction to the entropy given by the classical correlational entropy of quasiharmonic lattice dynamics. The free energy of the fluid is obtained from the Hansen-Ree analytic fit to Monte Carlo equation-of-state calculations. The resulting predictions of the solid-fluid coexistence curves by this corrected cell model of the solid are in excellent agreement with the computer experiments
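The pair interaction underlying the cell model is the Lennard-Jones 6-12 potential. In reduced units (ε = σ = 1), a quick check of its analytic minimum:

```python
def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones 6-12 pair potential u(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

r_min = 2.0 ** (1.0 / 6.0)   # analytic location of the potential minimum
print(lj(r_min))              # -1.0, the well depth -eps in reduced units
```

The cell-model free energy itself comes from integrating the Boltzmann factor of this potential over a single particle's cell, which is beyond this sketch.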

  2. Evaluation of the DayCent model to predict carbon fluxes in French crop sites

    Science.gov (United States)

    Fujisaki, Kenji; Martin, Manuel P.; Zhang, Yao; Bernoux, Martial; Chapuis-Lardy, Lydie

    2017-04-01

Croplands in temperate regions are an important component of the carbon balance and can act as a sink or a source of carbon, depending on pedoclimatic conditions and management practices. Therefore, evaluating carbon fluxes in croplands with a modelling approach is relevant in the context of global change. This study was part of the Comete-Global project funded by the multi-partner call FACCE JPI. Carbon fluxes, net ecosystem exchange (NEE), leaf area index (LAI), biomass, and grain production were simulated at the site level for three French crop experiments from the CarboEurope project. Several crops were studied, including winter wheat, rapeseed, barley, maize, and sunflower. Daily NEE was measured with eddy covariance and could be partitioned between gross primary production (GPP) and total ecosystem respiration (TER). Measurements were compared to simulations from DayCent, a process-based model that predicts plant production and soil organic matter turnover at a daily time step. We compared two versions of the model: the original one with a simplified plant module and a newer version that simulates LAI. Input data for modelling were soil properties, climate, and management practices. Simulations of grain yields and biomass production were acceptable when using optimized crop parameters. Simulation of NEE was also acceptable. GPP predictions were improved with the newer version of the model, eliminating temporal shifts that could be observed with the original model. TER was underestimated by the model. Predicted NEE was more sensitive to soil tillage and nitrogen applications than measured NEE. DayCent was therefore a relevant tool to predict carbon fluxes in French crops at the site level. The introduction of LAI into the model improved its performance.

  3. Multivariate Models for Prediction of Human Skin Sensitization Hazard

    Science.gov (United States)

    Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole

    2016-01-01

    One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324
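As a stand-in for the linear classifiers evaluated in the abstract (the actual logistic regression and SVM models are not reproduced here), a classic perceptron on toy "assay-like" features shows how a linear rule can combine binary assay calls with a continuous descriptor; the data and labels below are invented and linearly separable:

```python
def train_perceptron(X, y, epochs=400):
    """Classic perceptron updates; converges to zero training errors
    whenever the data are linearly separable."""
    w = [0.0] * (len(X[0]) + 1)          # bias plus one weight per feature
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            pred = 1 if z >= 0 else 0
            if pred != yi:               # mistake-driven update
                d = yi - pred            # +1 or -1
                w[0] += d
                for j, xj in enumerate(xi):
                    w[j + 1] += d * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 if z >= 0 else 0

# Two binary "assay calls" plus one continuous descriptor per substance
X = [[0, 0, 0.1], [0, 1, 0.2], [1, 0, 0.3], [1, 1, 0.9],
     [0, 1, 0.8], [1, 0, 0.7], [1, 1, 0.4], [0, 0, 0.6]]
y = [0, 0, 0, 1, 1, 1, 1, 0]
w = train_perceptron(X, y)
acc = sum(predict(w, xi) == yi for xi, yi in zip(X, y)) / len(y)
print(acc)
```

Real evaluations would of course report accuracy on an external test set, as the abstract's 24-substance holdout does, rather than on the training data.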

  4. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: (1) uncertainty in HETT is relatively small for early times and increases with transit times; (2) uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; (3) introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and (4) hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model (`virtual reality') is then developed based on that conceptual model
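The Monte-Carlo step, propagating uncertain subsurface properties into predicted transit times, can be sketched for a single homogeneous block (illustrative values only; nothing here reproduces the HydroGeoSphere model). Because travel time varies as 1/K, uncertainty in hydraulic conductivity K both spreads and shifts the predicted times:

```python
import math
import random

def transit_time(K, L=50.0, gradient=0.01, porosity=0.3):
    """Advective travel time through a homogeneous block (Darcy velocity q = K*i)."""
    return L * porosity / (K * gradient)

rng = random.Random(7)
# Uncertain hydraulic conductivity: lognormal prior (illustrative parameters)
samples = [transit_time(math.exp(rng.gauss(math.log(1e-3), 0.5)))
           for _ in range(5000)]
mc_mean = sum(samples) / len(samples)
det = transit_time(1e-3)           # plug-in estimate at the median K
print(round(det), round(mc_mean))  # MC mean exceeds the plug-in value (Jensen)
```

The systematic gap between the plug-in and Monte-Carlo estimates is one reason a single deterministic run understates both the spread and the bias of predicted transit times.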

  5. One-pot, two-step synthesis of imidazo[1,2-a]benzimidazoles via a multicomponent [4 + 1] cycloaddition reaction.

    Science.gov (United States)

    Hsiao, Ya-Shan; Narhe, Bharat D; Chang, Ying-Sheng; Sun, Chung-Ming

    2013-10-14

    A one-pot, two-step synthesis of imidazo[1,2-a]benzimidazoles has been achieved by a three-component reaction of 2-aminobenzimidazoles with an aromatic aldehyde and an isocyanide. The condensation of 2-aminobenzimidazole with an aldehyde is run under microwave activation and basic conditions to generate an imine intermediate, which then undergoes [4 + 1] cycloaddition with an isocyanide.

  6. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  7. A study of modelling simplifications in ground vibration predictions for railway traffic at grade

    Science.gov (United States)

    Germonpré, M.; Degrande, G.; Lombaert, G.

    2017-10-01

    Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.

  8. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  9. Predicting Groundwater Chlorine Concentration in Dezful Aquifer Using the Panel Data Model

    Directory of Open Access Journals (Sweden)

    Ghazaleh Hadighanavat

    2015-12-01

    Full Text Available Groundwater resources are of great importance in arid and semi-arid regions due to their ease of access and low extraction costs. Compared to studies conducted on the quantity of groundwater resources, less research has been devoted to groundwater quality. The present study was thus designed and implemented to forecast groundwater chlorine variations in Dezful Plain in Khuzistan Province, Iran. "Panel data" is a regression model that considers variables of different units over time. In this study, it was exploited for the simultaneous prediction of groundwater quality in different wells. For this purpose, meteorological parameters such as rain and ET0 as well as the quality parameters including EC, sodium, calcium, and magnesium were collected from ten wells in the study area on a seasonal basis over a period of 8 years. In the next step, the data thus collected were subjected to different "panel data" regression models including Common Effects, Fixed Effects, and Random Effects. The results showed that the Random Effects Regression Model was best suited for predicting groundwater quality. Moreover, performance indicators (R² = 0.96, RMSE = 2.445) revealed the effectiveness of this method.

  10. A model to predict the permeation of type IV hydrogen tanks

    Energy Technology Data Exchange (ETDEWEB)

    Bayle, Julien; Perreux, Dominique; Chapelle, David; Thiebaud, Frederic [MaHyTec, Dole (France); Nardin, Philippe [Franche Comte Univ. (France)

    2010-07-01

    In the frame of the certification process of the type IV hydrogen storage tanks MaHyTec aims to manufacture, this innovative SME is developing a numerical model dedicated to the study of permeation issues. Such an approach aims at avoiding complicated, time-consuming and expensive testing. Experimental results obtained under real conditions can, moreover, be significantly influenced by the scattering of material properties and liner dimensions. From simple testing on small-size flat membranes, the model makes it possible to predict the gas diffusion flow through the whole structure by means of numerous parameters. At every step, theory can be compared with the results obtained from the samples. This document presents a brief review of the mathematical theory describing gas diffusion and the different aspects of the study for better understanding of the proposed approach. (orig.)

  11. Evaluation of several two-step scoring functions based on linear interaction energy, effective ligand size, and empirical pair potentials for prediction of protein-ligand binding geometry and free energy.

    Science.gov (United States)

    Rahaman, Obaidur; Estrada, Trilce P; Doren, Douglas J; Taufer, Michela; Brooks, Charles L; Armen, Roger S

    2011-09-26

    The performances of several two-step scoring approaches for molecular docking were assessed for their ability to predict binding geometries and free energies. Two new scoring functions designed for "step 2 discrimination" were proposed and compared to our CHARMM implementation of the linear interaction energy (LIE) approach using the Generalized-Born with Molecular Volume (GBMV) implicit solvation model. A scoring function S1 was proposed by considering only "interacting" ligand atoms as the "effective size" of the ligand and extended to an empirical regression-based pair potential S2. The S1 and S2 scoring schemes were trained and 5-fold cross-validated on a diverse set of 259 protein-ligand complexes from the Ligand Protein Database (LPDB). The regression-based parameters for S1 and S2 also demonstrated reasonable transferability in the CSARdock 2010 benchmark using a new data set (NRC HiQ) of diverse protein-ligand complexes. The ability of the scoring functions to accurately predict ligand geometry was evaluated by calculating the discriminative power (DP) of the scoring functions to identify native poses. The parameters for the LIE scoring function with the optimal discriminative power (DP) for geometry (step 1 discrimination) were found to be very similar to the best-fit parameters for binding free energy over a large number of protein-ligand complexes (step 2 discrimination). Reasonable performance of the scoring functions in enrichment of active compounds in four different protein target classes established that the parameters for S1 and S2 provided reasonable accuracy and transferability. Additional analysis was performed to definitively separate scoring function performance from molecular weight effects. This analysis included the prediction of ligand binding efficiencies for a subset of the CSARdock NRC HiQ data set where the number of ligand heavy atoms ranged from 17 to 35. This range of ligand heavy atoms is where improved accuracy of predicted ligand

  12. Development of Shear Capacity Prediction Model for FRP-RC Beam without Web Reinforcement

    Directory of Open Access Journals (Sweden)

    Md. Arman Chowdhury

    2016-01-01

    Full Text Available Available codes and models generally use a partially modified shear design equation, developed earlier for steel-reinforced concrete, for predicting the shear capacity of FRP-RC members. Consequently, the calculated shear capacity shows under- or overestimation. Furthermore, in most models some parameters affecting shear strength are overlooked. In this study, a new and simplified shear capacity prediction model is proposed considering all the parameters. A large database containing 157 experimental results of FRP-RC beams without shear reinforcement is assembled from the published literature. A parametric study is then performed to verify the accuracy of the proposed model. In addition, a comprehensive review of 9 codes and 12 available models published from 1997 to date is carried out for comparison with the proposed model. It is observed that the proposed equation shows the best overall performance compared to all the codes and models within the range of the experimental dataset used.

  13. A Novel Hybrid Model for Drawing Trace Reconstruction from Multichannel Surface Electromyographic Activity.

    Science.gov (United States)

    Chen, Yumiao; Yang, Zhongliang

    2017-01-01

    Recently, several researchers have considered the problem of reconstructing handwriting and other meaningful arm and hand movements from surface electromyography (sEMG). Although much progress has been made, several practical limitations may still affect the clinical applicability of sEMG-based techniques. In this paper, a novel three-step hybrid model of coordinate state transition, sEMG feature extraction and gene expression programming (GEP) prediction is proposed for reconstructing the drawing traces of 12 basic one-stroke shapes from multichannel surface electromyography. Using a specially designed coordinate data acquisition system, we recorded the coordinate data of drawing traces as a time series while 7-channel sEMG signals were recorded. As a widely used time domain feature, the Root Mean Square (RMS) was extracted over an analysis window. Preliminary reconstruction models can then be established by GEP, and the original drawing traces can be approximated by the constructed prediction model. Applying the three-step hybrid model, we were able to convert seven channels of EMG activity recorded from the arm muscles into smooth reconstructions of drawing traces. The hybrid model can yield a mean accuracy of 74% in a within-group design (one set of prediction models for all shapes) and 86% in a between-group design (one separate set of prediction models for each shape), averaged over the reconstructed x and y coordinates. It can be concluded that the proposed three-step hybrid model is a feasible way to improve the reconstruction of drawing traces from sEMG.
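
    The RMS feature used in the second step is a standard windowed computation. A minimal sketch in Python (the window length and hop size here are illustrative assumptions, not values from the paper):

```python
import math

def windowed_rms(signal, window_len, hop):
    """Root Mean Square of a 1-D EMG channel over sliding analysis windows."""
    out = []
    for start in range(0, len(signal) - window_len + 1, hop):
        window = signal[start:start + window_len]
        out.append(math.sqrt(sum(x * x for x in window) / window_len))
    return out

# One RMS sequence per channel would feed the GEP prediction model.
features = windowed_rms([3.0, 4.0, 3.0, 4.0], window_len=2, hop=2)
```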

  14. Clinical Decision Support Model to Predict Occlusal Force in Bruxism Patients.

    Science.gov (United States)

    Thanathornwong, Bhornsawan; Suebnukarn, Siriwan

    2017-10-01

    The aim of this study was to develop a decision support model for the prediction of occlusal force from the size and color of articulating paper markings in bruxism patients. We used the information from the datasets of 30 bruxism patients in which digital measurements of the size and color of articulating paper markings (12-µm Hanel; Coltene/Whaledent GmbH, Langenau, Germany) on canine protected hard stabilization splints were measured in pixels (P) and in red (R), green (G), and blue (B) values using Adobe Photoshop software (Adobe Systems, San Jose, CA, USA). The occlusal force (F) was measured using T-Scan III (Tekscan Inc., South Boston, MA, USA). The multiple regression equation was applied to predict F from the P and RGB. Model evaluation was performed using the datasets from 10 new patients. The patient's occlusal force measured by T-Scan III was used as a 'gold standard' to compare with the occlusal force predicted by the multiple regression model. The results demonstrate that the correlation between the occlusal force and the pixels and RGB of the articulating paper markings was positive (F = 1.62×P + 0.07×R − 0.08×G + 0.08×B + 4.74; R² = 0.34). There was a high degree of agreement between the occlusal force of the patient measured using T-Scan III and the occlusal force predicted by the model (kappa value = 0.82). The results obtained demonstrate that the multiple regression model can predict the occlusal force using the digital values for the size and color of the articulating paper markings in bruxism patients.
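
    The fitted equation reported in the abstract can be transcribed directly; a small sketch (the input values below are made up, and the expected units - marking size in pixels, RGB in 0-255 - are an assumption):

```python
def predict_occlusal_force(p, r, g, b):
    """Occlusal force predicted by the reported regression:
    F = 1.62*P + 0.07*R - 0.08*G + 0.08*B + 4.74  (R^2 = 0.34)."""
    return 1.62 * p + 0.07 * r - 0.08 * g + 0.08 * b + 4.74

# Hypothetical marking: 10 pixels, mid-gray color (R = G = B = 128).
f = predict_occlusal_force(p=10.0, r=128.0, g=128.0, b=128.0)
```

    Note that with equal G and B values the −0.08×G and +0.08×B terms cancel, so for gray markings the prediction depends only on size and the red channel.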

  15. Clinical Decision Support Model to Predict Occlusal Force in Bruxism Patients

    Science.gov (United States)

    Thanathornwong, Bhornsawan

    2017-01-01

    Objectives The aim of this study was to develop a decision support model for the prediction of occlusal force from the size and color of articulating paper markings in bruxism patients. Methods We used the information from the datasets of 30 bruxism patients in which digital measurements of the size and color of articulating paper markings (12-µm Hanel; Coltene/Whaledent GmbH, Langenau, Germany) on canine protected hard stabilization splints were measured in pixels (P) and in red (R), green (G), and blue (B) values using Adobe Photoshop software (Adobe Systems, San Jose, CA, USA). The occlusal force (F) was measured using T-Scan III (Tekscan Inc., South Boston, MA, USA). The multiple regression equation was applied to predict F from the P and RGB. Model evaluation was performed using the datasets from 10 new patients. The patient's occlusal force measured by T-Scan III was used as a ‘gold standard’ to compare with the occlusal force predicted by the multiple regression model. Results The results demonstrate that the correlation between the occlusal force and the pixels and RGB of the articulating paper markings was positive (F = 1.62×P + 0.07×R –0.08×G + 0.08×B + 4.74; R2 = 0.34). There was a high degree of agreement between the occlusal force of the patient measured using T-Scan III and the occlusal force predicted by the model (kappa value = 0.82). Conclusions The results obtained demonstrate that the multiple regression model can predict the occlusal force using the digital values for the size and color of the articulating paper markings in bruxism patients. PMID:29181234

  16. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data indicating that small amounts of
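
    The simple averaging models the abstract refers to can be sketched as an interval-of-day baseline: predict each 15-minute interval as the average of that same interval over past days. A minimal sketch (function name and toy data are illustrative, not from the paper):

```python
def interval_average_forecast(history, intervals_per_day=96):
    """Baseline forecaster: predict each interval of the next day as the
    average of that same interval over all complete past days.
    `history` is a flat list of consumption readings, day after day."""
    n_days = len(history) // intervals_per_day
    forecast = []
    for i in range(intervals_per_day):
        vals = [history[d * intervals_per_day + i] for d in range(n_days)]
        forecast.append(sum(vals) / n_days)
    return forecast

# Two days of history, shortened to 4 "intervals" per day for brevity:
pred = interval_average_forecast([1, 2, 3, 4, 3, 4, 5, 6], intervals_per_day=4)
```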

  17. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167

  18. Learning to Predict Chemical Reactions

    Science.gov (United States)

    Kayala, Matthew A.; Azencott, Chloé-Agathe; Chen, Jonathan H.

    2011-01-01

    Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problem can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles respectively are not high-throughput, are not generalizable or scalable, or lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry dataset consisting of 1630 full multi-step reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. From machine learning, we pose identifying productive mechanistic steps as a statistical ranking (information retrieval) problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom-level reactivity filters to prune 94.00% of non-productive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered. Furthermore, the system

  19. Hum-mPLoc 3.0: prediction enhancement of human protein subcellular localization through modeling the hidden correlations of gene ontology and functional domain features.

    Science.gov (United States)

    Zhou, Hang; Yang, Yang; Shen, Hong-Bin

    2017-03-15

    Protein subcellular localization prediction has been an important research topic in computational biology over the last decade. Various automatic methods have been proposed to predict locations for large scale protein datasets, where statistical machine learning algorithms are widely used for model construction. A key step in these predictors is encoding the amino acid sequences into feature vectors. Many studies have shown that features extracted from biological domains, such as gene ontology and functional domains, can be very useful for improving the prediction accuracy. However, domain knowledge usually results in redundant features and high-dimensional feature spaces, which may degrade the performance of machine learning models. In this paper, we propose a new amino acid sequence-based human protein subcellular location prediction approach Hum-mPLoc 3.0, which covers 12 human subcellular localizations. The sequences are represented by multi-view complementary features, i.e. context vocabulary annotation-based gene ontology (GO) terms, peptide-based functional domains, and residue-based statistical features. To systematically reflect the structural hierarchy of the domain knowledge bases, we propose a novel feature representation protocol denoted as HCM (Hidden Correlation Modeling), which will create more compact and discriminative feature vectors by modeling the hidden correlations between annotation terms. Experimental results on four benchmark datasets show that HCM improves prediction accuracy by 5-11% and F1 by 8-19% compared with conventional GO-based methods. A large-scale application of Hum-mPLoc 3.0 on the whole human proteome reveals protein co-localization preferences in the cell. www.csbio.sjtu.edu.cn/bioinf/Hum-mPLoc3/. hbshen@sjtu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  20. Retaining K-12 Online Teachers: A Predictive Model for K-12 Online Teacher Turnover

    Science.gov (United States)

    Larkin, Ingle M.; Lokey-Vega, Anissa; Brantley-Dias, Laurie

    2018-01-01

    The purpose of this study was to measure and explore factors influencing K-12 online teachers' turnover intentions, with job satisfaction and organizational commitment serving as moderating variables. Using Fishbein and Ajzen's Theory of Reasoned Action and Planned Behavior (1975), this study was conducted in public, private, charter, for-profit,…

  1. Steps That Count: Physical Activity Recommendations, Brisk Walking, and Steps Per Minute-How Do They Relate?

    NARCIS (Netherlands)

    Pillay, J.; Kolbe-Alexander, T.L.; Proper, K.I.; van Mechelen, W.; Lambert, E.V.

    2014-01-01

    Background: Brisk walking is recommended as a form of health-enhancing physical activity. This study determines the steps/minute rate corresponding to self-paced brisk walking (SPBW); a predicted steps/minute rate for moderate physical activity (MPA) and a comparison of the 2 findings. Methods: A

  2. Development of a disease risk prediction model for downy mildew (Peronospora sparsa) in boysenberry.

    Science.gov (United States)

    Kim, Kwang Soo; Beresford, Robert M; Walter, Monika

    2014-01-01

    Downy mildew caused by Peronospora sparsa has resulted in serious production losses in boysenberry (Rubus hybrid), blackberry (Rubus fruticosus), and rose (Rosa sp.) in New Zealand, Mexico, and the United States and the United Kingdom, respectively. Development of a model to predict downy mildew risk would facilitate development and implementation of a disease warning system for efficient fungicide spray application in the crops affected by this disease. Because detailed disease observation data were not available, a two-step approach was applied to develop an empirical risk prediction model for P. sparsa. To identify the weather patterns associated with a high incidence of downy mildew berry infections (dryberry disease) and derive parameters for the empirical model, classification and regression tree (CART) analysis was performed. Then, fuzzy sets were applied to develop a simple model to predict the disease risk based on the parameters derived from the CART analysis. High-risk seasons with a boysenberry downy mildew incidence >10% coincided with months when the number of hours per day with temperature of 15 to 20°C averaged >9.8 over the month and the number of days with rainfall in the month was >38.7%. The Fuzzy Peronospora Sparsa (FPS) model, developed using fuzzy sets, defined relationships among high-risk events, temperature, and rainfall conditions. In a validation study, the FPS model provided correct identification of both seasons with high downy mildew risk for boysenberry, blackberry, and rose and low risk in seasons when no disease was observed. As a result, the FPS model had a significant degree of agreement between predicted and observed risks of downy mildew for those crops (P = 0.002).
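
    The actual FPS model uses fuzzy sets, but its two CART-derived monthly conditions can be illustrated as a crisp check (a deliberate simplification; the function name and inputs are assumptions):

```python
def high_downy_mildew_risk(hours_15_20C_per_day, rain_days, days_in_month):
    """Crisp check of the two monthly conditions associated with high-risk
    seasons (downy mildew incidence > 10%) in the CART analysis:
    - mean hours/day with temperature 15-20 C over the month > 9.8
    - share of days in the month with rainfall > 38.7%"""
    mean_hours = sum(hours_15_20C_per_day) / len(hours_15_20C_per_day)
    rain_fraction = rain_days / days_in_month
    return mean_hours > 9.8 and rain_fraction > 0.387

# Hypothetical month: warm (10.5 h/day in range) and wet (14 of 30 days).
risky = high_downy_mildew_risk([10.5] * 30, rain_days=14, days_in_month=30)
```

    The fuzzy-set formulation in the paper replaces these hard cut-offs with graded membership, so months near the thresholds receive intermediate risk rather than a yes/no answer.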

  3. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models that can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models that consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60.0% and 22.9% respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones
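
    Of the weighting techniques mentioned, rank sum is the simplest; a sketch under its usual formulation (the study's actual parameter ranks are not reported, so the three-parameter example below is hypothetical):

```python
def rank_sum_weights(ranks):
    """Rank-sum weighting: a parameter ranked r (1 = most important) out of
    n parameters gets weight (n - r + 1) / sum_j (n - r_j + 1)."""
    n = len(ranks)
    total = sum(n - r + 1 for r in ranks)
    return [(n - r + 1) / total for r in ranks]

# Hypothetical ranking of three of the nine parameters, e.g.
# slope (1st), lithology (2nd), land use (3rd):
w = rank_sum_weights([1, 2, 3])
```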

  4. Minimum Performance on Clinical Tests of Physical Function to Predict Walking 6,000 Steps/Day in Knee Osteoarthritis: An Observational Study.

    Science.gov (United States)

    Master, Hiral; Thoma, Louise M; Christiansen, Meredith B; Polakowski, Emily; Schmitt, Laura A; White, Daniel K

    2018-07-01

    Evidence of physical function difficulties, such as difficulty rising from a chair, may limit daily walking for people with knee osteoarthritis (OA). The purpose of this study was to identify minimum performance thresholds on clinical tests of physical function predictive of walking ≥6,000 steps/day. This benchmark is known to discriminate people with knee OA who develop functional limitation over time from those who do not. Using data from the Osteoarthritis Initiative, we quantified daily walking as average steps/day from an accelerometer (Actigraph GT1M) worn for ≥10 hours/day over 1 week. Physical function was quantified using 3 performance-based clinical tests: 5 times sit-to-stand test, walking speed (tested over 20 meters), and 400-meter walk test. To identify minimum performance thresholds for daily walking, we calculated physical function values corresponding to high specificity (80-95%) to predict walking ≥6,000 steps/day. Among 1,925 participants (mean ± SD age 65.1 ± 9.1 years, mean ± SD body mass index 28.4 ± 4.8 kg/m², and 55% female) with valid accelerometer data, 54.9% walked ≥6,000 steps/day. High-specificity thresholds of physical function for walking ≥6,000 steps/day ranged from 11.4 to 14.0 seconds on the 5 times sit-to-stand test, from 1.13 to 1.26 meters/second for walking speed, or from 315 to 349 seconds on the 400-meter walk test. Not meeting these minimum performance thresholds on clinical tests of physical function may indicate inadequate physical ability to walk ≥6,000 steps/day for people with knee OA. Rehabilitation may be indicated to address underlying impairments limiting physical function. © 2017, American College of Rheumatology.
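
    The reported threshold ranges can be turned into a simple screening check. The sketch below uses the least strict end of each range; the choice of cut-point within each reported range, and the function name, are assumptions for illustration:

```python
def meets_minimum_function(sit_to_stand_s, gait_speed_ms, walk_400m_s):
    """Check performance against the lenient end of the reported
    high-specificity thresholds for walking >= 6,000 steps/day."""
    return (sit_to_stand_s <= 14.0      # 5x sit-to-stand: 11.4-14.0 s
            and gait_speed_ms >= 1.13   # 20-m walking speed: 1.13-1.26 m/s
            and walk_400m_s <= 349.0)   # 400-m walk: 315-349 s

# Hypothetical patient: 12.0 s sit-to-stand, 1.20 m/s, 330 s over 400 m.
ok = meets_minimum_function(12.0, 1.20, 330.0)
```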

  5. Genomic prediction for Nordic Red Cattle using one-step and selection index blending

    DEFF Research Database (Denmark)

    Guosheng, Su; Madsen, Per; Nielsen, Ulrik Sander

    2012-01-01

    This study investigated the accuracy of direct genomic breeding values (DGV) using a genomic BLUP model, genomic enhanced breeding values (GEBV) using a one-step blending approach, and GEBV using a selection index blending approach for 15 traits of Nordic Red Cattle. The data comprised 6,631 bulls...... genotyped and nongenotyped bulls for one-step blending, and to scale DGV and its expected reliability in the selection index blending. Weighting (scaling) factors had a small influence on reliabilities of GEBV, but a large influence on the variation of GEBV. Based on the validation analyses, averaged over...... the 15 traits, the reliability of DGV for bulls without daughter records was 11.0 percentage points higher than the reliability of conventional pedigree index. Further gain of 0.9 percentage points was achieved by combining information from conventional pedigree index using the selection index blending...

  6. The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns

    Science.gov (United States)

    Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo

    Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling in seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.

  7. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.

  8. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
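
    The core measurement in studies 1-3, the actual rate at which one emotion follows another in experience-sampling data, reduces to a row-normalized count matrix over consecutive reports. A minimal sketch (the emotion labels and the sequence are invented for illustration, not the study's data):

```python
import numpy as np

def transition_matrix(seq, states):
    """Estimate emotion-transition probabilities from an experience-sampling
    sequence: count state_t -> state_{t+1} pairs, then row-normalize."""
    idx = {s: i for i, s in enumerate(states)}
    M = np.zeros((len(states), len(states)))
    for a, b in zip(seq, seq[1:]):
        M[idx[a], idx[b]] += 1
    return M / M.sum(axis=1, keepdims=True)

# Toy sequence: calm tends to persist, stress tends to follow stress
seq = ["calm", "calm", "stress", "stress", "calm", "calm", "calm", "stress"]
P = transition_matrix(seq, ["calm", "stress"])
```

    Each row of `P` is a conditional distribution over the next emotion; participants' likelihood ratings can then be compared against these empirical rows.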

  10. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best overall model for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise for the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression models.
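
    The model-based clustering underlying this approach can be sketched with a plain EM fit of a two-component Poisson mixture; this minimal version omits the concomitant-variable and zero-inflation extensions and runs on synthetic counts:

```python
import numpy as np
from math import lgamma

def poisson_mixture_em(x, n_iter=200):
    """EM for a two-component Poisson mixture: alternately compute component
    responsibilities (E-step) and update mixing weights and rates (M-step)."""
    lam = np.array([x.mean() * 0.5, x.mean() * 1.5])   # initial rates
    pi = np.array([0.5, 0.5])                          # initial mixing weights
    log_fact = np.array([lgamma(v + 1.0) for v in x])  # log(x!)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each count
        log_p = (np.log(pi)[None, :] + x[:, None] * np.log(lam)[None, :]
                 - lam[None, :] - log_fact[:, None])
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and component rates
        pi = r.mean(axis=0)
        lam = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    return pi, lam

# Synthetic counts from a low-risk (rate 2) and a high-risk (rate 10) group:
rng = np.random.default_rng(1)
x = np.concatenate([rng.poisson(2, 300), rng.poisson(10, 200)])
pi, lam = poisson_mixture_em(x.astype(float))
```

    The fitted rates recover the two risk groups, which is the "clusters individuals into high- or low-risk categories" step; a concomitant-variable model would additionally let `pi` depend on covariates.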

  12. Influence of step length and landing pattern on patellofemoral joint kinetics during running.

    Science.gov (United States)

    Willson, J D; Ratcliff, O M; Meardon, S A; Willy, R W

    2015-12-01

    Elevated patellofemoral joint kinetics during running may contribute to patellofemoral joint symptoms. The purpose of this study was to test for independent effects of foot strike pattern and step length on patellofemoral joint kinetics while running. Effects were tested relative to individual steps and also taking into account the number of steps required to run a kilometer with each step length. Patellofemoral joint reaction force and stress were estimated in 20 participants running at their preferred speed. Participants ran using a forefoot strike and rearfoot strike pattern during three different step length conditions: preferred step length, long (+10%) step length, and short (-10%) step length. Patellofemoral kinetics was estimated using a biomechanical model of the patellofemoral joint that accounted for cocontraction of the knee flexors and extensors. We observed independent effects of foot strike pattern and step length. Patellofemoral joint kinetics per step was 10-13% less during forefoot strike conditions and 15-20% less with a shortened step length. Patellofemoral joint kinetics per kilometer decreased 12-13% using a forefoot strike pattern and 9-12% with a shortened step length. To the extent that patellofemoral joint kinetics contribute to symptoms among runners, these running modifications may be advisable for runners with patellofemoral pain. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
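
    The distinction between per-step and per-kilometre kinetics is simple accounting: a shorter step lowers the load per step but raises the number of steps needed to cover a kilometre. A back-of-envelope sketch with invented magnitudes chosen inside the reported ranges:

```python
# Hypothetical illustration of per-step vs per-kilometre accounting.
preferred_step_m = 1.0            # assumed preferred step length, metres
load_per_step = 100.0             # arbitrary units of PFJ load per step

short_step_m = 0.9 * preferred_step_m          # 10% shorter step
short_load_per_step = 0.82 * load_per_step     # ~18% less per step (mid-range of 15-20%)

steps_per_km = 1000 / preferred_step_m
short_steps_per_km = 1000 / short_step_m       # ~11% more steps per km

per_km = load_per_step * steps_per_km
short_per_km = short_load_per_step * short_steps_per_km
reduction = 1 - short_per_km / per_km          # net per-km reduction, ~9%
```

    The extra steps eat into the per-step benefit, which is why the per-kilometre reductions (9-12%) are smaller than the per-step reductions (15-20%).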

  13. Microfluidic step-emulsification in axisymmetric geometry.

    Science.gov (United States)

    Chakraborty, I; Ricouvier, J; Yazhgur, P; Tabeling, P; Leshansky, A M

    2017-10-25

    Biphasic step-emulsification (Z. Li et al., Lab Chip, 2015, 15, 1023) is a promising microfluidic technique for high-throughput production of μm and sub-μm highly monodisperse droplets. The step-emulsifier consists of a shallow (Hele-Shaw) microchannel operating with two co-flowing immiscible liquids and an abrupt expansion (i.e., step) to a deep and wide reservoir. Under certain conditions the confined stream of the disperse phase, engulfed by the co-flowing continuous phase, breaks into small highly monodisperse droplets at the step. Theoretical investigation of the corresponding hydrodynamics is complicated due to the complex geometry of the planar device, calling for numerical approaches. However, direct numerical simulations of the three dimensional surface-tension-dominated biphasic flows in confined geometries are computationally expensive. In the present paper we study a model problem of axisymmetric step-emulsification. This setup consists of a stable core-annular biphasic flow in a cylindrical capillary tube connected co-axially to a reservoir tube of a larger diameter through a sudden expansion mimicking the edge of the planar step-emulsifier. We demonstrate that the axisymmetric setup exhibits similar regimes of droplet generation to the planar device. A detailed parametric study of the underlying hydrodynamics is feasible via inexpensive (two dimensional) simulations owing to the axial symmetry. The phase diagram quantifying the different regimes of droplet generation in terms of governing dimensionless parameters is presented. We show that in qualitative agreement with experiments in planar devices, the size of the droplets generated in the step-emulsification regime is independent of the capillary number and almost insensitive to the viscosity ratio. These findings confirm that the step-emulsification regime is solely controlled by surface tension. The numerical predictions are in excellent agreement with in-house experiments with the axisymmetric

  14. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed some performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, the multivariate nonlinear regression (MNLR) model, the artificial neural network (ANN) model, and the Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to incorporate the advantages and disadvantages of the different models to obtain better accuracy.
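
    Of the three, the MC model is the simplest to sketch: pavement condition is discretized into states and a one-year transition probability matrix is powered forward. The matrix below is purely illustrative, not calibrated to any survey data:

```python
import numpy as np

# Hypothetical 4-state pavement condition scale (state 0 = good ... 3 = failed).
# Row i gives one-year transition probabilities from state i; pavements only
# stay in place or deteriorate, so the matrix is upper-triangular.
P = np.array([
    [0.80, 0.15, 0.04, 0.01],
    [0.00, 0.75, 0.20, 0.05],
    [0.00, 0.00, 0.70, 0.30],
    [0.00, 0.00, 0.00, 1.00],
])

def predict_state_distribution(initial, years):
    """Propagate the condition-state distribution `years` steps forward."""
    return initial @ np.linalg.matrix_power(P, years)

# A section that starts in perfect condition:
start = np.array([1.0, 0.0, 0.0, 0.0])
after_5 = predict_state_distribution(start, 5)
```

    This also shows the limitation the abstract mentions: the states come from visual inspections, so nothing in `P` ties deterioration to physical parameters.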

  15. Neural and Hybrid Modeling: An Alternative Route to Efficiently Predict the Behavior of Biotechnological Processes Aimed at Biofuels Obtainment

    Directory of Open Access Journals (Sweden)

    Stefano Curcio

    2014-01-01

    Full Text Available The present paper was aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, might efficiently predict the behavior of two biotechnological processes designed for the obtainment of second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step for the obtainment of biodiesel, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was shown that the proposed modeling approaches provided very accurate predictions of system behavior. Both neural network and hybrid modeling definitely represented a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, of the true kinetic mechanisms, and of the transport phenomena involved in biotechnological processes was difficult to achieve.

  16. A sandpile model of grain blocking and consequences for sediment dynamics in step-pool streams

    Science.gov (United States)

    Molnar, P.

    2012-04-01

    Coarse grains (cobbles to boulders) are set in motion in steep mountain streams by floods with sufficient energy to erode the particles locally and transport them downstream. During transport, grains are often blocked and form width-spanning structures called steps, separated by pools. The step-pool system is a transient, self-organizing and self-sustaining structure. The temporary storage of sediment in steps and the release of that sediment in avalanche-like pulses when steps collapse lead to a complex nonlinear threshold-driven dynamics in sediment transport which has been observed in laboratory experiments (e.g., Zimmermann et al., 2010) and in the field (e.g., Turowski et al., 2011). The basic question in this paper is whether the emergent statistical properties of sediment transport in step-pool systems may be linked to the transient state of the bed, i.e. sediment storage and morphology, and to the dynamics in sediment input. The hypothesis is that this state, in which sediment transporting events due to the collapse and rebuilding of steps of all sizes occur, is analogous to a critical state in self-organized open dissipative dynamical systems (Bak et al., 1988). To explore the process of self-organization, a cellular automaton sandpile model is used to simulate the processes of grain blocking and hydraulically-driven step collapse in a 1-d channel. Particles are injected at the top of the channel and are allowed to travel downstream based on various local threshold rules, with the travel distance drawn from a chosen probability distribution. In sandpile modelling this is a simple 1-d limited non-local model; however, it has been shown to have nontrivial dynamical behaviour (Kadanoff et al., 1989), and it captures the essence of stochastic sediment transport in step-pool systems. The numerical simulations are used to illustrate the differences between input and output sediment transport rates, mainly focussing on the magnification of intermittency and
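
    A 1-d limited non-local sandpile of this kind can be sketched in a few lines; the threshold, jump distribution and channel size below are arbitrary choices for illustration, not the paper's calibration:

```python
import random

def simulate(n_cells=50, n_grains=2000, threshold=4, max_jump=5, seed=1):
    """Toy 1-d sandpile of grain blocking: one grain enters at the upstream
    cell per time step; whenever a cell stores more than `threshold` grains
    (a 'step' collapses), a grain moves a random distance downstream,
    possibly destabilizing other cells (an avalanche). Grains passing the
    last cell count as sediment output."""
    rng = random.Random(seed)
    bed = [0] * n_cells
    output = []                    # grains exported per injected grain
    for _ in range(n_grains):
        bed[0] += 1
        exported = 0
        unstable = [0]
        while unstable:
            i = unstable.pop()
            while bed[i] > threshold:
                bed[i] -= 1
                j = i + rng.randint(1, max_jump)   # random travel distance
                if j >= n_cells:
                    exported += 1                  # grain leaves the reach
                else:
                    bed[j] += 1
                    unstable.append(j)
        output.append(exported)
    return bed, output

bed, output = simulate()
```

    Although exactly one grain enters per time step, the exported grains arrive in intermittent pulses of varying size, the avalanche-like behaviour the abstract refers to.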

  17. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  18. Reranking candidate gene models with cross-species comparison for improved gene prediction

    Directory of Open Access Journals (Sweden)

    Pereira Fernando CN

    2008-10-01

    Full Text Available Abstract Background Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely-related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models.
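
    The post-processing step amounts to rescoring each locus's K-best list with a weighted global score. A minimal sketch; the feature names and weights below are illustrative stand-ins, not Evigan's actual evidence set:

```python
def rerank(candidates, weights):
    """Rerank K-best candidate gene models by a global score combining the
    gene finder's own score with cross-species comparison features.
    `candidates` is a list of dicts holding a base score and per-feature
    ortholog-similarity scores (names here are hypothetical)."""
    def score(c):
        return sum(weights[k] * c[k] for k in weights)
    return sorted(candidates, key=score, reverse=True)

# Two competing models for one locus: the finder slightly prefers model A,
# but model B matches the putative ortholog much better.
candidates = [
    {"id": "A", "finder_score": 0.90, "ortholog_coding_sim": 0.40, "splice_site_match": 0.50},
    {"id": "B", "finder_score": 0.85, "ortholog_coding_sim": 0.95, "splice_site_match": 0.90},
]
weights = {"finder_score": 1.0, "ortholog_coding_sim": 0.5, "splice_site_match": 0.3}
best = rerank(candidates, weights)[0]["id"]
```

    Because the global score is just a weighted sum over features, adding a new line of evidence (e.g. signal peptide occurrence) only means adding a key to each candidate and a weight.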

  19. Dynamic mechanistic modeling of the multienzymatic one-pot reduction of dehydrocholic acid to 12-keto ursodeoxycholic acid with competing substrates and cofactors.

    Science.gov (United States)

    Sun, Boqiao; Hartl, Florian; Castiglione, Kathrin; Weuster-Botz, Dirk

    2015-01-01

    Ursodeoxycholic acid (UDCA) is a bile acid which is used as a pharmaceutical for the treatment of several diseases, such as cholesterol gallstones, primary sclerosing cholangitis or primary biliary cirrhosis. A potential chemoenzymatic synthesis route of UDCA comprises the two-step reduction of dehydrocholic acid to 12-keto-ursodeoxycholic acid (12-keto-UDCA), which can be conducted in a multienzymatic one-pot process using 3α-hydroxysteroid dehydrogenase (3α-HSDH), 7β-hydroxysteroid dehydrogenase (7β-HSDH), and glucose dehydrogenase (GDH) with glucose as cosubstrate for the regeneration of cofactor. Here, we present a dynamic mechanistic model of this one-pot reduction which involves three enzymes, four different bile acids, and two different cofactors, each with different oxidation states. In addition, every enzyme faces two competing substrates, whereas each bile acid and cofactor is formed or converted by two different enzymes. First, the kinetic mechanisms of both HSDH were identified to follow an ordered bi-bi mechanism with EBQ-type uncompetitive substrate inhibition. Rate equations were then derived for this mechanism and for mechanisms describing competing substrates. After the estimation of the model parameters of each enzyme independently by progress curve analyses, the full process model of a simple batch-process was established by coupling rate equations and mass balances. Validation experiments of the one-pot multienzymatic batch process revealed high prediction accuracy of the process model and a model analysis offered important insight into the identification of optimum reaction conditions. © 2015 American Institute of Chemical Engineers.
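
    The structure of such a process model, coupled rate equations plus mass balances integrated over batch time, can be illustrated with a heavily simplified stand-in: irreversible Michaelis-Menten kinetics (not the paper's ordered bi-bi mechanism with substrate inhibition) for two sequential reductions sharing one cofactor pool, with invented parameters and plain Euler integration:

```python
def batch_sim(t_end=10.0, dt=1e-3):
    """Heavily simplified batch model of a two-step one-pot reduction
    S -> I -> P sharing a cofactor C that a third enzyme regenerates from
    its oxidized form Cox (glucose assumed non-limiting). All parameters
    are illustrative, not fitted."""
    vmax1, km1 = 5.0, 0.5    # first reduction (loosely, the 3a-HSDH step)
    vmax2, km2 = 3.0, 0.5    # second reduction (loosely, the 7b-HSDH step)
    vmax3, km3 = 20.0, 0.2   # cofactor regeneration (loosely, the GDH step)
    kmc = 0.05               # cofactor Km shared by both reductions
    S, I, P = 10.0, 0.0, 0.0     # bile-acid species, mM
    C, Cox = 0.5, 0.0            # reduced / oxidized cofactor, mM
    for _ in range(int(t_end / dt)):    # explicit Euler integration
        r1 = vmax1 * S / (km1 + S) * C / (kmc + C)
        r2 = vmax2 * I / (km2 + I) * C / (kmc + C)
        r3 = vmax3 * Cox / (km3 + Cox)
        S, I, P = S - r1 * dt, I + (r1 - r2) * dt, P + r2 * dt
        C, Cox = C + (r3 - r1 - r2) * dt, Cox + (r1 + r2 - r3) * dt
    return S, I, P, C + Cox

S, I, P, pool = batch_sim()   # bile-acid mass and the cofactor pool are conserved
```

    The mass balances guarantee that total bile acid and total cofactor are conserved, which is a useful sanity check on any such model before fitting it to progress curves.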

  20. Predictive risk modelling in the Spanish population: a cross-sectional study.

    Science.gov (United States)

    Orueta, Juan F; Nuño-Solinis, Roberto; Mateos, Maider; Vergara, Itziar; Grandes, Gonzalo; Esnaola, Santiago

    2013-07-09

    An increase in chronic conditions is currently the greatest threat to human health and to the sustainability of health systems. Risk adjustment systems may enable population stratification programmes to be developed and become instrumental in implementing new models of care. The objectives of this study are to evaluate the capability of ACG-PM, DCG-HCC and CRG-based models to predict healthcare costs and identify patients that will be high consumers and to analyse changes to predictive capacity when socio-economic variables are added. This cross-sectional study used data of all Basque Country citizens over 14 years of age (n = 1,964,337) collected in a period of 2 years. Data from the first 12 months (age, sex, area deprivation index, diagnoses, procedures, prescriptions and previous cost) were used to construct the explanatory variables. The ability of models to predict healthcare costs in the following 12 months was assessed using the coefficient of determination and to identify the patients with highest costs by means of receiver operating characteristic (ROC) curve analysis. The coefficients of determination ranged from 0.18 to 0.21 for diagnosis-based models, 0.17-0.18 for prescription-based and 0.21-0.24 for the combination of both. The observed area under the ROC curve was 0.78-0.86 (identifying patients with a cost higher than P-95) and 0.83-0.90 (P-99). The values of the DCG-HCC models are slightly higher and those of the CRG models are lower, although prescription information could not be used in the latter. On adding previous cost data, differences between the three systems decrease appreciably. Inclusion of the deprivation index led to only marginal improvements in explanatory power. The case-mix systems developed in the USA can be useful in a publicly financed healthcare system with universal coverage to identify people at risk of high health resource consumption and whose situation is potentially preventable through proactive interventions.

  1. The importance of time-stepping errors in ocean models

    Science.gov (United States)

    Williams, P. D.

    2011-12-01

    Many ocean models use leapfrog time stepping. The Robert-Asselin (RA) filter is usually applied after each leapfrog step, to control the computational mode. However, it will be shown in this presentation that the RA filter generates very large amounts of numerical diapycnal mixing. In some ocean models, the numerical diapycnal mixing from the RA filter is as large as the physical diapycnal mixing. This lowers our confidence in the fidelity of the simulations. In addition to the above problem, the RA filter also damps the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the RA filter does not conserve the mean state, averaged over the three time slices on which it operates. The presenter has recently proposed a simple modification to the RA filter, which does conserve the three-time-level mean state. The modified filter has become known as the Robert-Asselin-Williams (RAW) filter. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the numerical damping of the physical solution and increases the amplitude accuracy by two orders, yielding third-order accuracy. The phase accuracy is unaffected and remains second-order. The RAW filter can easily be incorporated into existing models of the ocean, typically via the insertion of just a single line of code. Better simulations are obtained, at almost no additional computational expense. Results will be shown from recent implementations of the RAW filter in various ocean models. For example, in the UK Met Office Hadley Centre ocean model, sea-surface temperature and sea-ice biases in the North Atlantic Ocean are found to be reduced. These improvements are encouraging for the use of the RAW filter in other ocean models.
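
    The RA-to-RAW modification really is a near one-line change. A self-contained sketch on the oscillation equation dz/dt = iωz (a standard test problem, not the Met Office code) shows the RA filter's amplitude damping and its removal by the RAW filter:

```python
import cmath

def leapfrog(omega=1.0, dt=0.2, steps=500, nu=0.2, alpha=0.53):
    """Leapfrog integration of dz/dt = i*omega*z with the Robert-Asselin-
    Williams (RAW) filter. alpha=1.0 recovers the classical RA filter;
    alpha=0.53 is a typical RAW value. The exact solution keeps |z| = 1."""
    f = lambda z: 1j * omega * z
    z_prev = 1.0 + 0.0j                 # exact value at t = 0
    z_now = cmath.exp(1j * omega * dt)  # exact value at t = dt
    for _ in range(steps):
        z_next = z_prev + 2.0 * dt * f(z_now)           # leapfrog step
        d = 0.5 * nu * (z_prev - 2.0 * z_now + z_next)  # filter displacement
        z_now += alpha * d             # classical RA update
        z_next += (alpha - 1.0) * d    # Williams correction (zero when alpha=1)
        z_prev, z_now = z_now, z_next
    return abs(z_now)

ra = leapfrog(alpha=1.0)    # RA filter: amplitude decays noticeably
raw = leapfrog(alpha=0.53)  # RAW filter: amplitude stays much closer to 1
```

    The Williams correction is the single extra line: it restores the three-time-level mean that the RA update alone fails to conserve, which is exactly why the damping disappears.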

  2. Predicting plant distribution in a heterogeneous Alpine landscape: does soil matter?

    Science.gov (United States)

    Buri, Aline; Cianfrani, Carmen; Pradervand, Jean-Nicolas; Guisan, Antoine

    2016-04-01

    Topographic and climatic factors are usually used to predict plant distribution because they are known to explain species presence or absence. Soil properties have been widely shown to influence plant growth and distribution. However, they are rarely taken into account as predictors in plant species distribution models (SDMs) of edaphically heterogeneous landscapes. When they are, interpolation techniques are typically used to project soil factors in space. In a heterogeneous landscape, such as the Alps, where soil properties change abruptly as a function of environmental conditions over short distances, interpolation techniques require huge quantities of samples to be efficient. This is costly and time consuming, and introduces more error than a predictive approach for an equivalent number of samples. In this study we aimed to assess whether soil properties can be generalized over entire mountainous geographic extents and can improve predictions of plant distributions over traditional topo-climatic predictors. First, we used a predictive approach to map two soil properties based on field measurements in the western Swiss Alps region: the soil pH and the ratio of stable isotopes 13C/12C (called δ13CSOM). We used ensemble forecasting techniques combining several predictive algorithms to build models of the geographic variation in the values of both soil properties and projected them over the entire study area. As predictive factors, we employed very high resolution topo-climatic data. In a second step, the output maps from the previous task were used as input for regional vegetation models. We integrated the predicted soil properties into a set of basic topo-climatic predictors known to be important for modelling plant species, and then modelled the distribution of 156 plant species inhabiting the study area. Finally, we compared the quality of the models with and without soil properties as predictors to evaluate their effect on the predictive power of our models.

  3. Percutaneous Cystgastrostomy as a Single-Step Procedure

    International Nuclear Information System (INIS)

    Curry, L.; Sookur, P.; Low, D.; Bhattacharya, S.; Fotheringham, T.

    2009-01-01

    The purpose of this study was to evaluate the success of percutaneous transgastric cystgastrostomy as a single-step procedure. We performed a retrospective analysis of single-step percutaneous transgastric cystgastrostomy carried out in 12 patients (8 male, 4 female; mean age 44 years; range 21-70 years), between 2002 and 2007, with large symptomatic pancreatic pseudocysts for whom up to 1-year follow-up data (mean 10 months) were available. All pseudocysts were drained by single-step percutaneous cystgastrostomy with the placement of either one or two stents. The procedure was completed successfully in all 12 patients. The pseudocysts showed complete resolution on further imaging in 7 of 12 patients with either enteric passage of the stent or stent removal by endoscopy. In 2 of 12 patients, the pseudocysts showed complete resolution on imaging, with the stents still noted in situ. In 2 of 12 patients, the pseudocysts became infected after 1 month and required surgical intervention. In 1 of 12 patients, the pseudocyst showed partial resolution on imaging, but subsequently reaccumulated and later required external drainage. In our experience, percutaneous cystgastrostomy as a single-step procedure has a high success rate and good short-term outcomes over 1-year follow-up and should be considered in the treatment of large symptomatic cysts.

  4. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    Science.gov (United States)

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 h in parallel, compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines
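
    The scheduling core (build a dependency graph, order it topologically, execute independent tasks in parallel) can be sketched with a thread pool standing in for Map-Reduce; the task names are illustrative, not PARAMO's API:

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_pipeline(graph, work, max_workers=4):
    """Run a task dependency graph, executing independent tasks in parallel.
    `graph` maps task -> set of prerequisite tasks; `work` maps task -> callable.
    A task is submitted as soon as all of its prerequisites have completed,
    which realizes a topological ordering of the graph."""
    done, futures, order = set(), {}, []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while len(done) < len(graph):
            for t, deps in graph.items():
                if t not in done and t not in futures and deps <= done:
                    futures[t] = pool.submit(work[t])
            finished, _ = wait(futures.values(), return_when=FIRST_COMPLETED)
            for t in [t for t, f in futures.items() if f in finished]:
                order.append(t)
                done.add(t)
                del futures[t]
    return order

# A miniature five-stage modeling pipeline mirroring tasks (1)-(5):
graph = {"cohort": set(), "features": {"cohort"}, "cv": {"features"},
         "select": {"cv"}, "classify": {"select"}}
work = {t: (lambda t=t: t) for t in graph}
order = run_pipeline(graph, work)
```

    With many cohorts and feature sets, the graph fans out into hundreds of independent branches, which is where the reported 9-days-to-3-hours speedup comes from.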

  5. Propagation of Uncertainty in Bayesian Kernel Models - Application to Multiple-Step Ahead Forecasting

    DEFF Research Database (Denmark)

    Quinonero, Joaquin; Girard, Agathe; Larsen, Jan

    2003-01-01

    The object of Bayesian modelling is the predictive distribution, which, in a forecasting scenario, enables evaluation of forecasted values and their uncertainties. We focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel based models such as the Gaussian process and the relevance vector machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting...
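
    The starting point for these derivations is ordinary Gaussian-process prediction with a Gaussian (squared-exponential) kernel at a deterministic test input; the paper's contribution extends this to Gaussian-distributed inputs. A minimal sketch of the deterministic case:

```python
import numpy as np

def gp_predict(X, y, x_star, ell=1.0, sf2=1.0, noise=1e-2):
    """GP regression with a squared-exponential kernel: returns the
    predictive mean and variance at the test inputs x_star."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sf2 * np.exp(-0.5 * (d / ell) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X, x_star)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = sf2 + noise - np.sum(v ** 2, axis=0)   # predictive variance
    return mean, var

X = np.linspace(0, 6, 30)
y = np.sin(X)
m, v = gp_predict(X, y, np.array([1.5, 10.0]))   # one interpolation, one extrapolation point
```

    The variance grows toward the prior value far from the data, which is the uncertainty that must be propagated through the model in multiple-step-ahead (iterative) forecasting.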

  6. Identifying a predictive model for response to atypical antipsychotic monotherapy treatment in south Indian schizophrenia patients.

    Science.gov (United States)

    Gupta, Meenal; Moily, Nagaraj S; Kaur, Harpreet; Jajodia, Ajay; Jain, Sanjeev; Kukreti, Ritushree

    2013-08-01

    Atypical antipsychotic (AAP) drugs are the preferred choice of treatment for schizophrenia patients. Patients who do not show favorable response to AAP monotherapy are subjected to random prolonged therapeutic treatment with AAP multitherapy, typical antipsychotics or a combination of both. Therefore, prior identification of patients' response to drugs can be an important step in providing efficacious and safe therapeutic treatment. We thus attempted to elucidate a genetic signature which could predict patients' response to AAP monotherapy. Our logistic regression analyses indicated that 76% of patients carrying a combination of four SNPs will not show a favorable response to AAP therapy. The robustness of this prediction model was assessed using a repeated 10-fold cross validation method, and the results across n-fold cross-validations (mean accuracy=71.91%; 95%CI=71.47-72.35) suggest high accuracy and reliability of the prediction model. Further validations of these results in large sample sets are likely to establish their clinical applicability. Copyright © 2013 Elsevier Inc. All rights reserved.
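
    Repeated 10-fold cross-validation of the kind used to assess the model's robustness can be sketched generically; the toy data, single binary marker and decision rule below are invented stand-ins, not the study's SNP model:

```python
import random

def repeated_kfold_accuracy(X, y, fit, predict, k=10, repeats=10, seed=0):
    """Mean accuracy over repeated k-fold cross-validation: reshuffle,
    split into k folds, train on k-1 folds and test on the held-out fold."""
    rng = random.Random(seed)
    n = len(X)
    accs = []
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        for f in folds:
            test = set(f)
            train = [i for i in idx if i not in test]
            model = fit([X[i] for i in train], [y[i] for i in train])
            correct = sum(predict(model, X[i]) == y[i] for i in f)
            accs.append(correct / len(f))
    return sum(accs) / len(accs)

# Toy data: response tracks a single binary marker with 80% fidelity
rng = random.Random(1)
X = [[rng.randint(0, 1)] for _ in range(300)]
y = [x[0] if rng.random() < 0.8 else 1 - x[0] for x in X]
fit = lambda Xs, ys: None            # the toy rule needs no training
predict = lambda model, x: x[0]      # predict response = marker value
acc = repeated_kfold_accuracy(X, y, fit, predict)
```

    Averaging over repeats smooths out the variance introduced by any single random fold assignment, which is why the study reports a confidence interval around its mean accuracy.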

  7. Z-phase in 9-12% Cr Steels

    DEFF Research Database (Denmark)

    Danielsen, Hilmar; Hald, John

    2004-01-01

    The complex nitride Z-phase, Cr(V,Nb)N, has recently been identified as a major cause for premature breakdown in creep strength of a number of new 9-12%Cr martensitic steels. A thermodynamic model of the Z-phase has been created based on the Thermo-Calc software. The model predicts the Z-phase to be stable in all of the new 9-12%Cr martensitic steels. This has generally been confirmed by the performed experiments. Z-phase precipitation seems to be a kinetic problem, and driving force calculations using Thermo-Calc with the developed model have been used to predict steel compositions, which...

  8. Assessment of data-assisted prediction by inclusion of crosslinking/mass-spectrometry and small angle X-ray scattering data in the 12th Critical Assessment of protein Structure Prediction experiment.

    Science.gov (United States)

    Tamò, Giorgio E; Abriata, Luciano A; Fonti, Giulia; Dal Peraro, Matteo

    2018-03-01

    Integrative modeling approaches attempt to combine experiments and computation to derive structure-function relationships in complex molecular assemblies. Despite their importance for the advancement of life sciences, benchmarking of existing methodologies is rather poor. The 12th round of the Critical Assessment of protein Structure Prediction (CASP) offered a unique niche to benchmark data and methods from two kinds of experiments often used in integrative modeling, namely residue-residue contacts obtained through crosslinking/mass-spectrometry (CLMS), and small-angle X-ray scattering (SAXS) experiments. Upon assessment of the models submitted by predictors for 3 targets assisted by CLMS data and 11 targets by SAXS data, we observed no significant improvement when compared to the best data-blind models, although most predictors did improve relative to their own data-blind predictions. Only for target Tx892 of the CLMS-assisted category and for target Ts947 of the SAXS-assisted category was there a net, albeit mild, improvement relative to the best data-blind predictions. We discuss here possible reasons for the relatively poor success, which point to inconsistencies in the data sources rather than in the methods, to which a few groups were less sensitive. We conclude with suggestions that could improve the potential of data integration in future CASP rounds in terms of experimental data production, methods development, data management and prediction assessment. © 2017 Wiley Periodicals, Inc.

  9. [Application of predictive model to estimate concentrations of chemical substances in the work environment].

    Science.gov (United States)

    Kupczewska-Dobecka, Małgorzata; Czerczak, Sławomir; Jakubowski, Marek; Maciaszek, Piotr; Janasik, Beata

    2010-01-01

    Based on the Estimation and Assessment of Substance Exposure (EASE) predictive model implemented into the European Union System for the Evaluation of Substances (EUSES 2.1.), the exposure to three chosen organic solvents: toluene, ethyl acetate and acetone was estimated and compared with the results of measurements in workplaces. Prior to validation, the EASE model was pretested using three exposure scenarios. The scenarios differed in the decision tree of pattern of use. Five substances were chosen for the test: 1,4-dioxane, tert-butyl methyl ether, diethylamine, 1,1,1-trichloroethane and bisphenol A. After testing the EASE model, the next step was the validation by estimating the exposure level and comparing it with the results of measurements in the workplace. We used the results of measurements of toluene, ethyl acetate and acetone concentrations in the work environment of a paint and lacquer factory, a shoe factory and a refinery. Three types of exposure scenarios, adaptable to the description of working conditions were chosen to estimate inhalation exposure. Comparison of calculated exposure to toluene, ethyl acetate and acetone with measurements in workplaces showed that model predictions are comparable with the measurement results. Only for low concentration ranges, the measured concentrations were higher than those predicted. EASE is a clear, consistent system, which can be successfully used as an additional component of inhalation exposure estimation. If the measurement data are available, they should be preferred to values estimated from models. In addition to inhalation exposure estimation, the EASE model makes it possible not only to assess exposure-related risk but also to predict workers' dermal exposure.

  10. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
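    The decomposition of the uncertainty-averaged criterion into a squared bias term plus a model variance term can be illustrated numerically. Below is a toy linear "crop model" with an assumed parameter-uncertainty distribution, invented purely for illustration, not the authors' models:

```python
import numpy as np

rng = np.random.default_rng(1)

# "True" system and a toy model ensemble: each ensemble member is the same
# model form with parameters drawn from their uncertainty distribution.
def true_yield(x):
    return 3.0 + 0.8 * x

def model_yield(x, a, b):
    return a + b * x

x_sites = rng.uniform(0, 10, size=200)                 # prediction situations
obs = true_yield(x_sites) + rng.normal(0, 0.3, size=200)

# Parameter uncertainty: draws around slightly biased central values.
a_draws = rng.normal(3.4, 0.2, size=500)
b_draws = rng.normal(0.75, 0.05, size=500)

preds = model_yield(x_sites[None, :], a_draws[:, None], b_draws[:, None])

mean_pred = preds.mean(axis=0)                  # ensemble-mean prediction
sq_bias = np.mean((mean_pred - obs) ** 2)       # squared bias term (incl. obs noise)
model_var = preds.var(axis=0).mean()            # model variance term
msep_uncertain = np.mean((preds - obs[None, :]) ** 2)
```

    The identity `msep_uncertain = sq_bias + model_var` holds exactly here, mirroring the abstract's point that the two contributions can be estimated separately (hindcasts for the bias, a simulation experiment for the variance).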

  11. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. A 3-step framework for understanding the added value of surface soil moisture measurements for large-scale runoff prediction via data assimilation - a synthetic study in the Arkansas-Red River basin

    Science.gov (United States)

    Mao, Y.; Crow, W. T.; Nijssen, B.

    2017-12-01

    Soil moisture (SM) plays an important role in runoff generation both by partitioning infiltration and surface runoff during rainfall events and by controlling the rate of subsurface flow during inter-storm periods. Therefore, more accurate SM state estimation in hydrologic models is potentially beneficial for streamflow prediction. Various previous studies have explored the potential of assimilating SM data into hydrologic models for streamflow improvement. These studies have drawn inconsistent conclusions, ranging from significantly improved runoff via SM data assimilation (DA) to limited or degraded runoff. These studies commonly treat the whole assimilation procedure as a black box without separating the contribution of each step in the procedure, making it difficult to attribute the underlying causes of runoff improvement (or the lack thereof). In this study, we decompose the overall DA process into three steps by answering the following questions (3-step framework): 1) how much can assimilation of surface SM measurements improve surface SM state in a hydrologic model? 2) how much does surface SM improvement propagate to deeper layers? 3) how much does (surface and deeper-layer) SM improvement propagate into runoff improvement? A synthetic twin experiment is carried out in the Arkansas-Red River basin (~600,000 km2) where a synthetic "truth" run, an open-loop run (without DA) and a DA run (where synthetic surface SM measurements are assimilated) are generated. All model runs are performed at 1/8 degree resolution and over a 10-year period using the Variable Infiltration Capacity (VIC) hydrologic model at a 3-hourly time step. For the DA run, the ensemble Kalman filter (EnKF) method is applied. The updated surface and deeper-layer SM states with DA are compared to the open-loop SM to quantitatively evaluate the first two steps in the framework. To quantify the third step, a set of perfect-state runs is generated where the "true" SM states are directly inserted
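    The analysis step of the EnKF used in the DA run can be sketched as follows. This is a minimal two-layer toy with perturbed observations; the state, covariances, biases and observation error are invented for illustration and unrelated to VIC:

```python
import numpy as np

rng = np.random.default_rng(7)

def enkf_update(ensemble, y_obs, obs_var, H):
    """One EnKF analysis step with perturbed observations.

    ensemble: (n_members, n_state) forecast ensemble
    y_obs:    scalar observation of H @ state
    """
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (n - 1)                  # sample forecast covariance
    S = H @ P @ H.T + obs_var              # innovation variance (scalar here)
    K = (P @ H.T) / S                      # Kalman gain, shape (n_state,)
    y_pert = y_obs + rng.normal(0, np.sqrt(obs_var), size=n)
    innov = y_pert - ensemble @ H          # per-member innovation
    return ensemble + np.outer(innov, K)

# State: [surface SM, deeper-layer SM]; only the surface layer is observed.
truth = np.array([0.30, 0.45])
H = np.array([1.0, 0.0])
# Forecast ensemble with a wet bias and correlated layers.
ens = truth + rng.multivariate_normal([0.08, 0.05],
                                      [[0.004, 0.002], [0.002, 0.003]], size=60)
y = truth[0] + rng.normal(0, 0.01)         # synthetic surface SM observation
analysis = enkf_update(ens, y, 0.01**2, H)
```

    Because the forecast ensemble carries a surface/deep covariance, assimilating only the surface observation also corrects the unobserved deeper layer, which is exactly the propagation quantified in step 2 of the framework.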

  13. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  14. Advancing coastal ocean modelling, analysis, and prediction for the US Integrated Ocean Observing System

    Science.gov (United States)

    Wilkin, John L.; Rosenfeld, Leslie; Allen, Arthur; Baltes, Rebecca; Baptista, Antonio; He, Ruoying; Hogan, Patrick; Kurapov, Alexander; Mehra, Avichal; Quintrell, Josie; Schwab, David; Signell, Richard; Smith, Jane

    2017-01-01

    This paper outlines strategies that would advance coastal ocean modelling, analysis and prediction as a complement to the observing and data management activities of the coastal components of the US Integrated Ocean Observing System (IOOS®) and the Global Ocean Observing System (GOOS). The views presented are the consensus of a group of US-based researchers with a cross-section of coastal oceanography and ocean modelling expertise and community representation drawn from Regional and US Federal partners in IOOS. Priorities for research and development are suggested that would enhance the value of IOOS observations through model-based synthesis, deliver better model-based information products, and assist the design, evaluation, and operation of the observing system itself. The proposed priorities are: model coupling, data assimilation, nearshore processes, cyberinfrastructure and model skill assessment, modelling for observing system design, evaluation and operation, ensemble prediction, and fast predictors. Approaches are suggested to accomplish substantial progress in a 3–8-year timeframe. In addition, the group proposes steps to promote collaboration between research and operations groups in Regional Associations, US Federal Agencies, and the international ocean research community in general that would foster coordination on scientific and technical issues, and strengthen federal–academic partnerships benefiting IOOS stakeholders and end users.

  15. Small angle X-ray scattering and cross-linking for data assisted protein structure prediction in CASP 12 with prospects for improved accuracy

    KAUST Repository

    Ogorzalek, Tadeusz L.

    2018-01-04

    Experimental data offers empowering constraints for structure prediction. These constraints can be used to filter equivalently scored models or more powerfully within optimization functions toward prediction. In CASP12, Small Angle X-ray Scattering (SAXS) and Cross-Linking Mass Spectrometry (CLMS) data, measured on an exemplary set of novel fold targets, were provided to the CASP community of protein structure predictors. As high-throughput, solution-based techniques, SAXS and CLMS can efficiently measure states of the full-length sequence in its native solution conformation and assembly. However, this experimental data did not substantially improve prediction accuracy judged by fits to crystallographic models. One issue, beyond intrinsic limitations of the algorithms, was a disconnect between crystal structures and solution-based measurements. Our analyses show that many targets had substantial percentages of disordered regions (up to 40%) or were multimeric or both. Thus, solution measurements of flexibility and assembly support variations that may confound prediction algorithms trained on crystallographic data and expecting globular fully-folded monomeric proteins. Here, we consider the CLMS and SAXS data collected, the information in these solution measurements, and the challenges in incorporating them into computational prediction. As improvement opportunities were only partly realized in CASP12, we provide guidance on how data from the full-length biological unit and the solution state can better aid prediction of the folded monomer or subunit. We furthermore describe strategic integrations of solution measurements with computational prediction programs with the aim of substantially improving foundational knowledge and the accuracy of computational algorithms for biologically-relevant structure predictions for proteins in solution. This article is protected by copyright. All rights reserved.

  17. Validation of Quantitative Structure-Activity Relationship (QSAR) Model for Photosensitizer Activity Prediction

    Directory of Open Access Journals (Sweden)

    Sharifuddin M. Zain

    2011-11-01

    Full Text Available Photodynamic therapy is a relatively new treatment method for cancer which utilizes a combination of oxygen, a photosensitizer and light to generate reactive singlet oxygen that eradicates tumors via direct cell-killing, vasculature damage and engagement of the immune system. Most of the photosensitizers that are in clinical and pre-clinical assessments, or those that are already approved for clinical use, are mainly based on cyclic tetrapyrroles. In an attempt to discover new effective photosensitizers, we report the use of the quantitative structure-activity relationship (QSAR) method to develop a model that could correlate the structural features of cyclic tetrapyrrole-based compounds with their photodynamic therapy (PDT) activity. In this study, a set of 36 porphyrin derivatives was used in the model development, where 24 of these compounds were in the training set and the remaining 12 compounds were in the test set. The development of the QSAR model involved the use of the multiple linear regression analysis (MLRA) method. Based on the method, r2 value, r2 (CV) value and r2 prediction value of 0.87, 0.71 and 0.70 were obtained. The QSAR model was also employed to predict the experimental compounds in an external test set. This external test set comprises 20 porphyrin-based compounds with experimental IC50 values ranging from 0.39 µM to 7.04 µM. Thus the model showed good correlative and predictive ability, with a predictive correlation coefficient (r2 prediction) for the external test set of 0.52. The developed QSAR model was used to discover some compounds as new lead photosensitizers from this external test set.
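    The MLRA workflow described above (fit on a training set, report r2, then predict an external test set) can be sketched with synthetic descriptors. The descriptor matrix, coefficients and split below are assumptions for illustration, not the porphyrin data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic descriptor matrix for 36 "compounds" (three hypothetical
# structural descriptors) and a linear activity with noise.
X = rng.normal(size=(36, 3))
pIC50 = 5.0 + 0.9 * X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] \
        + rng.normal(0, 0.2, size=36)

train, test = np.arange(24), np.arange(24, 36)   # 24 training, 12 test

def fit_mlr(X, y):
    """Least-squares multiple linear regression with an intercept."""
    Xb = np.c_[np.ones(len(X)), X]
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef

def r2(y, y_hat):
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

coef = fit_mlr(X[train], pIC50[train])
r2_train = r2(pIC50[train], np.c_[np.ones(len(train)), X[train]] @ coef)
pred_test = np.c_[np.ones(len(test)), X[test]] @ coef
r2_pred = r2(pIC50[test], pred_test)
```

    As in the abstract, the external predictive r2 is the sterner test: it is computed on compounds the regression never saw.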

  18. Coordination of push-off and collision determine the mechanical work of step-to-step transitions when isolated from human walking.

    Science.gov (United States)

    Soo, Caroline H; Donelan, J Maxwell

    2012-02-01

    In human walking, each transition to a new stance limb requires redirection of the center of mass (COM) velocity from one inverted pendulum arc to the next. While this can be accomplished with either negative collision work by the leading limb, positive push-off work by the trailing limb, or some combination of the two, physics-based models of step-to-step transitions predict that total positive work is minimized when the push-off and collision work are equal in magnitude. Here, we tested the importance of the coordination of push-off and collision work in determining transition work using ankle and knee joint braces to limit the ability of a leg to perform positive work on the body. To isolate transitions from other contributors to walking mechanics, participants were instructed to rock back and forth from one leg to the other, restricting motion to the sagittal plane and eliminating the need to swing the legs. We found that reduced push-off work increased the collision work required to complete the redirection of the COM velocity during each transition. A greater amount of total mechanical work was required when rocking departed from the predicted optimal coordination of step-to-step transitions, in which push-off and collision work are equal in magnitude. Our finding that transition work increases if one or both legs do not push-off with the optimal coordination may help explain the elevated metabolic cost of pathological gait irrespective of etiology. Copyright © 2011 Elsevier B.V. All rights reserved.
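    The predicted optimum, with total transition work minimized when push-off and collision work are equal in magnitude, can be reproduced with a toy quadratic-work model. The quadratic impulse-work assumption below is a deliberate simplification of the pendulum-walking models, not the authors' full mechanics:

```python
import numpy as np

# Toy model of a step-to-step transition: redirecting the COM velocity
# requires a fixed total impulse J, shared between trailing-leg push-off
# (fraction f) and leading-leg collision (fraction 1 - f). In simple
# pendulum-walking models the work of each axial impulse grows roughly
# quadratically with its magnitude.
def total_positive_work(f, J=1.0, c=1.0):
    push_off = c * (f * J) ** 2            # positive work by trailing leg
    collision = c * ((1 - f) * J) ** 2     # magnitude of negative collision work
    # Over a steady cycle the dissipated collision work must be restored
    # by positive muscle work, so both terms cost positive work.
    return push_off + collision

fractions = np.linspace(0, 1, 201)
works = np.array([total_positive_work(f) for f in fractions])
optimal_f = fractions[works.argmin()]
```

    The minimum falls at an equal split, and any departure (such as the brace-limited push-off in the experiment) raises total work, consistent with the elevated cost observed.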

  19. Novel Two-Step Classifier for Torsades de Pointes Risk Stratification from Direct Features

    Directory of Open Access Journals (Sweden)

    Jaimit Parikh

    2017-11-01

    Full Text Available While pre-clinical Torsades de Pointes (TdP) risk classifiers had initially been based on drug-induced block of hERG potassium channels, it is now well established that improved risk prediction can be achieved by considering block of non-hERG ion channels. The current multi-channel TdP classifiers can be categorized into two classes: first, classifiers that take as input the values of drug-induced block of ion channels (direct features); second, classifiers that are built on features extracted from the output of drug-induced multi-channel blockage simulations in in-silico models (derived features). The classifiers built on derived features have thus far not consistently provided increased prediction accuracies, casting doubt on the value of such approaches given the cost of including biophysical detail. Here, we propose a new two-step method for TdP risk classification, referred to as Multi-Channel Blockage at Early After Depolarization (MCB@EAD). In the first step, compounds that produce insufficient hERG block are classified as non-torsadogenic. In the second step, the roles of non-hERG channels in modulating TdP risk are considered by constructing classifiers based on direct or derived features at critical hERG block concentrations that generate EADs in the computational cardiac cell models. MCB@EAD provides comparable or superior TdP risk classification of the drugs from the direct features in tests against published methods. TdP risk for the drugs correlated highly with the propensity to generate EADs in the model. However, the derived features of the biophysical models did not improve the predictive capability for TdP risk assessment.
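    The two-step structure of MCB@EAD can be sketched as a decision rule. The threshold values and the second-stage "linear score" below are placeholders for illustration, not the published classifier:

```python
def classify_tdp_risk(herg_block, multichannel_score,
                      herg_threshold=0.5, score_threshold=0.0):
    """Two-step TdP risk classification (illustrative thresholds).

    Step 1: compounds with insufficient hERG block at the tested
            concentration are labelled non-torsadogenic outright.
    Step 2: otherwise, a second-stage classifier built on multi-channel
            block features decides (here a simple scalar score).
    """
    if herg_block < herg_threshold:
        return "non-torsadogenic"
    if multichannel_score > score_threshold:
        return "torsadogenic"
    return "non-torsadogenic"
```

    The point of the first gate is that the expensive multi-channel stage is only consulted for compounds whose hERG block is strong enough to generate EADs at all.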

  20. A model for predicting the lifetimes of Grade-2 titanium nuclear waste containers

    Energy Technology Data Exchange (ETDEWEB)

    Shoesmith, D W; Ikeda, B M; Bailey, M G; Quinn, M J; LeNeveu, D M [Atomic Energy of Canada Ltd., Pinawa, MB (Canada). Whiteshell Labs.

    1995-08-01

    The development of a model to predict the lifetimes of Grade-2 titanium containers for nuclear fuel waste is described. This model assumes that the corrosion processes most likely to lead to container failure are crevice corrosion, hydrogen-induced cracking and general corrosion. Because of the expected evolution of waste vault conditions from initially warm (<~100 deg C) and oxidizing to eventually cool (<30 deg C) and non-oxidizing, the period for which crevice corrosion can propagate will be limited by repassivation, and long container lifetimes will be achieved since the rate of general corrosion is extremely low. However, in the model presented, not only is it assumed that crevices will initiate rapidly on all containers, but also that the propagation of these crevices will continue indefinitely since conditions will remain sufficiently oxidizing for repassivation to be avoided. The mathematical development of the model is described in detail. A simple ramped distribution is used to describe the failures due to the presence of initial defects. For crevice corrosion the propagation rates are assumed to be normally distributed and to be determined predominantly by temperature. The temperature dependence of the crevice propagation rate is determined from the calculated cooling profiles for the containers and an experimentally determined Arrhenius relationship for crevice propagation rates. The cooling profiles are approximated by double or single step functions, depending on the location of the container within the vault. The experimental data upon which this model is based is extensively reviewed. This review includes descriptions of the available data to describe and quantify the processes of general corrosion, crevice corrosion and hydrogen-induced cracking. For crevice corrosion and hydrogen-induced cracking the results of studies on both Grades-2 and -12 are presented. Also, the effects of impurities in the Grade-2 material are discussed.
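    The model's core calculation, an Arrhenius crevice-propagation rate integrated over a step-function cooling profile, can be sketched as follows. The rate constants, wall thickness and plateau temperatures below are placeholders, not the measured Grade-2 data:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def crevice_rate(T_kelvin, A=2.0e5, Ea=5.0e4):
    """Illustrative Arrhenius crevice-propagation rate (mm/yr).
    A (pre-exponential) and Ea (activation energy) are placeholders."""
    return A * math.exp(-Ea / (R * T_kelvin))

def penetration(depth_limit, cooling_steps):
    """Integrate propagation over a step-function cooling profile.

    cooling_steps: list of (temperature_K, duration_yr) plateaus.
    Returns (depth, failure_time); failure_time is None if the
    depth limit is never reached within the modelled period.
    """
    depth, t = 0.0, 0.0
    for T, dt in cooling_steps:
        rate = crevice_rate(T)
        if depth + rate * dt >= depth_limit:
            return depth_limit, t + (depth_limit - depth) / rate
        depth += rate * dt
        t += dt
    return depth, None

# Double-step profile: a warm oxidizing period, then a long cool period.
profile = [(373.15, 50.0), (303.15, 950.0)]
depth, t_fail = penetration(depth_limit=6.35, cooling_steps=profile)
```

    Because the Arrhenius rate falls steeply with temperature, almost all of the penetration accrues during the early warm plateau, which is why the calculated cooling profiles dominate the predicted lifetimes.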

  1. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    Science.gov (United States)

    Piantadosi, Steven T.; Hayden, Benjamin Y.

    2015-01-01

    Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions can be decomposed into additive functions) into a heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice. PMID:25914613

  2. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    Directory of Open Access Journals (Sweden)

    Steven T Piantadosi

    2015-04-01

    Full Text Available Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage in choice, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions are linearly separable) into a psychologically plausible heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice.
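    The mimicry claim can be checked numerically with a toy two-attribute example: an additive utility model with a dominant first dimension agrees with a dimensional-prioritization heuristic on most random choice pairs. Weights, the tie rule and the sampling scheme are assumptions for illustration, not the paper's formal transformation:

```python
import numpy as np

rng = np.random.default_rng(5)

def utility_choice(opt_a, opt_b, w=(1.0, 0.2)):
    """Additive utility model: pick the option with the higher w . x."""
    ua, ub = np.dot(w, opt_a), np.dot(w, opt_b)
    return 0 if ua >= ub else 1

def heuristic_choice(opt_a, opt_b, tie_tol=0.0):
    """Dimensional prioritization: decide on dimension 1 alone, falling
    back to dimension 2 only when dimension 1 is (near-)tied."""
    if abs(opt_a[0] - opt_b[0]) > tie_tol:
        return 0 if opt_a[0] >= opt_b[0] else 1
    return 0 if opt_a[1] >= opt_b[1] else 1

# Random two-option choice sets with attributes in [0, 1].
pairs = rng.uniform(0, 1, size=(2000, 2, 2))
agree = np.mean([utility_choice(a, b) == heuristic_choice(a, b)
                 for a, b in pairs])
```

    With these weights the two models disagree only when the low-priority dimension is large enough to overturn a small difference on the prioritized one, so choice data alone discriminate poorly between them.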

  3. Coupling between cracking and permeability, a model for structure service life prediction

    International Nuclear Information System (INIS)

    Lasne, M.; Gerard, B.; Breysse, D.

    1993-01-01

    Many authors have chosen permeability coefficients (permeation, diffusion) as a reference for material durability and for structure service life prediction. When designing engineered barriers for radioactive waste storage, these macroscopic parameters are essential. In order to work with a predictive model of the evolution of transfer properties in a porous medium (concrete, mortar, rock), we introduce a 'micro-macro' hierarchical model of permeability whose inputs are the total porosity and the pore size distribution. Despite the simplicity of the model (it requires very little CPU time), comparative studies show predictive results for sound cement pastes, mortars and concretes. In connection with this work, we apply a model of damage due to hydration processes at early ages to a container, as a preliminary design study for the definitive storage of Low Level radioactive Waste (LLW). Inputs are geometry, cement properties and damage measurements of the concrete. This model takes into account the mechanics of concrete maturation (volumetric variations during cement hydration can damage the structures). Local microcracking can appear and affect long-term durability. Following this work, we introduce our research program for concrete cracking analysis. An experimental campaign is designed in order to determine the damage-cracking-porosity-permeability coupling. (authors). 12 figs., 16 refs

  4. Hybrid model predictive control of a residential HVAC system with on-site thermal energy generation and storage

    International Nuclear Information System (INIS)

    Fiorentini, Massimo; Wall, Josh; Ma, Zhenjun; Braslavsky, Julio H.; Cooper, Paul

    2017-01-01

    Highlights: • A comprehensive approach to managing thermal energy in residential buildings. • Solar-assisted HVAC system with on-site energy generation and storage. • Mixed logic-dynamical building model identified using experimental data. • Design and implementation of a logic-dynamical model predictive control strategy. • MPC applied to the Net-Zero Energy house winner of the Solar Decathlon China 2013. - Abstract: This paper describes the development, implementation and experimental investigation of a Hybrid Model Predictive Control (HMPC) strategy to control solar-assisted heating, ventilation and air-conditioning (HVAC) systems with on-site thermal energy generation and storage. A comprehensive approach to the thermal energy management of a residential building is presented to optimise the scheduling of the available thermal energy resources to meet a comfort objective. The system has a hybrid nature, with both continuous variables and discrete, logic-driven operating modes. The proposed control strategy is organized in two hierarchical levels. At the high level, an HMPC controller with a 24-h prediction horizon and a 1-h control step is used to select the operating mode of the HVAC system. At the low level, each operating mode is optimised using a 1-h rolling prediction horizon with a 5-min control step. The proposed control strategy has been practically implemented on the Building Management and Control System (BMCS) of a Net Zero-Energy Solar Decathlon house. This house features a sophisticated HVAC system comprising an air-based photovoltaic thermal (PVT) collector and phase change material (PCM) thermal storage integrated with the air-handling unit (AHU) of a ducted reverse-cycle heat pump system. The simulation and experimental results demonstrated the high performance achievable using an HMPC approach to optimising complex multimode HVAC systems in residential buildings, illustrating efficient selection of the appropriate operating modes.
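    The two-level receding-horizon structure (hourly mode selection above a 5-min continuous optimisation) can be sketched on a one-state thermal model. The dynamics, the 1-h low-level lookahead, and the trivial hourly mode rule below are stand-ins for the paper's HMPC, invented for illustration:

```python
import numpy as np

# Toy first-order thermal model: T' = -a (T - T_out) + b * u
a, b, dt = 0.1, 0.5, 1 / 12        # time constant ~10 h; 5-min step (hours)
T_out, T_set = 5.0, 21.0

def simulate_step(T, u):
    return T + dt * (-a * (T - T_out) + b * u)

def low_level_mpc(T, mode, horizon=12):
    """5-min low-level step: grid-search the constant heating power that
    minimises tracking error over a 1-h horizon, given the mode."""
    if mode == "off":
        return 0.0
    best_u, best_cost = 0.0, np.inf
    for u in np.linspace(0, 10, 21):
        Tk, cost = T, 0.0
        for _ in range(horizon):
            Tk = simulate_step(Tk, u)
            cost += (Tk - T_set) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

def high_level_mode(T):
    """Hourly high-level step: a crude stand-in for the 24-h mode
    selection -- heat whenever the zone is below the set-point."""
    return "heat" if T < T_set else "off"

T, log = 12.0, []
for step in range(12 * 24):         # one day at 5-min resolution
    if step % 12 == 0:              # re-select the mode once per hour
        mode = high_level_mode(T)
    u = low_level_mpc(T, mode)
    T = simulate_step(T, u)
    log.append(T)
```

    The key design point mirrored here is the separation of timescales: the discrete, logic-driven decision (which mode) is revisited slowly, while the continuous actuation is re-optimised every control step within the chosen mode.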

  5. Preoperative predictive model of recovery of urinary continence after radical prostatectomy

    Science.gov (United States)

    Matsushita, Kazuhito; Kent, Matthew T.; Vickers, Andrew J.; von Bodman, Christian; Bernstein, Melanie; Touijer, Karim A.; Coleman, Jonathan; Laudone, Vincent; Scardino, Peter T.; Eastham, James A.; Akin, Oguz; Sandhu, Jaspreet S.

    2016-01-01

    Objective ● To build a predictive model of urinary continence recovery following radical prostatectomy that incorporates magnetic resonance imaging parameters and clinical data. Patients and Methods ● We conducted a retrospective review of data from 2,849 patients who underwent pelvic staging magnetic resonance imaging prior to radical prostatectomy from November 2001 to June 2010. ● We used logistic regression to evaluate the association between each MRI variable and continence at 6 or 12 months, adjusting for age, body mass index (BMI), and American Society of Anesthesiologists (ASA) score and then used multivariable logistic regression to create our model. ● A nomogram was constructed using the multivariable logistic regression models. Results ● In total, 68% (n=1,742/2,559) and 82% (n=2,205/2,689) regained function at 6 and 12 months, respectively. ● In the base model, age, BMI, and ASA score were significant predictors of continence at 6 or 12 months on univariate analysis (p <0.005). ● Among the preoperative magnetic resonance imaging measurements, membranous urethral length, which showed great significance, was incorporated into the base model to create the full model. ● For continence recovery at 6 months, the addition of membranous urethral length increased the AUC to 0.664 for the validation set, an increase of 0.064 over the base model. For continence recovery at 12 months, the AUC was 0.674, an increase of 0.085 over the base model. Conclusions ● Using our model, the likelihood of continence recovery increases with membranous urethral length and decreases with age, body mass index, and ASA score. ● This model could be used for patient counseling and for the identification of patients at high risk for urinary incontinence in whom to study changes in operative technique that improve urinary function after radical prostatectomy. PMID:25682782

  6. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis-associated fingerprint change is a significant problem that affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study was conducted involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and a model validation group. The predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of one major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts that verification will almost always fail, while the presence of both minor criteria and of one minor criterion predicts a high and a low risk of verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected numbers (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting the risk of fingerprint verification failure in patients with hand dermatitis. © 2014 The International Society of Dermatology.
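    The derived rule set can be expressed as a small decision function. The criteria are taken from the abstract; the category labels paraphrase its wording:

```python
def verification_risk(dystrophy_pct, long_horizontal, long_vertical):
    """Risk category from the derived criteria: one major criterion
    (fingerprint dystrophy area >= 25%) and two minor criteria
    (long horizontal lines, long vertical lines)."""
    if dystrophy_pct >= 25:
        return "almost always fails"
    minors = int(long_horizontal) + int(long_vertical)
    if minors == 2:
        return "high risk of failure"
    if minors == 1:
        return "low risk of failure"
    return "almost always passes"
```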

  7. Application of a Predictive Growth Model of Pseudomonas spp. for Estimating Shelf Life of Fresh Agaricus bisporus.

    Science.gov (United States)

    Wang, Jianming; Chen, Junran; Hu, Yunfeng; Hu, Hanyan; Liu, Guohua; Yan, Ruixiang

    2017-10-01

    For prediction of the shelf life of the mushroom Agaricus bisporus, the growth curve of the main spoilage microorganisms was studied under isothermal conditions at 2 to 22°C with a modified Gompertz model. The effect of temperature on the growth parameters of the main spoilage microorganisms was quantified and modeled using the square root model. Pseudomonas spp. were the main microorganisms causing A. bisporus decay, and the modified Gompertz model was useful for modelling the growth curve of Pseudomonas spp. All the bias factor values of the model were close to 1. By combining the modified Gompertz model with the square root model, a prediction model to estimate the shelf life of A. bisporus as a function of storage temperature was developed. The model was validated for A. bisporus stored at 6, 12, and 18°C, and adequate agreement was found between the experimental and predicted data.
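    The two models combined above can be sketched as follows, using Zwietering's reparameterization of the modified Gompertz curve and the Ratkowsky square root model. All parameter values are illustrative, not those fitted in the study:

```python
import math

def gompertz_log_count(t, A, mu_max, lam):
    """Zwietering's modified Gompertz curve: log10(N/N0) at time t,
    with asymptote A, maximum growth rate mu_max, and lag time lam."""
    return A * math.exp(-math.exp(mu_max * math.e / A * (lam - t) + 1.0))

def sqrt_model_mu(T, b, T_min):
    """Ratkowsky square root model: sqrt(mu_max) = b * (T - T_min),
    so mu_max = (b * (T - T_min))**2 for T above T_min."""
    return (b * (T - T_min)) ** 2

# Illustrative parameters (NOT the fitted values from the paper):
mu6 = sqrt_model_mu(T=6.0, b=0.03, T_min=-3.0)    # growth rate at 6 C
mu18 = sqrt_model_mu(T=18.0, b=0.03, T_min=-3.0)  # growth rate at 18 C
y = gompertz_log_count(t=100.0, A=4.0, mu_max=mu18, lam=20.0)
```

    Shelf life then follows by inverting the Gompertz curve for the time at which the count reaches a spoilage threshold, with the square root model supplying the temperature dependence of mu_max.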

  8. A Validated Prediction Model for Overall Survival From Stage III Non-Small Cell Lung Cancer: Toward Survival Prediction for Individual Patients

    Energy Technology Data Exchange (ETDEWEB)

    Oberije, Cary, E-mail: cary.oberije@maastro.nl [Radiation Oncology, Research Institute GROW of Oncology, Maastricht University Medical Center, Maastricht (Netherlands); De Ruysscher, Dirk [Radiation Oncology, Research Institute GROW of Oncology, Maastricht University Medical Center, Maastricht (Netherlands); Universitaire Ziekenhuizen Leuven, KU Leuven (Belgium); Houben, Ruud [Radiation Oncology, Research Institute GROW of Oncology, Maastricht University Medical Center, Maastricht (Netherlands); Heuvel, Michel van de; Uyterlinde, Wilma [Department of Thoracic Oncology, Netherlands Cancer Institute, Amsterdam (Netherlands); Deasy, Joseph O. [Memorial Sloan Kettering Cancer Center, New York (United States); Belderbos, Jose [Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam (Netherlands); Dingemans, Anne-Marie C. [Department of Pulmonology, University Hospital Maastricht, Research Institute GROW of Oncology, Maastricht (Netherlands); Rimner, Andreas; Din, Shaun [Memorial Sloan Kettering Cancer Center, New York (United States); Lambin, Philippe [Radiation Oncology, Research Institute GROW of Oncology, Maastricht University Medical Center, Maastricht (Netherlands)

    2015-07-15

    Purpose: Although patients with stage III non-small cell lung cancer (NSCLC) are homogeneous according to the TNM staging system, they form a heterogeneous group, which is reflected in the survival outcome. The increasing amount of information for an individual patient and the growing number of treatment options facilitate personalized treatment, but they also complicate treatment decision making. Decision support systems (DSS), which provide individualized prognostic information, can overcome this but are currently lacking. A DSS for stage III NSCLC requires the development and integration of multiple models. The current study takes the first step in this process by developing and validating a model that can provide physicians with a survival probability for an individual NSCLC patient. Methods and Materials: Data from 548 patients with stage III NSCLC were available to enable the development of a prediction model, using stratified Cox regression. Variables were selected by using a bootstrap procedure. Performance of the model was expressed as the c statistic, assessed internally and on 2 external data sets (n=174 and n=130). Results: The final multivariate model, stratified for treatment, consisted of age, gender, World Health Organization performance status, overall treatment time, equivalent radiation dose, number of positive lymph node stations, and gross tumor volume. The bootstrapped c statistic was 0.62. The model could identify risk groups in external data sets. Nomograms were constructed to predict an individual patient's survival probability (www.predictcancer.org). The data set can be downloaded at https://www.cancerdata.org/10.1016/j.ijrobp.2015.02.048. Conclusions: The prediction model for overall survival of patients with stage III NSCLC highlights the importance of combining patient, clinical, and treatment variables. Nomograms were developed and validated. This tool could be used as a first building block for a decision support system.

  9. The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing.

    Directory of Open Access Journals (Sweden)

    Jaclyn K Mann

    2014-08-01

    Full Text Available Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model), which is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple, and 25 single mutations) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis, and the replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10⁻⁶) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10⁻¹²). Performance of the Potts model (r = -0.73, p = 9.7×10⁻⁹) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing the fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion.
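    A binary (Ising-like) landscape of the kind described above scores a mutant sequence by summing single-site fields and pairwise couplings; the Potts generalization replaces the binary site variable with one state per amino acid. A toy sketch with invented fields and couplings (not values inferred from HIV data):

```python
def ising_energy(seq, h, J):
    """Prevalence-landscape energy for a binary (Ising-like) model:
    E = sum_i h[i]*s_i + sum_{i<j} J[i][j]*s_i*s_j, where s_i = 1
    marks a mutation at site i. Higher energy corresponds to lower
    predicted fitness. h and J here are toy values for illustration."""
    n = len(seq)
    e = sum(h[i] * seq[i] for i in range(n))
    e += sum(J[i][j] * seq[i] * seq[j]
             for i in range(n) for j in range(i + 1, n))
    return e

h = [1.0, 0.5, 2.0]                                  # single-site costs
J = [[0.0, -0.3, 0.1], [0.0, 0.0, 0.4], [0.0, 0.0, 0.0]]  # pair couplings
wild_type = ising_energy([0, 0, 0], h, J)  # no mutations
single = ising_energy([1, 0, 0], h, J)     # one mutation
double = ising_energy([1, 1, 0], h, J)     # compensatory pair (J < 0)
```

    The negative coupling between sites 0 and 1 makes the double mutant cheaper than the sum of the single-site costs, which is exactly the kind of epistatic effect the fitted landscape is meant to capture.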

  10. Predictions of bubbly flows in vertical pipes using two-fluid models in CFDS-FLOW3D code

    International Nuclear Information System (INIS)

    Banas, A.O.; Carver, M.B.; Unrau, D.

    1995-01-01

    This paper reports the results of a preliminary study exploring the performance of two sets of two-fluid closure relationships applied to the simulation of turbulent air-water bubbly upflows through vertical pipes. Predictions obtained with the default CFDS-FLOW3D model for dispersed flows were compared with the predictions of a new model (based on the work of Lee), and with the experimental data of Liu. The new model, implemented in the CFDS-FLOW3D code, included additional source terms in the "standard" κ-ε transport equations for the liquid phase, as well as modified model coefficients and wall functions. All simulations were carried out in a 2-D axisymmetric format, collapsing the general multifluid framework of CFDS-FLOW3D to the two-fluid (air-water) case. The newly implemented model consistently improved predictions of radial-velocity profiles of both phases, but failed to accurately reproduce the experimental phase-distribution data. This shortcoming was traced to the neglect of anisotropic effects in the modelling of liquid-phase turbulence. In this sense, the present investigation should be considered as the first step toward the ultimate goal of developing a theoretically sound and universal CFD-type two-fluid model for bubbly flows in channels

  11. Predictions of bubbly flows in vertical pipes using two-fluid models in CFDS-FLOW3D code

    Energy Technology Data Exchange (ETDEWEB)

    Banas, A.O.; Carver, M.B. [Chalk River Laboratories (Canada); Unrau, D. [Univ. of Toronto (Canada)

    1995-09-01

    This paper reports the results of a preliminary study exploring the performance of two sets of two-fluid closure relationships applied to the simulation of turbulent air-water bubbly upflows through vertical pipes. Predictions obtained with the default CFDS-FLOW3D model for dispersed flows were compared with the predictions of a new model (based on the work of Lee), and with the experimental data of Liu. The new model, implemented in the CFDS-FLOW3D code, included additional source terms in the "standard" κ-ε transport equations for the liquid phase, as well as modified model coefficients and wall functions. All simulations were carried out in a 2-D axisymmetric format, collapsing the general multifluid framework of CFDS-FLOW3D to the two-fluid (air-water) case. The newly implemented model consistently improved predictions of radial-velocity profiles of both phases, but failed to accurately reproduce the experimental phase-distribution data. This shortcoming was traced to the neglect of anisotropic effects in the modelling of liquid-phase turbulence. In this sense, the present investigation should be considered as the first step toward the ultimate goal of developing a theoretically sound and universal CFD-type two-fluid model for bubbly flows in channels.

  12. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (k(H):k(D)=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R(2)=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.
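    The cross-validation step behind such a model can be illustrated with a leave-one-out Q² computation. The sketch below uses a one-descriptor linear fit on toy descriptor/yield data, not the study's 2D/3D descriptors:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loo_q2(xs, ys):
    """Leave-one-out cross-validated R^2 (often called Q^2):
    refit without each point, predict it, accumulate PRESS."""
    my = sum(ys) / len(ys)
    ss_tot = sum((y - my) ** 2 for y in ys)
    press = 0.0
    for k in range(len(xs)):
        a, b = fit_line(xs[:k] + xs[k + 1:], ys[:k] + ys[k + 1:])
        press += (ys[k] - (a + b * xs[k])) ** 2
    return 1.0 - press / ss_tot

# Toy descriptor values vs. yields (illustrative only):
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [62.0, 70.0, 79.0, 85.0, 93.0, 99.9]
q2 = loo_q2(xs, ys)
```

    A Q² close to 1 on held-out points, as reported in the abstract (0.913 for five held-out catalysts), is what distinguishes a predictive model from one that merely fits.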

  13. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2.Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3.Develop optimization algorithms....... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices...... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...
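    The receding-horizon idea behind an economic MPC can be sketched with a toy charging problem: at each step, re-plan against the remaining price forecast and act only on the first decision. This is a deliberately simplified formulation for illustration, not the thesis's models:

```python
def mpc_charge_decision(prices_forecast, energy_needed, max_rate=1.0):
    """One receding-horizon step of a toy economic MPC: charge now only
    if the current price is among the cheapest slots still needed to
    finish charging within the forecast horizon."""
    slots_needed = int(-(-energy_needed // max_rate))  # ceil division
    cheapest = sorted(prices_forecast)[:slots_needed]
    return prices_forecast[0] <= max(cheapest)

def simulate(prices, energy_needed):
    """Apply the first decision at every step with a shrinking horizon,
    mimicking the re-planning loop of a predictive controller."""
    cost, remaining = 0.0, energy_needed
    for k in range(len(prices)):
        if remaining > 0 and mpc_charge_decision(prices[k:], remaining):
            cost += prices[k]
            remaining -= 1
    return cost, remaining

# Hourly price forecast; two units of charge are needed.
cost, left = simulate([30.0, 10.0, 50.0, 20.0, 40.0], energy_needed=2)
```

    The controller skips the expensive first hour and charges in the two cheapest ones, which is the "consume when electricity is cheap" behaviour an economic MPC formalizes as a constrained optimization.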

  14. Electrohydraulic linear actuator with two stepping motors controlled by overshoot-free algorithm

    Science.gov (United States)

    Milecki, Andrzej; Ortmann, Jarosław

    2017-11-01

    The paper describes electrohydraulic spool valves with stepping motors used as electromechanical transducers. A new concept of a proportional valve in which two stepping motors work differentially is introduced. Such a valve changes the fluid flow proportionally to the sum or difference of the motors' step numbers. The valve design and the principle of its operation are described. Theoretical equations and simulation models are proposed for all elements of the drive, i.e., the stepping motor units, hydraulic valve and cylinder. The main features of the valve and drive operation are described; some specific problem areas covering the nature of stepping motors and their differential work in the valve are also considered. A non-linear model of the whole servo drive is proposed and used for further simulation investigations. Initial simulations of the drive with the new valve showed a significant overshoot in the drive step response, which is not allowed in the positioning process. Therefore additional effort is spent to reduce the overshoot and, in consequence, the settling time. A special predictive algorithm is proposed to this end. The proposed control method is then tested and further improved in simulations. Finally, the model is implemented in reality and the whole servo drive system is tested. The investigation results presented in this paper show an overshoot-free positioning process that enables high positioning accuracy.

  15. A stepped leader model for lightning including charge distribution in branched channels

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Wei; Zhang, Li [School of Electrical Engineering, Shandong University, Jinan 250061 (China); Li, Qingmin, E-mail: lqmeee@ncepu.edu.cn [Beijing Key Lab of HV and EMC, North China Electric Power University, Beijing 102206 (China); State Key Lab of Alternate Electrical Power System with Renewable Energy Sources, Beijing 102206 (China)

    2014-09-14

    The stepped leader process in negative cloud-to-ground lightning plays a vital role in lightning protection analysis. As lightning discharge usually presents significant branched or tortuous channels, the charge distribution along the branched channels and the stochastic feature of stepped leader propagation were investigated in this paper. The charge density along the leader channel and the charge in the leader tip for each lightning branch were approximated by introducing branch correlation coefficients. In combination with geometric characteristics of natural lightning discharge, a stochastic stepped leader propagation model was presented based on the fractal theory. By comparing simulation results with the statistics of natural lightning discharges, it was found that the fractal dimension of lightning trajectory in simulation was in the range of that observed in nature and the calculation results of electric field at ground level were in good agreement with the measurements of a negative flash, which shows the validity of this proposed model. Furthermore, a new equation to estimate the lightning striking distance to flat ground was suggested based on the present model. The striking distance obtained by this new equation is smaller than the value estimated by previous equations, which indicates that the traditional equations may somewhat overestimate the attractive effect of the ground.

  16. A stepped leader model for lightning including charge distribution in branched channels

    International Nuclear Information System (INIS)

    Shi, Wei; Zhang, Li; Li, Qingmin

    2014-01-01

    The stepped leader process in negative cloud-to-ground lightning plays a vital role in lightning protection analysis. As lightning discharge usually presents significant branched or tortuous channels, the charge distribution along the branched channels and the stochastic feature of stepped leader propagation were investigated in this paper. The charge density along the leader channel and the charge in the leader tip for each lightning branch were approximated by introducing branch correlation coefficients. In combination with geometric characteristics of natural lightning discharge, a stochastic stepped leader propagation model was presented based on the fractal theory. By comparing simulation results with the statistics of natural lightning discharges, it was found that the fractal dimension of lightning trajectory in simulation was in the range of that observed in nature and the calculation results of electric field at ground level were in good agreement with the measurements of a negative flash, which shows the validity of this proposed model. Furthermore, a new equation to estimate the lightning striking distance to flat ground was suggested based on the present model. The striking distance obtained by this new equation is smaller than the value estimated by previous equations, which indicates that the traditional equations may somewhat overestimate the attractive effect of the ground.

  17. The Value of Step-by-Step Risk Assessment for Unmanned Aircraft

    DEFF Research Database (Denmark)

    La Cour-Harbo, Anders

    2018-01-01

    The new European legislation expected in 2018 or 2019 will introduce a step-by-step process for conducting risk assessments for unmanned aircraft flight operations. This is a relatively simple approach to a very complex challenge. This work compares the step-by-step process to high-fidelity risk modeling, and shows that, at least for a series of example flight missions, there is reasonable agreement between the two very different methods.

  18. Uncertainty in prediction and simulation of flow in sewer systems

    DEFF Research Database (Denmark)

    Breinholt, Anders

    the uncertainty in the state variables. Additionally the observation noise is accounted for by a separate observation noise term. This approach is also referred to as stochastic grey-box modelling. A state dependent diffusion term was developed using a Lamperti transformation of the states, and implemented...... performance beyond the one-step. The reliability was satisfied for the one-step prediction but were increasingly biased as the prediction horizon was expanded, particularly in rainy periods. GLUE was applied for estimating uncertainty in such a way that the selection of behavioral parameter sets continued....... Conversely the parameter estimates of the stochastic approach are physically meaningful. This thesis has contributed to developing simplified rainfall-runoff models that are suitable for model predictive control of urban drainage systems that takes uncertainty into account....

  19. An IL28B genotype-based clinical prediction model for treatment of chronic hepatitis C.

    Directory of Open Access Journals (Sweden)

    Thomas R O'Brien

    Full Text Available Genetic variation in IL28B and other factors are associated with sustained virological response (SVR) after pegylated-interferon/ribavirin treatment for chronic hepatitis C (CHC). Using data from the HALT-C Trial, we developed a model to predict a patient's probability of SVR based on IL28B genotype and clinical variables. HALT-C enrolled patients with advanced CHC who had failed previous interferon-based treatment. Subjects were re-treated with pegylated-interferon/ribavirin during the trial lead-in. We used step-wise logistic regression to calculate adjusted odds ratios (aOR) and create the predictive model. Leave-one-out cross-validation was used to predict a priori probabilities of SVR and determine the area under the receiver operator characteristics curve (AUC). Among 646 HCV genotype 1-infected European American patients, 14.2% achieved SVR. IL28B rs12979860-CC genotype was the strongest predictor of SVR (aOR, 7.56). Subjects with a predicted probability of SVR >10% (43.3% of subjects) had an SVR rate of 27.9% and accounted for 84.8% of subjects actually achieving SVR. To verify that consideration of both IL28B genotype and clinical variables is required for treatment decisions, we calculated AUC values from published data for the IDEAL Study. A clinical prediction model based on IL28B genotype and clinical variables can yield useful individualized predictions of the probability of treatment success that could increase SVR rates and decrease the frequency of futile treatment among patients with CHC.
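    The AUC used to evaluate such a model can be computed directly as a Mann-Whitney statistic over the predicted probabilities: the chance that a randomly chosen responder scores higher than a non-responder. The scores below are invented for illustration, not HALT-C data:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.
    labels: 1 for SVR, 0 for no SVR; scores: predicted probabilities.
    Ties between a responder and a non-responder count one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predicted SVR probabilities for 3 responders and 4 non-responders:
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.40, 0.35, 0.10, 0.30, 0.08, 0.05, 0.02]
a = auc(labels, scores)
```

    With leave-one-out cross-validation, each score is produced by a model fitted without that patient, so the resulting AUC estimates out-of-sample discrimination rather than fit.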

  20. Testing the utility of three social-cognitive models for predicting objective and self-report physical activity in adults with type 2 diabetes.

    Science.gov (United States)

    Plotnikoff, Ronald C; Lubans, David R; Penfold, Chris M; Courneya, Kerry S

    2014-05-01

    Theory-based interventions to promote physical activity (PA) are more effective than atheoretical approaches; however, the comparative utility of theoretical models is rarely tested in longitudinal designs with multiple time points. Further, there is limited research that has simultaneously tested social-cognitive models with self-report and objective PA measures. The primary aim of this study was to test the predictive ability of three theoretical models (social cognitive theory, theory of planned behaviour, and protection motivation theory) in explaining PA behaviour. Participants were adults with type 2 diabetes (n = 287, 53.8% males, mean age = 61.6 ± 11.8 years). Theoretical constructs across the three theories were tested to prospectively predict PA behaviour (objective and self-report) across three 6-month time intervals (baseline-6, 6-12, 12-18 months) using structural equation modelling. PA outcomes were steps/3 days (objective) and minutes of MET-weighted PA/week (self-report). The mean proportion of variance in PA explained by these models was 6.5% for objective PA and 8.8% for self-report PA. Direct pathways to PA outcomes were stronger for self-report compared with objective PA. These theories explained a small proportion of the variance in longitudinal PA studies. Theory development to guide interventions for increasing and maintaining PA in adults with type 2 diabetes requires further research with objective measures. Theory integration across social-cognitive models and the inclusion of ecological levels are recommended to further explain PA behaviour change in this population. Statement of contribution What is already known on this subject? Social-cognitive theories are able to explain partial variance for physical activity (PA) behaviour. What does this study add? The testing of three theories in a longitudinal design over three 6-month time intervals. The parallel use and comparison of both objective and self-report PA measures in testing these theories.

  1. Explicit/multi-parametric model predictive control (MPC) of linear discrete-time systems by dynamic and multi-parametric programming

    KAUST Repository

    Kouramas, K.I.

    2011-08-01

    This work presents a new algorithm for solving the explicit/multi-parametric model predictive control (or mp-MPC) problem for linear, time-invariant discrete-time systems, based on dynamic programming and multi-parametric programming techniques. The algorithm features two key steps: (i) a dynamic programming step, in which the mp-MPC problem is decomposed into a set of smaller subproblems in which only the current control, state variables, and constraints are considered, and (ii) a multi-parametric programming step, in which each subproblem is solved as a convex multi-parametric programming problem, to derive the control variables as an explicit function of the states. The key feature of the proposed method is that it overcomes potential limitations of previous methods for solving multi-parametric programming problems with dynamic programming, such as the need for global optimization for each subproblem of the dynamic programming step. © 2011 Elsevier Ltd. All rights reserved.

  2. Predictive-property-ranked variable reduction in partial least squares modelling with final complexity adapted models: comparison of properties for ranking.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2013-01-14

    The calibration performance of partial least squares regression for one response (PLS1) can be improved by eliminating uninformative variables. Many variable-reduction methods are based on so-called predictor-variable properties or predictive properties, which are functions of various PLS-model parameters, and which may change during the steps of the variable-reduction process. Recently, a new predictive-property-ranked variable reduction method with final complexity adapted models, denoted as PPRVR-FCAM or simply FCAM, was introduced. It is a backward variable elimination method applied on the predictive-property-ranked variables. The variable number is first reduced, with constant PLS1 model complexity A, until A variables remain, followed by a further decrease in PLS complexity, allowing the final selection of small numbers of variables. In this study for three data sets the utility and effectiveness of six individual and nine combined predictor-variable properties are investigated, when used in the FCAM method. The individual properties include the absolute value of the PLS1 regression coefficient (REG), the significance of the PLS1 regression coefficient (SIG), the norm of the loading weight (NLW) vector, the variable importance in the projection (VIP), the selectivity ratio (SR), and the squared correlation coefficient of a predictor variable with the response y (COR). The selective and predictive performances of the models resulting from the use of these properties are statistically compared using the one-tailed Wilcoxon signed rank test. The results indicate that the models, resulting from variable reduction with the FCAM method, using individual or combined properties, have similar or better predictive abilities than the full spectrum models. 
After mean-centring of the data, REG and SIG provide low numbers of informative variables, with a meaning relevant to the response, lower than for the other individual properties, while the predictive abilities are maintained.

  3. Predictive model for serious bacterial infections among infants younger than 3 months of age.

    Science.gov (United States)

    Bachur, R G; Harper, M B

    2001-08-01

    To develop a data-derived model for predicting serious bacterial infection (SBI), febrile infants younger than 3 months of age with temperature ≥38.0°C seen in an urban emergency department (ED) were retrospectively identified. SBI was defined as a positive culture of urine, blood, or cerebrospinal fluid. Tree-structured analysis via recursive partitioning was used to develop the model. SBI or No-SBI was the dichotomous outcome variable, and age, temperature, urinalysis (UA), white blood cell (WBC) count, absolute neutrophil count, and cerebrospinal fluid WBC were entered as potential predictors. The model was tested by V-fold cross-validation. Of 5279 febrile infants studied, SBI was diagnosed in 373 patients (7%): 316 urinary tract infections (UTIs), 17 meningitis, and 59 bacteremia (8 with meningitis, 11 with UTIs). The model sequentially used 4 clinical parameters to define high-risk patients: positive UA, WBC count ≥20 000/mm³, temperature ≥39.6°C, and age <13 days. The sensitivity of the model for SBI is 82% (95% confidence interval [CI]: 78%-86%) and the negative predictive value is 98.3% (95% CI: 97.8%-98.7%). The negative predictive value for bacteremia or meningitis is 99.6% (95% CI: 99.4%-99.8%). The relative risk between high- and low-risk groups is 12.1 (95% CI: 9.3-15.6). Sixty-six SBI patients (18%) were misclassified into the lower risk group: 51 UTIs, 14 with bacteremia, and 1 with meningitis. Decision-tree analysis using common clinical variables can reasonably predict which febrile infants are at high risk for SBI. Sequential use of UA, WBC count, temperature, and age can identify infants who are at high risk of SBI with a relative risk of 12.1 compared with lower-risk infants.
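    The sequential four-parameter screen and the reported counts can be sketched as follows. The cut-offs are taken from the abstract (treat them as approximate, since the record is partly garbled), and the sensitivity is recomputed from the 373 SBI cases and 66 misclassifications:

```python
def high_risk(ua_positive, wbc_per_mm3, temp_c, age_days):
    """Sequential screen from the decision-tree model: meeting any one
    of the four criteria, checked in order, flags the infant as high
    risk for serious bacterial infection (SBI)."""
    return (ua_positive
            or wbc_per_mm3 >= 20000
            or temp_c >= 39.6
            or age_days < 13)

# Sensitivity implied by the reported counts: 373 SBI cases,
# 66 of them misclassified into the lower-risk group.
sensitivity = (373 - 66) / 373
flag = high_risk(False, 9000, 38.2, 25)  # none of the criteria met
```

    Recomputing sensitivity from the counts reproduces the quoted 82%, which is a useful internal consistency check on the abstract's numbers.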

  4. Space-dependent step features: Transient breakdown of slow-roll, homogeneity, and isotropy during inflation

    International Nuclear Information System (INIS)

    Lerner, Rose N.; McDonald, John

    2009-01-01

    A step feature in the inflaton potential can model a transient breakdown of slow-roll inflation. Here we generalize the step feature to include space-dependence, allowing it also to model a breakdown of homogeneity and isotropy. The space-dependent inflaton potential generates a classical curvature perturbation mode characterized by the wave number of the step inhomogeneity. For inhomogeneities small compared with the horizon at the step, space-dependence has a small effect on the curvature perturbation. Therefore, the smoothly oscillating quantum power spectrum predicted by the homogeneous step is robust with respect to subhorizon space-dependence. For inhomogeneities equal to or greater than the horizon at the step, the space-dependent classical mode can dominate, producing a curvature perturbation in which modes of wave number determined by the step inhomogeneity are superimposed on the oscillating power spectrum. Generation of a space-dependent step feature may therefore provide a mechanism to introduce primordial anisotropy into the curvature perturbation. Space-dependence also modifies the quantum fluctuations, in particular, via resonancelike features coming from mode coupling to amplified superhorizon modes. However, these effects are small relative to the classical modes.
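    A common parameterization of such a step feature in the single-field slow-roll literature, given here for concreteness (it is not necessarily the exact potential used in this record), multiplies a smooth potential by a tanh step:

```latex
V(\phi) = \tfrac{1}{2} m^2 \phi^2 \left[ 1 + c \tanh\!\left( \frac{\phi - \phi_s}{d} \right) \right]
```

    where $c$ sets the step height, $\phi_s$ its location, and $d$ its width. The space-dependent generalization discussed above amounts to promoting $\phi_s$ (or $c$) to a function of position, which sources the classical curvature-perturbation mode at the wave number of the step inhomogeneity.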

  5. Development and validation of a CFD model predicting the backfill process of a nuclear waste gallery

    International Nuclear Information System (INIS)

    Gopala, Vinay Ramohalli; Lycklama a Nijeholt, Jan-Aiso; Bakker, Paul; Haverkate, Benno

    2011-01-01

    The model is implemented in OpenFOAM, an open-source computational fluid dynamics (CFD) toolbox. The volume of fluid (VOF) method is used to track the interface between grout and air. The CFD model is validated and tested in three steps. First, the numerical implementation of the Bingham model is verified against an analytical solution for a channel flow. Second, the capability of the model to predict the flow of grout is tested by comparing simulations with experimental results from two standard flowability tests for concrete: the V-funnel flow time and slump flow tests. As a third step, the CFD model is compared with experiments in a transparent Plexiglas test setup at Delft University of Technology, to test the model under more practical and realistic conditions. This experimental setup is a 1:12.5 scaled version of the full-scale mock-up test for backfilling of a waste gallery with emplaced canisters used in the European 6th Framework project ESDRED. Furthermore, the Plexiglas setup is used to study the influence of different backfill parameters. The CFD results for a channel flow show good agreement with the analytical solution, demonstrating the correct implementation of the Bingham model in OpenFOAM. The CFD results for the flowability tests also agree very well with the experimental results, ensuring a good prediction of the flow of grout. The simulations of the backfill process show good qualitative agreement with the Plexiglas experiment. However, the occurrence of segregation and the varying rheological properties of the grout in the Plexiglas experiment result in significant differences between the simulation and the experiment.

  6. An analytical framework to assist decision makers in the use of forest ecosystem model predictions

    Science.gov (United States)

    Larocque, Guy R.; Bhatti, Jagtar S.; Ascough, J.C.; Liu, J.; Luckai, N.; Mailly, D.; Archambault, L.; Gordon, Andrew M.

    2011-01-01

    The predictions from most forest ecosystem models originate from deterministic simulations. However, few evaluation exercises for model outputs are performed by either model developers or users. This issue has important consequences for decision makers using these models to develop natural resource management policies, as they cannot evaluate the extent to which predictions stemming from the simulation of alternative management scenarios may result in significant environmental or economic differences. Various numerical methods, such as sensitivity/uncertainty analyses, or bootstrap methods, may be used to evaluate models and the errors associated with their outputs. However, the application of each of these methods carries unique challenges which decision makers do not necessarily understand; guidance is required when interpreting the output generated from each model. This paper proposes a decision flow chart in the form of an analytical framework to help decision makers apply, in an orderly fashion, different steps involved in examining the model outputs. The analytical framework is discussed with regard to the definition of problems and objectives and includes the following topics: model selection, identification of alternatives, modelling tasks and selecting alternatives for developing policy or implementing management scenarios. Its application is illustrated using an on-going exercise in developing silvicultural guidelines for a forest management enterprise in Ontario, Canada.

  7. Predictive based monitoring of nuclear plant component degradation using support vector regression

    International Nuclear Information System (INIS)

    Agarwal, Vivek; Alamaniotis, Miltiadis; Tsoukalas, Lefteri H.

    2015-01-01

    Nuclear power plants (NPPs) are large installations comprising many active and passive assets. Degradation monitoring of all these assets is an expensive (labor cost) and highly demanding task. In this paper a framework based on Support Vector Regression (SVR) for online surveillance of critical parameter degradation of NPP components is proposed. In this case, on-time replacement or maintenance of components will prevent potential plant malfunctions and reduce the overall operational cost. In the current work, we apply SVR equipped with a Gaussian kernel function to monitor components. Monitoring includes the one-step-ahead prediction of the component's respective operational quantity using the SVR model, which is trained on a set of previously recorded degradation histories of similar components. The predictive capability of the model is evaluated upon arrival of a sensor measurement, which is compared to the component failure threshold. A maintenance decision is based on a fuzzy inference system that utilizes three parameters: (i) the prediction evaluation in the previous steps, (ii) the predicted value of the current step, and (iii) the difference between the current predicted value and the component's failure threshold. The proposed framework will be tested on turbine blade degradation data.
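    The one-step-ahead SVR scheme described above can be sketched as follows; the degradation histories, lag length, kernel parameters and failure threshold are all illustrative assumptions rather than the paper's actual setup, shown here with scikit-learn's `SVR` (RBF, i.e. Gaussian, kernel):

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical degradation histories: each row is a time series of a
# monitored quantity (e.g., a wear indicator) for one similar component.
rng = np.random.default_rng(0)
t = np.arange(50, dtype=float)
histories = np.array([0.02 * t**1.5 + rng.normal(0, 0.05, t.size) for _ in range(5)])

# Train on lagged windows: predict x[k] from the previous 3 samples.
lag = 3
X = np.vstack([h[i:i + lag] for h in histories for i in range(h.size - lag)])
y = np.hstack([h[lag:] for h in histories])

model = SVR(kernel="rbf", C=10.0, gamma=0.5).fit(X, y)

# One-step-ahead prediction for a component's latest window, compared
# against an assumed (application-specific) failure threshold.
window = histories[0][-lag:]
predicted = model.predict(window.reshape(1, -1))[0]
FAILURE_THRESHOLD = 10.0
margin = FAILURE_THRESHOLD - predicted
print(f"next-step prediction: {predicted:.3f}, margin to threshold: {margin:.3f}")
```

    The remaining margin would then feed the fuzzy inference stage together with the history of past prediction errors.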

  8. Development and validation of a prediction model for loss of physical function in elderly hemodialysis patients.

    Science.gov (United States)

    Fukuma, Shingo; Shimizu, Sayaka; Shintani, Ayumi; Kamitani, Tsukasa; Akizawa, Tadao; Fukuhara, Shunichi

    2017-09-05

    Among aging hemodialysis patients, loss of physical function has become a major issue. We developed and validated a model for predicting loss of physical function among elderly hemodialysis patients. We conducted a cohort study involving maintenance hemodialysis patients ≥65 years of age from the Dialysis Outcomes and Practice Patterns Study in Japan. The derivation cohort included 593 early-phase (1996-2004) patients and the temporal validation cohort included 447 late-phase (2005-12) patients. The main outcome was the incidence of loss of physical function, defined as the 12-item Short Form Health Survey physical function score decreasing to 0 within a year. Using backward stepwise logistic regression by Akaike's information criterion, six predictors (age, gender, dementia, mental health, moderate activity and ascending stairs) were selected for the final model. Points were assigned based on the regression coefficients and the total score was calculated by summing the points for each predictor. In total, 65 (11.0%) and 53 (11.9%) hemodialysis patients lost their physical function within 1 year in the derivation and validation cohorts, respectively. The model has good predictive performance as quantified by both discrimination and calibration. The proportion with loss of physical function increased sequentially through the low-, middle- and high-score categories of the model (2.5%, 11.7% and 22.3% in the validation cohort, respectively). Loss of physical function was strongly associated with 1-year mortality [adjusted odds ratio 2.48 (95% confidence interval 1.26-4.91)]. We developed and validated a risk prediction model with good predictive performance for loss of physical function in elderly hemodialysis patients. Our simple prediction model may help physicians and patients make more informed decisions for healthy longevity. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA.
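    The point-scoring construction described above (points derived from regression coefficients, summed, then binned into low/middle/high categories) can be illustrated with a minimal sketch; the point weights and category cutoffs below are hypothetical, not the published model's values:

```python
# Hypothetical point weights -- illustrative only, NOT the published model.
POINTS = {
    "age_75_or_over": 2,
    "male": 1,
    "dementia": 3,
    "low_mental_health_score": 2,
    "no_moderate_activity": 2,
    "difficulty_ascending_stairs": 3,
}

def risk_score(patient: dict) -> int:
    """Sum the points for every predictor present in the patient record."""
    return sum(pts for name, pts in POINTS.items() if patient.get(name))

def risk_category(score: int) -> str:
    """Map total score to low/middle/high categories (cutoffs assumed)."""
    if score <= 3:
        return "low"
    if score <= 7:
        return "middle"
    return "high"

patient = {"age_75_or_over": True, "dementia": True, "no_moderate_activity": True}
s = risk_score(patient)        # 2 + 3 + 2 = 7
print(s, risk_category(s))     # 7 middle
```

    In the validation cohort, each category would then be compared against its observed event rate (calibration) alongside discrimination measures.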

  9. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    Science.gov (United States)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat of severe floods in many countries worldwide. Advance prediction of their occurrence and spatial distribution is therefore essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models: the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centers for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from the 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, albeit with a bias in spatial distribution and intensity. The statistical parameters mean error (ME, or bias), root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the ensemble spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble-member averages are not well predicted. The rank histograms suggest that the forecasts under-predict rainfall. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution of displacement and pattern errors to the total RMSE is found to be larger in magnitude. The volume error increases from the 24 h forecast to the 48 h forecast in all three models.
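    The basic verification statistics named above (ME/bias, RMSE and CC) can be computed in a few lines; the gridded fields below are synthetic stand-ins for the observed rainfall and the MME-mean forecast:

```python
import numpy as np

def verify_forecast(obs: np.ndarray, fcst: np.ndarray):
    """Mean error (bias), RMSE, and correlation coefficient over a region."""
    err = fcst - obs
    me = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    cc = np.corrcoef(obs.ravel(), fcst.ravel())[0, 1]
    return me, rmse, cc

# Hypothetical gridded rainfall (mm/day) over a rainstorm region.
rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 10.0, size=(20, 20))
fcst = obs + rng.normal(2.0, 5.0, size=obs.shape)   # biased, noisy forecast

me, rmse, cc = verify_forecast(obs, fcst)
print(f"ME={me:.2f} mm, RMSE={rmse:.2f} mm, CC={cc:.2f}")
```

    Displacement/pattern/volume decompositions such as CRA build on the same error field after spatially matching the forecast and observed rain objects.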

  10. Prediction of flux loss in a Nd-Fe-B ring magnet considering magnetizing process

    International Nuclear Information System (INIS)

    Fukunaga, H; Koreeda, H; Yanai, T; Nakano, M; Yamashita, F

    2010-01-01

    We developed a technique to predict the flux loss of a magnet with a complicated magnetization pattern using the finite element method. The developed method consists of four steps. First, the distribution of magnetization under the magnetizing field is analyzed (Step 1), and a demagnetization curve for each element is deduced from the result of the first step (Step 2). After removing the magnetizing field, the distributions of magnetization at room and elevated temperatures are analyzed using the demagnetization curves determined in Step 2 (Step 3). Based on a physical model, the distribution of flux loss due to exposure at the elevated temperature is predicted using the result obtained in Step 3 (Step 4). We applied this technique to a ring magnet with 10 poles, and large flux losses were predicted at the transition regions between magnetic poles.

  11. [Development and Application of a Performance Prediction Model for Home Care Nursing Based on a Balanced Scorecard using the Bayesian Belief Network].

    Science.gov (United States)

    Noh, Wonjung; Seomun, Gyeongae

    2015-06-01

    This study was conducted to develop key performance indicators (KPIs) for home care nursing (HCN) based on a balanced scorecard, and to construct a performance prediction model of strategic objectives using the Bayesian Belief Network (BBN). This methodological study included four steps: establishment of KPIs, performance prediction modeling, development of a performance prediction model using BBN, and simulation of a suggested nursing management strategy. An HCN expert group and a staff group participated. The content validity index was analyzed using STATA 13.0, and BBN was analyzed using HUGIN 8.0. We generated a list of KPIs composed of 4 perspectives, 10 strategic objectives, and 31 KPIs. In the validity test of the performance prediction model, the factor with the greatest variance for increasing profit was maximum cost reduction of HCN services. The factor with the smallest variance for increasing profit was a minimum image improvement for HCN. During sensitivity analysis, the probability of the expert group did not affect the sensitivity. Furthermore, simulation of a 10% image improvement predicted the most effective way to increase profit. KPIs of HCN can estimate financial and non-financial performance. The performance prediction model for HCN will be useful to improve performance.

  12. Predicting climate-induced range shifts: model differences and model reliability.

    Science.gov (United States)

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  13. Modelling a New Product Model on the Basis of an Existing STEP Application Protocol

    Directory of Open Access Journals (Sweden)

    B.-R. Hoehn

    2005-01-01

    Full Text Available In recent years a great range of computer-aided tools has been generated to support the development process of various products. The goal of a continuous data flow, needed for high efficiency, requires powerful standards for data exchange. At the FZG (Gear Research Centre of the Technical University of Munich) there was a need for a common gear data format for data exchange between gear calculation programs. The STEP standard ISO 10303 was developed for this type of purpose, but a suitable definition of gear data was still missing, even in the Application Protocol AP 214, developed for the design process in the automotive industry. The creation of a new STEP Application Protocol, or the extension of an existing protocol, would be a very time-consuming normative process. So a new method was introduced by FZG. Some very general definitions of an Application Protocol (here AP 214) were used to determine rules for an exact specification of the required kind of data. In this case a product model for gear units was defined based on elements of AP 214. Therefore no change of the Application Protocol is necessary. Meanwhile the product model for gear units has been published as a VDMA paper and successfully introduced for data exchange within the German gear industry associated with FVA (German Research Organisation for Gears and Transmissions). This method can also be adopted for other applications not yet sufficiently defined by STEP.

  14. An improved model to predict nonuniform deformation of Zr-2.5 Nb pressure tubes

    International Nuclear Information System (INIS)

    Lei, Q.M.; Fan, H.Z.

    1997-01-01

    Present circular pressure-tube ballooning models in most fuel channel codes assume that the pressure tube remains circular during ballooning. This model provides adequate predictions of pressure-tube ballooning behaviour when the pressure tube (PT) and the calandria tube (CT) are concentric and when a small (<100 degrees C) top-to-bottom circumferential temperature gradient is present on the pressure tube. However, nonconcentric ballooning is expected to occur under certain postulated CANDU (CANada Deuterium Uranium) accident conditions. This circular geometry assumption prevents the model from accurately predicting nonuniform pressure-tube straining and local PT/CT contact when the pressure tube is subjected to a large circumferential temperature gradient and consequently deforms in a noncircular pattern. This paper describes an improved model that predicts noncircular pressure-tube deformation. Use of this model (once fully validated) will reduce uncertainties in the prediction of pressure-tube ballooning during a postulated loss-of-coolant accident (LOCA) in a CANDU reactor. The noncircular deformation model considers a ring or cross-section of a pressure tube with unit axial length to calculate deformation in the radial and circumferential directions. The model keeps track of the thinning of the pressure-tube wall as well as the shape deviation from a reference circle. Such deviation is expressed in a cosine Fourier series for the lateral symmetry case. The coefficients of the series for the first m terms are calculated by solving a set of algebraic equations at each time step. The model also takes into account the effects of pressure-tube sag or bow on ballooning, using an input value of the offset distance between the centre of the calandria tube and the initial centre of the pressure tube for determining the position radius of the pressure tube. 
One significant improvement realized in using the noncircular deformation model is a more accurate prediction in
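    The cosine Fourier representation of the shape deviation described above can be illustrated with a least-squares solve for the first m coefficients; the deviation profile and the value of m below are assumed for illustration only:

```python
import numpy as np

m = 4                                  # number of cosine terms retained (assumed)
theta = np.linspace(0.0, np.pi, 181)   # lateral symmetry: half the circumference

# Hypothetical deviation of the tube radius from the reference circle.
true_coeffs = np.array([0.5, 0.3, -0.1, 0.05])
deviation = sum(c * np.cos(n * theta) for n, c in enumerate(true_coeffs))

# Least-squares solve for the coefficients a_n in
#   deviation(theta) ~ sum_n a_n cos(n*theta),  n = 0..m-1,
# analogous to solving the algebraic system at each time step.
A = np.column_stack([np.cos(n * theta) for n in range(m)])
coeffs, *_ = np.linalg.lstsq(A, deviation, rcond=None)
print(np.round(coeffs, 3))   # recovers [0.5, 0.3, -0.1, 0.05]
```

    In the actual model the coefficients evolve in time with the creep and pressure loading, but the per-step solve has this same linear-algebra shape.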

  15. The NIST Step Class Library (Step Into the Future)

    Science.gov (United States)

    1990-09-01

    Figure 6. Excerpt from a STEP exchange file based on the Geometry model. (The NIST STEP Class Library, p. 13.) An issue of concern in this... [Scheifler88] Scheifler, R., Gettys, J., and Newman, P., X Window System: C Library and Protocol Reference. Digital Press, Bedford, Mass., 1988. [Schenck90] Schenck, D

  16. Robotic retroperitoneal partial nephrectomy: a step-by-step guide.

    Science.gov (United States)

    Ghani, Khurshid R; Porter, James; Menon, Mani; Rogers, Craig

    2014-08-01

    To describe a step-by-step guide for successful implementation of the retroperitoneal approach to robotic partial nephrectomy (RPN). PATIENTS AND METHODS: The patient is placed in the flank position and the table fully flexed to increase the space between the 12th rib and the iliac crest. Access to the retroperitoneal space is obtained using a balloon-dilating device. Ports include a 12-mm camera port, two 8-mm robotic ports and a 12-mm assistant port placed in the anterior axillary line cephalad to the anterior superior iliac spine, and 7-8 cm caudal to the ipsilateral robotic port. Positioning and port-placement strategies for a successful technique include: (i) docking the robot directly over the patient's head, parallel to the spine; (ii) incising for the camera port ≈1.9 cm (one fingerbreadth) above the iliac crest, lateral to the triangle of Petit; (iii) Seldinger-technique insertion of a kidney-shaped balloon dilator into the retroperitoneal space; (iv) maximising the distance between all ports; (v) ensuring the camera arm is placed in the outer part of the 'sweet spot'. The retroperitoneal approach to RPN permits direct access to the renal hilum, no need for bowel mobilisation and excellent visualisation of posteriorly located tumours. © 2014 The Authors. BJU International © 2014 BJU International.

  17. Predictive Modeling of a Paradigm Mechanical Cooling Tower Model: II. Optimal Best-Estimate Results with Reduced Predicted Uncertainties

    Directory of Open Access Journals (Sweden)

    Ruixian Fang

    2016-09-01

    Full Text Available This work uses the adjoint sensitivity model of the counter-flow cooling tower derived in the accompanying PART I to obtain the expressions and relative numerical rankings of the sensitivities, to all model parameters, of the following model responses: (i) outlet air temperature; (ii) outlet water temperature; (iii) outlet water mass flow rate; and (iv) air outlet relative humidity. These sensitivities are subsequently used within the “predictive modeling for coupled multi-physics systems” (PM_CMPS) methodology to obtain explicit formulas for the predicted optimal nominal values for the model responses and parameters, along with reduced predicted standard deviations for the predicted model parameters and responses. These explicit formulas embody the assimilation of experimental data and the “calibration” of the model’s parameters. The results presented in this work demonstrate that the PM_CMPS methodology reduces the predicted standard deviations to values that are smaller than either the computed or the experimentally measured ones, even for responses (e.g., the outlet water flow rate) for which no measurements are available. These improvements stem from the global characteristics of the PM_CMPS methodology, which combines all of the available information simultaneously in phase-space, as opposed to combining it sequentially, as in current data assimilation procedures.

  18. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered, and the state of the art in computationally tractable methods based on uncertainty tubes is presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  19. Development of a real time activity monitoring Android application utilizing SmartStep.

    Science.gov (United States)

    Hegde, Nagaraj; Melanson, Edward; Sazonov, Edward

    2016-08-01

    Footwear-based activity monitoring systems are becoming popular in academic research as well as in consumer industry segments. In our previous work, we presented developmental aspects of an insole-based activity and gait monitoring system, SmartStep, which is a socially acceptable, fully wireless and versatile insole. The present work describes the development of an Android application that captures SmartStep data wirelessly over Bluetooth Low Energy (BLE), computes features on the received data, runs activity classification algorithms and provides real-time feedback. The development of the activity classification methods was based on data from a human study involving 4 participants. Participants were asked to perform the activities of sitting, standing, walking, and cycling while wearing the SmartStep insole system. Multinomial Logistic Discrimination (MLD) was utilized in the development of the machine learning model for activity prediction. The resulting classification model was implemented on an Android smartphone. The Android application was benchmarked for power consumption and CPU loading. Leave-one-out cross-validation resulted in an average accuracy of 96.9% during the model training phase. The Android application for real-time activity classification was tested on a human subject wearing SmartStep, resulting in a testing accuracy of 95.4%.

  20. Learning to predict chemical reactions.

    Science.gov (United States)

    Kayala, Matthew A; Azencott, Chloé-Agathe; Chen, Jonathan H; Baldi, Pierre

    2011-09-26

    Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problem can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles, respectively, are not high-throughput, are not generalizable or scalable, and lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry data set consisting of 1630 full multistep reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. And from machine learning, we pose identifying productive mechanistic steps as a statistical ranking, information-retrieval problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top-ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom-level reactivity filters to prune 94.00% of nonproductive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered. 
Furthermore, the system

  1. Incremental validity of positive orientation: predictive efficiency beyond the five-factor model

    Directory of Open Access Journals (Sweden)

    Łukasz Roland Miciuk

    2016-05-01

    Full Text Available Background The relation of positive orientation (a basic predisposition to think positively of oneself, one’s life and one’s future) to personality traits is still disputable. The purpose of the described research was to verify the hypothesis that positive orientation has predictive efficiency beyond the five-factor model. Participants and procedure One hundred and thirty participants (mean age M = 24.84) completed the following questionnaires: the Self-Esteem Scale (SES), the Satisfaction with Life Scale (SWLS), the Life Orientation Test-Revised (LOT-R), the Positivity Scale (P-SCALE), the NEO Five Factor Inventory (NEO-FFI), the Self-Concept Clarity Scale (SCC), the Generalized Self-Efficacy Scale (GSES) and the Life Engagement Test (LET). Results The introduction of positive orientation as an additional predictor in the second step of the regression analyses led to better prediction of the following variables: purpose in life, self-concept clarity and generalized self-efficacy. This effect was strongest for predicting purpose in life (i.e. a 14% increment in the explained variance). Conclusions The results confirmed our hypothesis that positive orientation can be characterized by incremental validity: its inclusion in the regression model (in addition to the five main factors of personality) increases the amount of explained variance. These findings may provide further evidence for the legitimacy of measuring positive orientation and personality traits separately.
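    The incremental-validity test described above (hierarchical regression with positive orientation entered in the second step) amounts to comparing R² before and after adding the predictor. A sketch on simulated data, with all effect sizes invented for illustration:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(3)
n = 130
big5 = rng.normal(size=(n, 5))       # five-factor scores (simulated)
pos_orient = rng.normal(size=n)      # positive orientation (simulated)
purpose = big5 @ [0.2, 0.1, 0.3, 0.1, 0.1] + 0.6 * pos_orient + rng.normal(size=n)

r2_step1 = r_squared(big5, purpose)                                 # traits only
r2_step2 = r_squared(np.column_stack([big5, pos_orient]), purpose)  # + pos. orient.
print(f"delta R^2 = {r2_step2 - r2_step1:.3f}")   # the incremental validity
```

    A significance test of the increment (e.g., an F-test on the change in R²) would accompany this comparison in practice.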

  2. Model predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

    This paper proposes a Model Predictive Controller (MPC) for control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicit model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then cast in linear quadratic regulator (LQR) form and solved by making use of the Riccati equation. To show the effectiveness of the proposed method this controller is...
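    The LQR-via-Riccati step mentioned in the abstract can be sketched by iterating the discrete algebraic Riccati equation to a fixed point; the double-integrator model and weights below are assumed stand-ins, not the paper's P2AT robot model:

```python
import numpy as np

# Discrete-time double-integrator model of one robot axis (assumed).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([1.0, 0.1])       # state weights (assumed)
R = np.array([[0.01]])        # input weight (assumed)

# Iterate the discrete algebraic Riccati equation to convergence:
#   P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain
    P_next = Q + A.T @ P @ (A - B @ K)
    if np.allclose(P_next, P, atol=1e-10):
        break
    P = P_next

# Closed-loop simulation u = -K x from an initial position offset.
x = np.array([[1.0], [0.0]])
for _ in range(100):
    x = (A - B @ K) @ x
print("final state norm:", float(np.linalg.norm(x)))
```

    MPC re-solves a finite-horizon version of this problem at every step and applies only the first input, which is how the receding-horizon behavior arises.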

  3. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  4. Linking spring phenology with mechanistic models of host movement to predict disease transmission risk

    Science.gov (United States)

    Merkle, Jerod A.; Cross, Paul C.; Scurlock, Brandon M.; Cole, Eric K.; Courtemanch, Alyson B.; Dewey, Sarah R.; Kauffman, Matthew J.

    2018-01-01

    Disease models typically focus on temporal dynamics of infection, while often neglecting environmental processes that determine host movement. In many systems, however, temporal disease dynamics may be slow compared to the scale at which environmental conditions alter host space use and accelerate disease transmission. Using a mechanistic movement modelling approach, we made space-use predictions of a mobile host (elk [Cervus canadensis] carrying the bacterial disease brucellosis) under environmental conditions that change daily and annually (e.g., plant phenology, snow depth), and we used these predictions to infer how spring phenology influences the risk of brucellosis transmission from elk (through aborted foetuses) to livestock in the Greater Yellowstone Ecosystem. Using data from 288 female elk monitored with GPS collars, we fit step selection functions (SSFs) during the spring abortion season and then implemented a master equation approach to translate SSFs into predictions of daily elk distribution for five plausible winter weather scenarios (from a heavy snow year to an extreme winter drought year). We predicted abortion events by combining elk distributions with empirical estimates of daily abortion rates, spatially varying elk seroprevalence and elk population counts. Our results reveal strong spatial variation in disease transmission risk at daily and annual scales that is strongly governed by variation in host movement in response to spring phenology. For example, in comparison with an average snow year, in years with early snowmelt 64% of the abortions that would occur on feedgrounds are predicted to shift mainly to public lands and, to a lesser extent, to private lands. Synthesis and applications. Linking mechanistic models of host movement with disease dynamics builds a novel bridge between movement and disease ecology. Our analysis framework offers new avenues for predicting disease spread, while providing managers tools to proactively mitigate

  5. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviewed how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models were highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, to date only limited published literature has discussed which approach is more accurate for risk prediction model development.

  6. Angular correlations of α-particles from decay of 40Ca following fusion of 28Si + 12C

    International Nuclear Information System (INIS)

    Alamanos, N.; Le Metayer, C.; Levi, C.; Mittig, W.; Papineau, L.

    1982-01-01

    Angular correlations of α-particles from the decay of 40 Ca following fusion of 28 Si + 12 C were measured. The results for events leading to the ground state of 32 S were quantitatively analysed using the statistical model. Angular correlations measured under appropriate experimental conditions made it possible to verify angular-momentum selection predictions for each of the steps involved. Whereas the mean behaviour is well reproduced, a more detailed comparison shows significant disagreement. Strongly structured coincident energy spectra were observed. It is shown that these structures are not compatible with standard statistical level densities.

  7. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  8. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
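
The degree-day logic behind such phenology models is straightforward to sketch. A minimal example, assuming a hypothetical base temperature, threshold, and warm-up series rather than fitted cranberry fruitworm parameters:

```python
def degree_days(tmax, tmin, base=10.0):
    """Daily degree-days by the simple averaging method: max(0, mean - base)."""
    return max(0.0, (tmax + tmin) / 2.0 - base)

def predict_event_day(daily_temps, threshold, base=10.0):
    """First day (1-indexed) on which accumulated degree-days reach the
    phenological threshold, or None if it is never reached."""
    total = 0.0
    for day, (tmax, tmin) in enumerate(daily_temps, start=1):
        total += degree_days(tmax, tmin, base)
        if total >= threshold:
            return day
    return None

# Hypothetical spring warm-up: day d contributes d-1 degree-days.
temps = [(12.0 + d, 8.0 + d) for d in range(30)]
print(predict_event_day(temps, threshold=45.0))  # → 10
```

Operational models often use a sine-wave approximation of the diurnal temperature curve instead of the simple mean shown here, but the accumulate-and-threshold structure is the same.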

  9. Distress among women receiving uninformative BRCA1/2 results: 12-month outcomes.

    Science.gov (United States)

    O'Neill, Suzanne C; Rini, Christine; Goldsmith, Rachel E; Valdimarsdottir, Heiddis; Cohen, Lawrence H; Schwartz, Marc D

    2009-10-01

    Few data are available regarding the long-term psychological impact of uninformative BRCA1/2 test results. This study examines change in distress from pretesting to 12-months post-disclosure, with medical, family history, and psychological variables, such as pretesting perceived risk of carrying a deleterious mutation prior to testing and primary and secondary appraisals, as predictors. Two hundred and nine women with uninformative BRCA1/2 test results completed questionnaires at pretesting and 1-, 6-, and 12-month post-disclosure, including measures of anxiety and depression, cancer-specific and genetic testing distress. We used a mixed models approach to predict change in post-disclosure distress. Distress declined from pretesting to 1-month post-disclosure, but remained stable thereafter. Primary appraisals predicted all types of distress at 1-month post-disclosure. Primary and secondary appraisals predicted genetic testing distress at 1-month as well as change over time. Receiving a variant of uncertain clinical significance and entering testing with a high expectation for carrying a deleterious mutation predicted genetic testing distress that persisted through the year after testing. As a whole, women receiving uninformative BRCA1/2 test results are a resilient group. For some women, distress experienced in the month after testing does not dissipate. Variables, such as heightened pretesting perceived risk and cognitive appraisals, predict greater likelihood for sustained distress in this group and could be amenable to intervention.

  10. Prediction of Geological Subsurfaces Based on Gaussian Random Field Models

    Energy Technology Data Exchange (ETDEWEB)

    Abrahamsen, Petter

    1997-12-31

    During the sixties, random functions became practical tools for predicting ore reserves with associated precision measures in the mining industry. This was the start of the geostatistical methods called kriging. These methods are used, for example, in petroleum exploration. This thesis reviews the possibilities for using Gaussian random functions in modelling of geological subsurfaces. It develops methods for including many sources of information and observations for precise prediction of the depth of geological subsurfaces. The simple properties of Gaussian distributions make it possible to calculate optimal predictors in the mean square sense. This is done in a discussion of kriging predictors. These predictors are then extended to deal with several subsurfaces simultaneously. It is shown how additional velocity observations can be used to improve predictions. The use of gradient data and even higher order derivatives are also considered and gradient data are used in an example. 130 refs., 44 figs., 12 tabs.
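
In its simplest form (simple kriging with a known mean), the kriging predictor described above reduces to solving one linear system for the weights. A sketch under assumed values; the Gaussian covariance parameters and depth picks are illustrative, not taken from the thesis:

```python
import math

def gauss_cov(h, sill=1.0, rng=50.0):
    """Gaussian covariance model C(h) = sill * exp(-(h/rng)^2)."""
    return sill * math.exp(-(h / rng) ** 2)

def solve(A, rhs):
    """Tiny Gaussian elimination with partial pivoting (n x n system)."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def simple_kriging(xs, zs, x0, mean):
    """Simple-kriging prediction and variance for depth at location x0."""
    K = [[gauss_cov(abs(a - b)) for b in xs] for a in xs]   # data covariances
    k = [gauss_cov(abs(a - x0)) for a in xs]                # data-target covariances
    w = solve(K, k)                                         # kriging weights
    pred = mean + sum(wi * (zi - mean) for wi, zi in zip(w, zs))
    var = gauss_cov(0.0) - sum(wi * ki for wi, ki in zip(w, k))
    return pred, var

xs, zs = [0.0, 30.0, 80.0], [-100.0, -120.0, -90.0]           # depth picks (m)
p_obs, v_obs = simple_kriging(xs, zs, x0=30.0, mean=-105.0)   # at a datum
p_new, v_new = simple_kriging(xs, zs, x0=50.0, mean=-105.0)   # between data
print(round(p_obs, 1), round(v_obs, 6))
```

At an observed location the predictor reproduces the datum exactly with zero kriging variance, which is the exact-interpolation property that makes kriging the optimal (mean-square) predictor under the Gaussian assumptions discussed in the thesis.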

  11. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  12. Use of integrated analogue and numerical modelling to predict tridimensional fracture intensity in fault-related-folds.

    Science.gov (United States)

    Pizzati, Mattia; Cavozzi, Cristian; Magistroni, Corrado; Storti, Fabrizio

    2016-04-01

    Predicting fracture density patterns with low uncertainty is a fundamental issue for constraining fluid flow pathways in thrust-related anticlines in the frontal parts of thrust-and-fold belts and accretionary prisms, which can also provide plays for hydrocarbon exploration and development. Among the drivers that combine to determine the distribution of fractures in fold-and-thrust belts, the complex kinematic pathways of folded structures play a key role. In areas with scarce and unreliable underground information, analogue modelling can provide effective support for developing and validating reliable hypotheses on structural architectures and their evolution. In this contribution, we propose a working method that combines analogue and numerical modelling. We deformed a sand-silicone multilayer to eventually produce a non-cylindrical thrust-related anticline at the wedge toe, which was our test geological structure at the reservoir scale. We cut 60 serial cross-sections through the central part of the deformed model to analyze fault and fold geometry using dedicated software (3D Move). The cross-sections were also used to reconstruct the 3D geometry of the reference surfaces that compose the mechanical stratigraphy, using the software GoCad. From the 3D model of the experimental anticline, 3D Move was used to calculate the cumulative stress and strain undergone by the deformed reference layers at the end of the deformation and also in incremental steps of fold growth. Based on these model outputs it was also possible to predict the orientation of three main fracture sets (joints and conjugate shear fractures) and their occurrence and density on model surfaces. The next step was the upscaling of the fracture network to the entire digital model volume, to create discrete fracture networks (DFNs).

  13. Near-surface hydrogeological model of Laxemar. Open repository - Laxemar 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Bosson, Emma

    2006-07-15

    This report presents the methodology and the results from the modelling of an open final repository for spent nuclear fuel in Laxemar. Thus, the present work analyses the hydrological effects of the planned repository during the construction and operational phases when it is open, i.e. air-filled, and hence may cause a disturbance of the hydrological conditions in the surroundings. The numerical modelling is based on the conceptual and descriptive model presented in the version 1.2 Site Descriptive Model (SDM) for Laxemar. The modelling was divided into three steps. The first step was to update the L1.2 version model for hydrology and near-surface hydrogeology; the main updates were related to the hydraulic properties of the bedrock and the size of the model area. The next step was to describe the conditions for the introductory construction by implementing the access tunnel and shafts in the model. The third step aimed at describing the consequences for the surface hydrology caused by an open repository. A sensitivity analysis, which aimed to investigate the sensitivity of the model to the properties of the upper bedrock and the properties of the interface between the Quaternary deposits and the bedrock, was performed as part of steps two and three. The model covers an area of 19 km{sup 2}. In the Quaternary deposits, the surface water divides are assumed to coincide with the groundwater divides; thus a no-flow boundary condition is used at the horizontal boundaries. The transient top boundary condition uses meteorological data gathered at a local SKB station at Aespoe during 2004. The bottom boundary condition and the horizontal boundary condition in the bedrock are steady-state head boundary conditions taken from the open-repository modelling of the bedrock, performed as a parallel activity with the modelling tool DarcyTools. The vertical extent of the model is from the ground surface to 150 m below sea level.
Since the repository will be built at 450 m below sea

  14. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas a Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To demonstrate the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the proposed BWPHMs perform better than the Cox Proportional Hazard Model (Cox-PHM), as they consider a Weibull distribution for the baseline hazard function and account for model uncertainties. - Highlights: • Prioritize rehabilitation and replacement (R/R) strategies of water mains. • Consider the uncertainties for the failure prediction. • Improve the prediction capability of the water main failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure.
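
The survival function of a Weibull proportional-hazards model of the kind applied above has a closed form, S(t|x) = exp(-(t/scale)^shape · exp(β·x)). A sketch with illustrative coefficients, not the Calgary-fitted values:

```python
import math

def weibull_ph_survival(t, shape, scale, beta, x):
    """S(t|x) = exp(-(t/scale)^shape * exp(beta . x)) for a Weibull
    baseline hazard under the proportional-hazards assumption."""
    lp = sum(b * xi for b, xi in zip(beta, x))       # linear predictor
    return math.exp(-((t / scale) ** shape) * math.exp(lp))

# Illustrative pipe covariates [age offset (yr), diameter (cm)] and coefficients.
beta, covs = [0.03, -0.01], [5.0, 20.0]
s10 = weibull_ph_survival(10.0, shape=1.5, scale=40.0, beta=beta, x=covs)
s30 = weibull_ph_survival(30.0, shape=1.5, scale=40.0, beta=beta, x=covs)
print(s10 > s30, 0.0 < s30 < s10 < 1.0)  # survival declines monotonically
```

Shape > 1 gives an increasing baseline hazard (aging pipes fail more often), which is one reason the paper prefers a Weibull baseline over the semi-parametric Cox-PHM.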

  15. Step length after discrete perturbation predicts accidental falls and fall-related injury in elderly people with a range of peripheral neuropathy.

    Science.gov (United States)

    Allet, Lara; Kim, Hogene; Ashton-Miller, James; De Mott, Trina; Richardson, James K

    2014-01-01

    Distal symmetric polyneuropathy increases fall risk due to inability to cope with perturbations. We aimed to 1) identify the frontal plane lower limb sensorimotor functions which are necessary for robustness to a discrete, underfoot perturbation during gait; and 2) determine whether changes in the post-perturbed step parameters could distinguish between fallers and non-fallers. Forty-two subjects (16 healthy old and 26 with diabetic PN) participated. Frontal plane lower limb sensorimotor functions were determined using established laboratory-based techniques. The subjects' most extreme alterations in step width or step length in response to a perturbation were measured. In addition, falls and fall-related injuries were prospectively recorded. Ankle proprioceptive threshold (APrT; p=.025) and hip abduction rate of torque generation (RTG; p=.041) independently predicted extreme step length after medial perturbation, with precise APrT and greater hip RTG allowing maintenance of step length. Injured subjects demonstrated greater extreme step length changes after medial perturbation than non-injured subjects (percent change = 18.5 ± 9.2 vs. 11.3 ± 4.57; p = .01). The ability to rapidly generate frontal plane hip strength and/or precisely perceive motion at the ankle is needed to maintain a normal step length after perturbation, a parameter which distinguishes between subjects sustaining a fall-related injury and those who did not. © 2014.

  16. Step length after discrete perturbation predicts accidental falls and fall-related injury in elderly people with a range of peripheral neuropathy

    Science.gov (United States)

    Allet, L; Kim, H; Ashton-Miller, JA; De Mott, T; Richardson, JK

    2013-01-01

    Aims Distal symmetric polyneuropathy increases fall risk due to inability to cope with perturbations. We aimed to 1) identify the frontal plane lower limb sensorimotor functions which are necessary for robustness to a discrete, underfoot perturbation during gait; and 2) determine whether changes in the post-perturbed step parameters could distinguish between fallers and non-fallers. Methods Forty-two subjects (16 healthy old and 26 with diabetic PN) participated. Frontal plane lower limb sensorimotor functions were determined using established laboratory-based techniques. The subjects' most extreme alterations in step width or step length in response to a perturbation were measured. In addition, falls and fall-related injuries were prospectively recorded. Results Ankle proprioceptive threshold (APrT; p=.025) and hip abduction rate of torque generation (RTG; p=.041) independently predicted extreme step length after medial perturbation, with precise APrT and greater hip RTG allowing maintenance of step length. Fallers demonstrated greater extreme step length changes after medial perturbation than non-fallers (percent change = 16.41±8.42 vs 11.0±4.95; p=.06). Conclusions The ability to rapidly generate frontal plane hip strength and/or precisely perceive motion at the ankle is needed to maintain a normal step length after perturbation, a parameter which distinguishes between fallers and non-fallers. PMID:24183899

  17. Co-digestion of solid waste: Towards a simple model to predict methane production.

    Science.gov (United States)

    Kouas, Mokhles; Torrijos, Michel; Schmitz, Sabine; Sousbie, Philippe; Sayadi, Sami; Harmand, Jérôme

    2018-04-01

    Modeling methane production is a key issue for solid waste co-digestion. Here, the effect of a step-wise increase in the organic loading rate (OLR) on reactor performance was investigated, and four new models were evaluated to predict methane yields using data acquired in batch mode. Four co-digestion experiments on mixtures of two solid substrates were conducted in semi-continuous mode. Experimental methane yields were always higher than the BMP values of the mixtures calculated from the BMP of each substrate, highlighting the importance of endogenous production (methane produced from auto-degradation of the microbial community and generated solids). The experimental methane production under increasing OLRs corresponded well to the modeled data when using the model with constant endogenous production and kinetics identified at 80% of the total batch time. This model provides a simple and useful tool for technical design consultancies and plant operators to optimize co-digestion and the choice of OLRs. Copyright © 2018 Elsevier Ltd. All rights reserved.
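
The favoured model structure (first-order batch kinetics per substrate plus a constant endogenous term) can be sketched directly. All masses, BMP values, and rate constants below are hypothetical placeholders, not the fitted parameters of the study:

```python
import math

def methane_yield(t, substrates, endogenous_rate):
    """Cumulative methane at time t (days) for a co-digested mixture:
    first-order batch kinetics per substrate plus constant endogenous production."""
    sub = sum(m * bmp * (1.0 - math.exp(-k * t)) for m, bmp, k in substrates)
    return sub + endogenous_rate * t

# Hypothetical mixture: (mass gVS, BMP in NL CH4/gVS, first-order k in 1/d).
mix = [(10.0, 0.35, 0.25), (5.0, 0.45, 0.10)]
y30 = methane_yield(30.0, mix, endogenous_rate=0.02)
print(round(y30, 2))
```

The endogenous term is what lets the predicted yield exceed the simple BMP-weighted sum of the substrates, matching the observation reported in the abstract.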

  18. Modelling noninvasively measured cerebral signals during a hypoxemia challenge: steps towards individualised modelling.

    Directory of Open Access Journals (Sweden)

    Beth Jelfs

    Full Text Available Noninvasive approaches to measuring cerebral circulation and metabolism are crucial to furthering our understanding of brain function. These approaches also have considerable potential for clinical use "at the bedside". However, a highly nontrivial task, and a precondition if such methods are to be used routinely, is the robust physiological interpretation of the data. In this paper, we explore the ability of a previously developed model of brain circulation and metabolism to explain and predict quantitatively the responses of physiological signals. The five signals, all measured noninvasively during hypoxemia in healthy volunteers, include four signals measured using near-infrared spectroscopy (NIRS) along with middle cerebral artery blood flow measured using transcranial Doppler flowmetry. We show that optimising the model using partial data from an individual can increase its predictive power, thus aiding the interpretation of NIRS signals in individuals. At the same time, such optimisation can also help refine model parametrisation and provide confidence intervals on model parameters. Discrepancies between model and data which persist despite model optimisation are used to flag up important questions concerning the underlying physiology, and the reliability and physiological meaning of the signals.

  19. QSPR models for predicting generator-column-derived octanol/water and octanol/air partition coefficients of polychlorinated biphenyls.

    Science.gov (United States)

    Yuan, Jintao; Yu, Shuling; Zhang, Ting; Yuan, Xuejie; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu

    2016-06-01

    Octanol/water (K(OW)) and octanol/air (K(OA)) partition coefficients are two important physicochemical properties of organic substances. In current practice, K(OW) and K(OA) values of some polychlorinated biphenyls (PCBs) are measured using generator column method. Quantitative structure-property relationship (QSPR) models can serve as a valuable alternative method of replacing or reducing experimental steps in the determination of K(OW) and K(OA). In this paper, two different methods, i.e., multiple linear regression based on dragon descriptors and hologram quantitative structure-activity relationship, were used to predict generator-column-derived log K(OW) and log K(OA) values of PCBs. The predictive ability of the developed models was validated using a test set, and the performances of all generated models were compared with those of three previously reported models. All results indicated that the proposed models were robust and satisfactory and can thus be used as alternative models for the rapid assessment of the K(OW) and K(OA) of PCBs. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. An improved algorithm to convert CAD model to MCNP geometry model based on STEP file

    International Nuclear Information System (INIS)

    Zhou, Qingguo; Yang, Jiaming; Wu, Jiong; Tian, Yanshan; Wang, Junqiong; Jiang, Hai; Li, Kuan-Ching

    2015-01-01

    Highlights: • Fully exploits common features of cells, making the processing efficient. • Accurately provides the cell position. • Flexible to add new parameters in the structure. • Applies a novel structure in INP file processing to conveniently evaluate cell location. - Abstract: MCNP (Monte Carlo N-Particle Transport Code) is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport. Its input file, the INP file, has a complicated form and is error-prone when describing geometric models. Because of this, a conversion algorithm that solves the problem by converting a general geometric model to an MCNP model during MCNP-aided modeling is highly needed. In this paper, we revised and incorporated a number of improvements over our previous work (Yang et al., 2013), which was proposed after the STEP file and INP file were analyzed. Results of experiments show that the revised algorithm is more applicable and efficient than the previous work, with optimized extraction of the geometry and topology information of the STEP file, as well as improved production efficiency of the output INP file. This research is promising, and serves as a valuable reference for the majority of researchers involved in MCNP-related research.

  1. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool

  2. Increasing Running Step Rate Reduces Patellofemoral Joint Forces

    Science.gov (United States)

    Lenhart, Rachel L.; Thelen, Darryl G.; Wille, Christa M.; Chumanov, Elizabeth S.; Heiderscheit, Bryan C.

    2013-01-01

    Purpose Increasing step rate has been shown to elicit changes in joint kinematics and kinetics during running, and has been suggested as a possible rehabilitation strategy for runners with patellofemoral pain. The purpose of this study was to determine how altering step rate affects internal muscle forces and patellofemoral joint loads, and then to determine what kinematic and kinetic factors best predict changes in joint loading. Methods We recorded whole body kinematics of 30 healthy adults running on an instrumented treadmill at three step rate conditions (90%, 100%, and 110% of preferred step rate). We then used a 3D lower extremity musculoskeletal model to estimate muscle, patellar tendon, and patellofemoral joint forces throughout the running gait cycles. Additionally, linear regression analysis allowed us to ascertain the relative influence of limb posture and external loads on patellofemoral joint force. Results Increasing step rate to 110% of preferred reduced peak patellofemoral joint force by 14%. Peak muscle forces were also altered as a result of the increased step rate with hip, knee and ankle extensor forces, and hip abductor forces all reduced in mid-stance. Compared to the 90% step rate condition, there was a concomitant increase in peak rectus femoris and hamstring loads during early and late swing, respectively, at higher step rates. Peak stance phase knee flexion decreased with increasing step rate, and was found to be the most important predictor of the reduction in patellofemoral joint loading. Conclusion Increasing step rate is an effective strategy to reduce patellofemoral joint forces and could be effective in modulating biomechanical factors that can contribute to patellofemoral pain. PMID:23917470

  3. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. According to the fact that there are lots of settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximate nonhomogeneous index sequences and has excellent application value in settlement prediction.
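
For orientation, the classic GM(1,1) model, from which the NGM(1,1,k,c) variant is derived, fits the whitenization equation dx⁽¹⁾/dt + a·x⁽¹⁾ = b on the accumulated series and forecasts by differencing the time response. A sketch on an illustrative settlement-like sequence (this is the base model, not the paper's optimized variant):

```python
import math

def gm11(x0, steps):
    """Classic GM(1,1) grey model: estimate a (development coefficient) and
    b (grey input) by least squares on the accumulated series, then forecast."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]                # accumulation (1-AGO)
    z = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]   # background values
    p = sum(v * v for v in z)
    q = -sum(z)
    s = -sum(v * y for v, y in zip(z, x0[1:]))
    t = sum(x0[1:])
    det = p * (n - 1) - q * q
    a = ((n - 1) * s - q * t) / det
    b = (p * t - q * s) / det

    def x1_hat(k):                                          # time response, k = 0, 1, ...
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    return [x1_hat(k + 1) - x1_hat(k) for k in range(n - 1, n - 1 + steps)]

# Illustrative settlement-like sequence (mm) with a near-exponential trend.
obs = [2.0, 2.4, 2.9, 3.5, 4.2]
fc = gm11(obs, 2)
print([round(v, 2) for v in fc])
```

On a near-exponential (homogeneous index) sequence like this one GM(1,1) extrapolates well; the NGM(1,1,k,c) extension targets the nonhomogeneous index trends the abstract describes.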

  4. Model Predictive Control of Power Converters for Robust and Fast Operation of AC Microgrids

    DEFF Research Database (Denmark)

    Dragicevic, Tomislav

    2018-01-01

    This paper proposes the application of a finite control set model predictive control (FCS-MPC) strategy in standalone ac microgrids (MGs). AC MGs are usually built from two or more voltage source converters (VSCs) which can regulate the voltage at the point of common coupling, while sharing the load power at the same time. Those functionalities are conventionally achieved by hierarchical linear control loops. However, they have limited transient response and high sensitivity to parameter variations. This paper aims to mitigate these problems by firstly introducing an improvement of the FCS-MPC strategy for a single VSC based on tracking of the derivative of the voltage reference trajectory. Using only a single step prediction horizon, the proposed strategy exhibits low computational expense but provides steady state performance comparable to PWM, while its transient response and robustness ...
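
A one-step-horizon FCS-MPC iteration of the kind described enumerates the finite set of converter voltage levels, predicts the next state for each, and applies the minimizer of a cost function. A minimal single-phase current-control sketch; the RL load model and all parameter values are illustrative assumptions, not the paper's microgrid model:

```python
def fcs_mpc_step(i_now, i_ref, e_grid, levels, R=0.1, L=2e-3, Ts=50e-6):
    """One FCS-MPC iteration with a single-step prediction horizon: predict
    i[k+1] for every admissible converter voltage and apply the cost minimizer."""
    best_v, best_cost = None, float("inf")
    for v in levels:
        # Forward-Euler discretization of an RL output filter: L di/dt = v - R i - e
        i_pred = i_now + (Ts / L) * (v - R * i_now - e_grid)
        cost = abs(i_ref - i_pred)          # simple current-tracking cost
        if cost < best_cost:
            best_v, best_cost = v, cost
    return best_v

# Admissible phase voltages of a hypothetical two-level VSC with a 400 V DC link.
levels = (-400.0, 0.0, 400.0)
v_apply = fcs_mpc_step(i_now=5.0, i_ref=10.0, e_grid=100.0, levels=levels)
print(v_apply)  # → 400.0 (the current must rise, so the highest level wins)
```

Because the control set is finite, no modulator is needed; the chosen switching state is applied directly for one sampling period and the optimization repeats at the next sample.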

  5. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, depending on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve
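
The perturb-and-compare step of predictive validation can be illustrated with any transmission model. A toy discrete-time SIR sketch (not the paper's individual-based model), in which reducing the transmission rate, e.g. through higher vaccination coverage, delays and lowers the epidemic peak:

```python
def sir_epidemic(beta, gamma, s0=0.999, i0=0.001, days=200):
    """Discrete-time SIR model; returns the daily new-infection curve."""
    s, i = s0, i0
    curve = []
    for _ in range(days):
        new_inf = beta * s * i          # incidence this day
        s -= new_inf
        i += new_inf - gamma * i        # infections minus recoveries
        curve.append(new_inf)
    return curve

base = sir_epidemic(beta=0.4, gamma=0.2)          # "fitted" reference season
perturbed = sir_epidemic(beta=0.32, gamma=0.2)    # e.g. higher vaccination coverage
peak_shift = perturbed.index(max(perturbed)) - base.index(max(base))
print(peak_shift > 0, max(perturbed) < max(base))  # later and lower peak
```

Predictive validation would then compare such perturbed forecasts against the epidemics actually observed after the perturbation, quantifying the deviation rather than just the fit to the training season.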

  6. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  7. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision ... the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical "test-bench" analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually ... and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.

  8. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  9. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  10. Optimization of Two-Step Acid-Catalyzed Hydrolysis of Oil Palm Empty Fruit Bunch for High Sugar Concentration in Hydrolysate

    Directory of Open Access Journals (Sweden)

    Dongxu Zhang

    2014-01-01

    Getting high sugar concentrations in lignocellulosic biomass hydrolysate with reasonable yields of sugars is commercially attractive but very challenging. Two-step acid-catalyzed hydrolysis of oil palm empty fruit bunch (EFB) was conducted to obtain high sugar concentrations in the hydrolysate. The biphasic kinetic model was used to guide the optimization of the first-step dilute acid-catalyzed hydrolysis of EFB. A total sugar concentration of 83.0 g/L with a xylose concentration of 69.5 g/L and a xylose yield of 84.0% was experimentally achieved, in good agreement with the model predictions under optimal conditions (3% H2SO4 and 1.2% H3PO4, w/v, liquid-to-solid ratio 3 mL/g, 130°C, and 36 min). To further increase the total sugar and xylose concentrations in the hydrolysate, a second-step hydrolysis was performed by adding fresh EFB to the hydrolysate at 130°C for 30 min, giving a total sugar concentration of 114.4 g/L with a xylose concentration of 93.5 g/L and a xylose yield of 56.5%. To the best of our knowledge, these total sugar and xylose concentrations are the highest yet reported for acid-catalyzed hydrolysis of lignocellulose.
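
    A biphasic kinetic model of the kind cited above treats the xylan as fast- and slow-hydrolyzing fractions whose released xylose itself degrades. A minimal numerical sketch, assuming hypothetical rate constants and fraction split (not the fitted values from this study):

```python
# Sketch of a biphasic dilute-acid hydrolysis model: two xylan fractions
# release xylose at different rates, while xylose degrades in parallel.
# All parameter values below are illustrative assumptions.

def biphasic_xylose(h_fast0, h_slow0, kf, ks, kd, t_end, dt=0.01):
    """Explicit-Euler integration of:
       dHf/dt = -kf*Hf,  dHs/dt = -ks*Hs,  dX/dt = kf*Hf + ks*Hs - kd*X.
       Returns the xylose concentration X at time t_end."""
    hf, hs, x = h_fast0, h_slow0, 0.0
    for _ in range(int(t_end / dt)):
        dx = (kf * hf + ks * hs - kd * x) * dt   # use current Hf, Hs, X
        hf -= kf * hf * dt
        hs -= ks * hs * dt
        x += dx
    return x

# Hypothetical split: 70 g/L fast and 30 g/L slow potential xylose.
curve = [biphasic_xylose(70.0, 30.0, 0.15, 0.02, 0.01, t) for t in range(1, 121)]
peak = max(curve)   # concentration rises, peaks, then degradation wins
```

    A model of this shape is what makes the reported trade-off plausible: the optimum hydrolysis time balances xylose release against its degradation.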

  11. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model combining a grey model with a Markov model, prediction of the corrosion rate of nuclear power pipelines was studied. The grey model was improved to obtain an optimized unbiased grey model. This new model was used to predict the trend of the corrosion rate, and the Markov model was used to predict the residual errors. To improve prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the combination of the optimized unbiased grey model and the Markov model predicts more accurately, and that the rolling operation method may further improve prediction precision. (authors)
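
    The grey-model half of such a combination is typically a GM(1,1) forecaster: fit an exponential trend to the accumulated series, then difference the forecast back. A minimal sketch with a hypothetical corrosion-rate series (the unbiased optimization and the Markov residual correction described in the record are omitted):

```python
import numpy as np

def gm11_forecast(series, steps=1):
    """GM(1,1): least-squares fit of x0[k] = -a*z[k] + b on the accumulated
    series, then forecast future raw values by differencing."""
    x0 = np.asarray(series, dtype=float)
    x1 = np.cumsum(x0)
    z = 0.5 * (x1[1:] + x1[:-1])                  # background (mean) values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

    def x1_hat(k):                                 # accumulated forecast, 0-based
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    n = len(x0)
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]

# Hypothetical pipeline corrosion-rate history (mm/yr):
rates = [0.42, 0.45, 0.49, 0.52, 0.56]
next_rate = gm11_forecast(rates)[0]
```

    The "rolling operation" in the record amounts to refitting this model on a sliding window as each new observation arrives, so the exponential trend tracks recent behaviour.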

  12. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
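
    The multi-model average itself is trivially small in code. A sketch with two hypothetical linear stand-ins for SCENARIO and HSDA (the real models are far more detailed) and RMSD as the comparison metric:

```python
import math

def scenario_model(t_air, work_w):      # hypothetical stand-in, g/h
    return 4.0 * work_w / 10 + 8.0 * (t_air - 15)

def hsda_model(t_air, work_w):          # hypothetical stand-in, g/h
    return 3.0 * work_w / 10 + 12.0 * (t_air - 15)

def mma(t_air, work_w):
    """Multi-model approach: average of the two single-model predictions."""
    return 0.5 * (scenario_model(t_air, work_w) + hsda_model(t_air, work_w))

def rmsd(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

conds = [(20, 300), (30, 450), (40, 600)]   # (air temp °C, work rate W)
obs = [165.0, 330.0, 510.0]                 # hypothetical observed losses, g/h
preds = {name: [f(t, w) for t, w in conds]
         for name, f in [("SCENARIO", scenario_model),
                         ("HSDA", hsda_model), ("MMA", mma)]}
errors = {name: rmsd(p, obs) for name, p in preds.items()}
```

    Pointwise, the averaged prediction can never err worse than the worse of the two single models, which is the intuition behind the RMSD reductions reported above.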

  13. Fuels planning: science synthesis and integration; environmental consequences fact sheet 12: Water Erosion Prediction Project (WEPP) Fuel Management (FuMe) tool

    Science.gov (United States)

    William Elliot; David Hall

    2005-01-01

    The Water Erosion Prediction Project (WEPP) Fuel Management (FuMe) tool was developed to estimate sediment generated by fuel management activities. WEPP FuMe estimates sediment generated for 12 fuel-related conditions from a single input. This fact sheet identifies the intended users and uses, required inputs, what the model does, and tells the user how to obtain the...

  14. A predictive model for swallowing dysfunction after curative radiotherapy in head and neck cancer

    International Nuclear Information System (INIS)

    Langendijk, Johannes A.; Doornaert, Patricia; Rietveld, Derek H.F.; Verdonck-de Leeuw, Irma M.; Rene Leemans, C.; Slotman, Ben J.

    2009-01-01

    Introduction: Recently, we found that swallowing dysfunction after curative (chemo)radiation ((CH)RT) has a strong negative impact on health-related quality of life (HRQoL), even more than xerostomia. The purpose of this study was to design a predictive model for swallowing dysfunction after curative radiotherapy or chemoradiation. Materials and methods: A prospective study was performed including 529 patients with head and neck squamous cell carcinoma (HNSCC) treated with curative (CH)RT. In all patients, acute and late radiation-induced morbidity (RTOG Acute and Late Morbidity Scoring System) was scored prospectively. To design the model, univariate and multivariate logistic regression analyses were carried out with grade 2 or higher RTOG swallowing dysfunction at 6 months as the primary endpoint (SWALL6months). The model was validated by comparing the predicted and observed complication rates and by testing whether the model also predicted acute dysphagia and late dysphagia at later time points (12, 18 and 24 months). Results: After univariate and multivariate logistic regression analyses, the following factors turned out to be independent prognostic factors for SWALL6months: T3-T4, bilateral neck irradiation, weight loss prior to radiation, oropharyngeal and nasopharyngeal tumours, accelerated radiotherapy and concomitant chemoradiation. By summation of the regression coefficients derived from the multivariate model, the Total Dysphagia Risk Score (TDRS) could be calculated. In the logistic regression model, the TDRS was significantly associated with SWALL6months. The observed incidence of SWALL6months was 5%, 24% and 46% for low-, intermediate- and high-risk patients, respectively. These observed percentages were within the 95% confidence intervals of the predicted values.
    The TDRS risk group classification was also significantly associated with acute dysphagia (p < 0.001 at all time points) and with late swallowing dysfunction at 12, 18 and 24 months (p < 0.001 at all time points).
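
    The TDRS construction (sum the multivariable regression coefficients of the factors a patient presents with, then bin the total into risk groups) can be sketched as follows. The coefficient values and cut-offs are hypothetical placeholders, not the published ones:

```python
# Hypothetical coefficients for each independent prognostic factor
# (illustrative only; not the values fitted in the study).
TDRS_COEFFS = {
    "T3_T4": 0.9,
    "bilateral_neck_irradiation": 1.1,
    "weight_loss_prior_to_RT": 0.7,
    "oro_or_nasopharyngeal_tumour": 0.8,
    "accelerated_radiotherapy": 0.6,
    "concomitant_chemoradiation": 1.0,
}

def tdrs(patient_factors):
    """Total Dysphagia Risk Score: sum of coefficients for the factors present."""
    return sum(TDRS_COEFFS[f] for f in patient_factors)

def risk_group(score, low_cut=1.0, high_cut=2.5):   # hypothetical cut-offs
    """Bin a TDRS value into the low/intermediate/high risk classification."""
    if score < low_cut:
        return "low"
    return "intermediate" if score < high_cut else "high"

score = tdrs(["T3_T4", "concomitant_chemoradiation"])
```

    Scores built this way are monotone in the model's predicted probability, which is why the three groups separate cleanly in observed complication rates.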

  15. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD = 1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2 = 0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model’s predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388
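
    The fit-on-13 / validate-on-5 workflow of this record can be sketched with ordinary least squares over molecular descriptors; the descriptor matrix and yields below are synthetic stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(18, 3))           # 18 catalysts x 3 synthetic descriptors
true_w = np.array([20.0, 10.0, 5.0])          # synthetic structure-activity weights
y = 62 + X @ true_w + rng.normal(0, 0.5, 18)  # synthetic yields (%)

train, test = slice(0, 13), slice(13, 18)     # 13 catalysts to fit, 5 held out
A = np.column_stack([np.ones(13), X[train]])  # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)

# Predict the held-out catalysts and compute validation R^2.
pred = np.column_stack([np.ones(5), X[test]]) @ coef
ss_res = np.sum((y[test] - pred) ** 2)
ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

    The study's model additionally used 3D descriptors and was then extrapolated to four unseen complexes; the hold-out validation step shown here is what justifies trusting such extrapolations.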

  16. Alcator C-Mod predictive modeling

    International Nuclear Information System (INIS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-01-01

    Predictive simulations for the Alcator C-Mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles.

  17. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
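
    The deployment side described here (a service that receives patient observations as FHIR resources and returns a prediction score) reduces to extracting values and applying the trained model. A sketch with simplified Observation shapes and hypothetical logistic-model weights, not the paper's actual OMOP/FHIR services:

```python
import math

# Hypothetical logistic-model weights keyed by LOINC-style observation codes
# (heart rate, systolic blood pressure); values are illustrative only.
WEIGHTS = {"8867-4": 0.02, "8480-6": 0.03}
INTERCEPT = -6.0

def extract_values(observations):
    """Pull numeric values out of simplified FHIR Observation resources."""
    return {o["code"]: o["valueQuantity"]["value"] for o in observations}

def predict(observations):
    """Score one patient: logistic regression over the extracted values."""
    vals = extract_values(observations)
    z = INTERCEPT + sum(w * vals.get(code, 0.0) for code, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

patient = [
    {"resourceType": "Observation", "code": "8867-4",
     "valueQuantity": {"value": 88, "unit": "/min"}},
    {"resourceType": "Observation", "code": "8480-6",
     "valueQuantity": {"value": 142, "unit": "mmHg"}},
]
score = predict(patient)
```

    In the architecture above this function would sit behind an HTTP endpoint that accepts FHIR resources and responds with the score, which is where the reported one-second response time is measured.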

  18. Comparative analysis of single-step and two-step biodiesel production using supercritical methanol on laboratory-scale

    International Nuclear Information System (INIS)

    Micic, Radoslav D.; Tomić, Milan D.; Kiss, Ferenc E.; Martinovic, Ferenc L.; Simikić, Mirko Ð.; Molnar, Tibor T.

    2016-01-01

    Highlights: • Single-step supercritical transesterification compared to the two-step process. • Two-step process: oil hydrolysis and subsequent supercritical methyl esterification. • Experiments were conducted in a laboratory-scale batch reactor. • Higher biodiesel yields in two-step process at milder reaction conditions. • Two-step process has potential to be cost-competitive with the single-step process. - Abstract: Single-step supercritical transesterification and two-step biodiesel production process consisting of oil hydrolysis and subsequent supercritical methyl esterification were studied and compared. For this purpose, comparative experiments were conducted in a laboratory-scale batch reactor and optimal reaction conditions (temperature, pressure, molar ratio and time) were determined. Results indicate that in comparison to a single-step transesterification, methyl esterification (second step of the two-step process) produces higher biodiesel yields (95 wt% vs. 91 wt%) at lower temperatures (270 °C vs. 350 °C), pressures (8 MPa vs. 12 MPa) and methanol to oil molar ratios (1:20 vs. 1:42). This can be explained by the fact that the reaction system consisting of free fatty acid (FFA) and methanol achieves supercritical condition at milder reaction conditions. Furthermore, the dissolved FFA increases the acidity of supercritical methanol and acts as an acid catalyst that increases the reaction rate. There is a direct correlation between FFA content of the product obtained in hydrolysis and biodiesel yields in methyl esterification. Therefore, the reaction parameters of hydrolysis were optimized to yield the highest FFA content at 12 MPa, 250 °C and 1:20 oil to water molar ratio. Results of direct material and energy costs comparison suggest that the process based on the two-step reaction has the potential to be cost-competitive with the process based on single-step supercritical transesterification. Higher biodiesel yields, similar or lower energy

  19. Star-sensor-based predictive Kalman filter for satellite attitude estimation

    Institute of Scientific and Technical Information of China (English)

    林玉荣; 邓正隆

    2002-01-01

    A real-time attitude estimation algorithm, namely the predictive Kalman filter, is presented. This algorithm can accurately estimate the three-axis attitude of a satellite using only star sensor measurements. The implementation of the filter includes two steps: first, predicting the torque modeling error, and then estimating the attitude. Simulation results indicate that the predictive Kalman filter provides robust performance in the presence of both significant errors in the assumed model and in the initial conditions.
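
    The two-step cycle described (predict from the model, then correct with the star-sensor measurement) is the standard Kalman recursion. A scalar sketch with hypothetical noise variances, rather than the full three-axis attitude filter:

```python
def kalman_step(x, P, z, q=1e-4, r=0.01):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: prior state estimate and its variance; z: new measurement.
    q, r: hypothetical process and measurement noise variances."""
    # Predict: constant-state model, variance grows by the process noise.
    x_pred, P_pred = x, P + q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                              # diffuse initial guess
for z in [0.12, 0.10, 0.11, 0.09, 0.10]:     # noisy angle measurements (rad)
    x, P = kalman_step(x, P, z)
```

    The "predictive" variant in this record additionally estimates the torque modeling error before the attitude update, which makes the filter robust to errors in the assumed dynamics.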

  20. Design of a particulate-monitoring network for the Y-12 plant

    International Nuclear Information System (INIS)

    Hougland, E.S.; Oakes, T.W.; Underwood, J.N.

    1982-01-01

    An air quality monitoring network with multiple objectives is being designed for the Y-12 Plant production facilities. The objectives are: Y-12 facility surveillance; monitoring the transport of Y-12 generated airborne effluents towards either the Oak Ridge National Laboratory or the developed region of the City of Oak Ridge; and monitoring population exposure in residential areas close to the Y-12 Plant. A two-step design process was carried out, using the Air Quality Monitor Network Design Model (AQMND) previously used for the Oak Ridge National Laboratory network. In the first step of the design, existing air quality monitor locations, subjectively designated locations, and grid intersections were used as the set of potential monitor sites. The priority sites from the first step (modified to account for terrain and accessibility), together with subjectively designated sites, were used as the potential monitor sites for the second step of the process, which produced the final design recommendations for the monitor network.