WorldWideScience

Sample records for models predicted similar

  1. Semantic Similarity, Predictability, and Models of Sentence Processing

    Science.gov (United States)

    Roland, Douglas; Yun, Hongoak; Koenig, Jean-Pierre; Mauner, Gail

    2012-01-01

    The effects on language comprehension of word predictability and of the semantic similarity shared between a target word and the other words that could have taken its place in a sentence are investigated using data from a reading time study, a sentence completion study, and linear mixed-effects regression modeling. We find that processing is facilitated if…

  2. Patient Similarity in Prediction Models Based on Health Data: A Scoping Review

    Science.gov (United States)

    Sharafoddini, Anis; Dubin, Joel A

    2017-01-01

    Background Physicians and health policy makers are required to make predictions during their decision making in various medical problems. Many advances have been made in predictive modeling toward outcome prediction, but these innovations target an average patient and are insufficiently adjustable for individual patients. One developing idea in this field is individualized predictive analytics based on patient similarity. The goal of this approach is to identify patients who are similar to an index patient and derive insights from the records of similar patients to provide personalized predictions. Objective The aim is to summarize and review published studies describing computer-based approaches for predicting patients' future health status based on health data and patient similarity, identify gaps, and provide a starting point for related future research. Methods The method involved (1) conducting the review by performing automated searches in Scopus, PubMed, and ISI Web of Science, selecting relevant studies by first screening titles and abstracts and then analyzing full texts, and (2) documenting the review by extracting publication details and information on context, predictors, missing data, modeling algorithms, outcomes, and evaluation methods into a matrix table, synthesizing the data, and reporting results. Results After duplicate removal, 1339 articles were screened by title and abstract and 67 were selected for full-text review. In total, 22 articles met the inclusion criteria. Within the included articles, hospitals were the main source of data (n=10). Cardiovascular disease (n=7) and diabetes (n=4) were the dominant patient diseases. Most studies (n=18) used neighborhood-based approaches in devising prediction models. Two studies showed that patient similarity-based modeling outperformed population-based predictive methods. Conclusions Interest in patient similarity-based predictive modeling for diagnosis and prognosis has been growing. In addition to raw/coded health…

  3. The predictive model on the user reaction time using the information similarity

    International Nuclear Information System (INIS)

    Lee, Sung Jin; Heo, Gyun Young; Chang, Soon Heung

    2005-01-01

    Human performance is frequently degraded because people forget. Memory is one of the brain processes that are important when trying to understand how people process information. Although a large number of studies have examined human performance, little is known about the similarity effect on human performance. The purpose of this paper is to propose and validate a quantitative, predictive model of human response time in the user interface based on the concept of similarity. It is not easy, however, to explain human performance with similarity or information amount alone. We are confronted by two difficulties: building a quantitative model of human response time that incorporates similarity, and validating the proposed model experimentally. We built the quantitative model on the basis of Hick's law and the law of practice, and validated it under various experimental conditions by measuring participants' response times in a computer-based display environment. Experimental results reveal that human performance is improved by similarity in the user interface. We believe the proposed model is useful for the user interface design and evaluation phases.

  4. Predictive modeling of human perception subjectivity: feasibility study of mammographic lesion similarity

    Science.gov (United States)

    Xu, Songhua; Hudson, Kathleen; Bradley, Yong; Daley, Brian J.; Frederick-Dyer, Katherine; Tourassi, Georgia

    2012-02-01

    The majority of clinical content-based image retrieval (CBIR) studies disregard human perception subjectivity, aiming to duplicate the consensus expert assessment of visual similarity on example cases. The purpose of our study is twofold: (i) to better discern the extent of human perception subjectivity when assessing the visual similarity of two images with similar semantic content, and (ii) to explore the feasibility of personalized predictive modeling of visual similarity. We conducted a human observer study in which five observers of various expertise were shown ninety-nine triplets of mammographic masses with similar BI-RADS descriptors and were asked to select the two masses with the highest visual relevance. Pairwise agreement ranged between poor and fair among the five observers, as assessed by the kappa statistic. The observers' self-consistency rate was remarkably low, based on repeated questions in which either the orientation or the presentation order of a mass was changed. Various machine learning algorithms were explored to determine whether they could predict each observer's personalized selections using textural features. Many algorithms performed with accuracy that exceeded each observer's self-consistency rate, as determined using a cross-validation scheme. This accuracy was statistically significantly higher than would be expected by chance alone (two-tailed p-values ranged between 0.001 and 0.01 for all five personalized models). The study confirms that human perception subjectivity should be taken into account when developing CBIR-based medical applications.
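
    Pairwise agreement of the kind reported above is typically quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch; the observer choices below are hypothetical, not data from the study:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters' categorical choices on the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement under independence, from each rater's marginal frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical choices by two observers over ten triplets (which pair they picked).
a = ["AB", "AC", "AB", "BC", "AB", "AC", "BC", "AB", "AC", "AB"]
b = ["AB", "AB", "AB", "BC", "AC", "AC", "BC", "AB", "BC", "AB"]
print(round(cohens_kappa(a, b), 3))  # → 0.524
```

    A kappa near 0.5 corresponds to the "fair to moderate" band, consistent with the poor-to-fair agreement the study reports.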

  5. Towards predictive resistance models for agrochemicals by combining chemical and protein similarity via proteochemometric modelling.

    Science.gov (United States)

    van Westen, Gerard J P; Bender, Andreas; Overington, John P

    2014-10-01

    Resistance to pesticides is an increasing problem in agriculture. Despite practices such as phased use and cycling of 'orthogonally resistant' agents, resistance remains a major risk to national and global food security. To combat this problem, there is a need for both new approaches for pesticide design, as well as for novel chemical entities themselves. As summarized in this opinion article, a technique termed 'proteochemometric modelling' (PCM), from the field of chemoinformatics, could aid in the quantification and prediction of resistance that acts via point mutations in the target proteins of an agent. The technique combines information from both the chemical and biological domain to generate bioactivity models across large numbers of ligands as well as protein targets. PCM has previously been validated in prospective, experimental work in the medicinal chemistry area, and it draws on the growing amount of bioactivity information available in the public domain. Here, two potential applications of proteochemometric modelling to agrochemical data are described, based on previously published examples from the medicinal chemistry literature.

  6. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India

    Science.gov (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati

    2017-04-01

    The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. The approach is based on the assumption that training on days with similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. The training dataset was selected by matching the present-day conditions against the archived dataset: the days with the most similar conditions were identified and used to train the model, and the coefficients thus generated were used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. the European Centre for Medium-Range Weather Forecasts (ECMWF), the United Kingdom Meteorological Office (UKMO), the National Centers for Environmental Prediction (NCEP) and the China Meteorological Administration (CMA), were used to develop the SMME forecasts. Forecasts for days 1-5, 6-10 and 11-15 were generated with the new approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. the equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis of the SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all lead times, viz. 1-5, 6-10 and 11-15 days.
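
    The select-by-similarity training step can be sketched roughly as follows. The inverse-error skill weighting and all numbers are illustrative stand-ins for the paper's regression-based coefficient fitting:

```python
import math

def rms_diff(a, b):
    """Root-mean-square difference between two gridded rainfall fields."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def smme_forecast(today_fields, archive, k=3):
    """Similarity-based ensemble: train only on the k archived days whose
    model fields look most like today's, then weight each model by its
    skill on those days. `archive` holds (fields_per_model, observed) pairs."""
    ranked = sorted(archive, key=lambda day: sum(
        rms_diff(f, g) for f, g in zip(day[0], today_fields)))
    train = ranked[:k]
    weights = []
    for m in range(len(today_fields)):
        # Mean absolute error of model m's field mean on the similar days.
        mae = sum(abs(sum(day[0][m]) / len(day[0][m]) - day[1])
                  for day in train) / k
        weights.append(1.0 / (mae + 1e-6))
    total = sum(weights)
    # Skill-weighted combination of today's model field means.
    return sum(w * sum(f) / len(f) for w, f in zip(weights, today_fields)) / total

today = [[5.0, 6.0], [9.0, 10.0]]  # two models' rain fields (mm), invented
archive = [
    ([[5.2, 6.1], [9.1, 9.8]], 5.6),
    ([[1.0, 2.0], [3.0, 4.0]], 1.4),
    ([[5.0, 5.9], [8.8, 10.2]], 5.7),
    ([[0.5, 1.5], [2.5, 3.5]], 1.0),
]
print(round(smme_forecast(today, archive, k=2), 2))  # → 5.65
```

    The point of the design is that the weights come only from days that resemble today, so a model that is biased under the current synoptic pattern is down-weighted even if its overall climatological skill is good.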

  7. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India

    Science.gov (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati

    2018-04-01

    The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. The approach is based on the assumption that training on days with similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. The training dataset was selected by matching the present-day conditions against the archived dataset: the days with the most similar conditions were identified and used to train the model, and the coefficients thus generated were used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. the European Centre for Medium-Range Weather Forecasts (ECMWF), the United Kingdom Meteorological Office (UKMO), the National Centers for Environmental Prediction (NCEP) and the China Meteorological Administration (CMA), were used to develop the SMME forecasts. Forecasts for days 1-5, 6-10 and 11-15 were generated with the new approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. the equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis of the SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all lead times, viz. 1-5, 6-10 and 11-15 days.

  8. A study of the predictive model on the user reaction time using the information amount and similarity

    International Nuclear Information System (INIS)

    Lee, Sungjin; Heo, Gyunyoung; Chang, S.H.

    2004-01-01

    Human operations through a user interface are of two types. One is the single operation performed on a static interface. The other is the sequential operation that achieves a goal by handling several displays through the operator's navigation in a CRT-based console; the sequential operation is similar in meaning to a continuous task. Most operations in recently developed computer applications correspond to the sequential operation, and the single operation can be considered a part of it. In the area of HCI (human-computer interaction) evaluation, the Hick-Hyman law counts as the most powerful theory. The most important factor in the Hick-Hyman equation for choice reaction time is the quantified amount of information conveyed by a statement, stimulus, or event. Generally, we can expect that if there are similarities between a series of interfaces, the human operator is able to use his attention resources effectively; that is, operator performance is improved by similarity. The similarity may affect the allocation of attention resources based on the separate short-term sensory store (STSS) and long-term memory. Theories related to this concept include the task-switching paradigm and the law of practice. However, it is not easy to explain human operator performance with the similarity or the information amount alone, and there are few theories that explain performance through their combination. The objective of this paper is to propose and validate a quantitative, predictive model of user reaction time in CRT-based displays. A further objective is to examine related theories of human cognition and perception, with the Hick-Hyman law and the law of practice as representative theories. (author)
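
    The Hick-Hyman relation underlying such models predicts choice reaction time as a linear function of the stimulus information in bits, RT = a + bH. A minimal sketch; the intercept and slope values here are illustrative, not fitted constants from the paper:

```python
import math

def hick_hyman_rt(probabilities, a=0.2, b=0.15):
    """Predicted choice reaction time (s): RT = a + b*H, where H is the
    information (bits) conveyed by the stimulus set. a and b are
    illustrative intercept/slope values, not fitted study constants."""
    h = sum(p * math.log2(1.0 / p) for p in probabilities if p > 0)
    return a + b * h

# Four equally likely alternatives carry log2(4) = 2 bits of information.
print(round(hick_hyman_rt([0.25] * 4), 3))  # → 0.5
```

    Unequal probabilities lower H, so the model predicts faster responses when some alternatives are far more likely than others, which is the effect the similarity-based refinement builds on.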

  9. Modeling of similar economies

    Directory of Open Access Journals (Sweden)

    Sergey B. Kuznetsov

    2017-06-01

    Full Text Available Objective: to obtain dimensionless criteria – economic indices that characterize the national economy and do not depend on its size. Methods: mathematical modeling, the theory of dimensions, processing of statistical data. Results: based on differential equations describing the national economy with account of economic environment resistance, two dimensionless criteria are obtained which allow comparing economies regardless of their sizes. Using the theory of dimensions, we show that the obtained indices are not accidental, and we demonstrate the use of the dimensionless criteria for analyzing the behavior of certain countries' economies. Scientific novelty: the dimensionless criteria obtained are economic indices which allow comparing economies regardless of their sizes and analyzing dynamic changes in the economies over time. Practical significance: the obtained results can be used for dynamic and comparative analysis of different countries' economies regardless of their sizes.

  10. The similarity principle - on using models correctly

    DEFF Research Database (Denmark)

    Landberg, L.; Mortensen, N.G.; Rathmann, O.

    2003-01-01

    This paper presents some guiding principles for the most accurate use of the WAsP program in particular, but the principle can be applied to the use of any linear model that predicts some quantity at one location based on another. We have felt a need to lay these principles out explicitly, due to the many users and the many uses (and misuses) of the WAsP program. Put simply, the similarity principle states that one should choose a predictor site which – in as many ways as possible – is similar to the predicted site.

  11. Similarity metrics for surgical process models.

    Science.gov (United States)

    Neumuth, Thomas; Loebe, Frank; Jannin, Pierre

    2012-01-01

    The objective of this work is to introduce a set of similarity metrics for comparing surgical process models (SPMs). SPMs are progression models of surgical interventions that support quantitative analyses of surgical activities, supporting systems engineering or process optimization. Five different similarity metrics are presented and proven. These metrics deal with several dimensions of process compliance in surgery, including granularity, content, time, order, and frequency of surgical activities. The metrics were experimentally validated using 20 clinical data sets each for cataract interventions, craniotomy interventions, and supratentorial tumor resections. The clinical data sets were controllably modified in simulations, which were iterated ten times, resulting in a total of 600 simulated data sets. The simulated data sets were subsequently compared to the original data sets to empirically assess the predictive validity of the metrics. We show that the results of the metrics for the surgical process models correlate significantly (p …); the metrics thus meet predictive validity. The clinical use of the metrics was demonstrated by assessing the learning curves of observers during surgical process model acquisition. Measuring similarity between surgical processes is a complex task; however, metrics for computing the similarity between surgical process models are needed for many applications in the field of medical engineering. These metrics are essential whenever two SPMs need to be compared, such as during the evaluation of technical systems, the education of observers, or the determination of surgical strategies. They are key figures that provide a solid base for medical decisions, such as the future validation of sensor systems for use in operating rooms. Copyright © 2011 Elsevier B.V. All rights reserved.
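
    One natural ingredient of an order-sensitive SPM comparison is an edit distance over activity sequences. This sketch uses plain Levenshtein distance as an illustrative stand-in for the paper's metrics; the activity names are hypothetical:

```python
def levenshtein(a, b):
    """Edit distance between two activity sequences (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def order_similarity(a, b):
    """Similarity in [0, 1]: 1 means identical activity order."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

s1 = ["incise", "retract", "dissect", "coagulate", "suture"]
s2 = ["incise", "dissect", "retract", "coagulate", "suture"]
print(round(order_similarity(s1, s2), 2))  # → 0.6
```

    A full SPM metric would additionally weigh content, timing, and frequency dimensions, as the abstract describes; the edit distance covers only the order dimension.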

  12. Modeling Timbre Similarity of Short Music Clips.

    Science.gov (United States)

    Siedenburg, Kai; Müllensiefen, Daniel

    2017-01-01

    There is evidence from a number of recent studies that most listeners are able to extract information related to song identity, emotion, or genre from music excerpts with durations in the range of tenths of seconds. Because of these very short durations, timbre, as a multifaceted auditory attribute, appears to be a plausible candidate for the type of feature that listeners make use of when processing short music excerpts. However, the importance of timbre in listening tasks that involve short excerpts has not yet been demonstrated empirically. Hence, the goal of this study was to develop a method for exploring to what degree similarity judgments of short music clips can be modeled with low-level acoustic features related to timbre. We utilized similarity data from two large samples of participants: Sample I was obtained via an online survey, used 16 clips of 400 ms length, and contained responses from 137,339 participants. Sample II was collected in a lab environment, used 16 clips of 800 ms length, and contained responses from 648 participants. Our model used two sets of audio features, which included commonly used timbre descriptors and the well-known Mel-frequency cepstral coefficients as well as their temporal derivatives. In order to predict pairwise similarities, the resulting distances between clips in terms of their audio features were used as predictor variables in partial least-squares regression. We found that a sparse selection of three to seven features from both descriptor sets – mainly encoding the coarse shape of the spectrum as well as spectrotemporal variability – best predicted similarities across the two sets of sounds. Notably, the inclusion of non-acoustic predictors of musical genre and record release date allowed much better generalization performance and explained up to 50% of the shared variance (R²) between observations and model predictions. Overall, the results of this study empirically demonstrate that both acoustic features related…
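
    The regression step can be illustrated in miniature: compute a per-pair distance on a timbre descriptor and regress rated similarity on it. Ordinary least squares is used here as a dependency-free stand-in for the paper's partial least-squares step, and all numbers are invented:

```python
def fit_linear(xs, ys):
    """Ordinary least squares fit y = w0 + w1*x (a simplified stand-in for
    partial least-squares regression on multiple distance features)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - w1 * mx, w1

# Hypothetical per-pair spectral-centroid distances and rated similarities.
dists = [0.1, 0.4, 0.8, 0.2, 0.9, 0.5]
sims = [0.9, 0.6, 0.2, 0.8, 0.1, 0.5]
w0, w1 = fit_linear(dists, sims)
predict = lambda d: w0 + w1 * d
print(round(predict(0.3), 3))  # → 0.7
```

    The fitted slope is negative, capturing the expected pattern that larger timbral distance means lower rated similarity; PLS generalizes this to many correlated distance features at once.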

  13. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  14. ESG: extended similarity group method for automated protein function prediction.

    Science.gov (United States)

    Chitale, Meghana; Hawkins, Troy; Park, Changsoon; Kihara, Daisuke

    2009-07-15

    The importance of accurate automatic protein function prediction is ever increasing in the face of the large number of newly sequenced genomes and the proteomics data awaiting biological interpretation. Conventional methods have focused on annotation transfer based on high sequence similarity, which relies on the concept of homology; however, many cases have been reported in which the simple transfer of function from the top hits of a homology search causes erroneous annotation. New methods are required to handle sequence similarity in a more robust way, combining signals from strongly and weakly similar proteins to effectively predict function for unknown proteins with high reliability. We present the extended similarity group (ESG) method, which performs iterative sequence database searches and annotates a query sequence with Gene Ontology terms. Each annotation is assigned a probability based on its relative similarity score with the multiple-level neighbors in the protein similarity graph. We depict how the statistical framework of ESG improves prediction accuracy by iteratively taking into account the neighborhood of the query protein in the sequence similarity space. ESG outperforms conventional PSI-BLAST and the protein function prediction (PFP) algorithm. The iterative search is effective in capturing multiple domains in a query protein, enabling accurate prediction of several functions that originate from different domains. The ESG web server is available for automated protein function prediction at http://dragon.bio.purdue.edu/ESG/.
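
    The core weighting idea, assigning each GO term a score proportional to the similarity of the hits that carry it, can be sketched like this. The protein IDs, scores, and single-level normalization are illustrative, not ESG's exact multi-level statistical framework:

```python
def term_probabilities(neighbors):
    """Score each GO term by the summed similarity of hit proteins carrying
    it, normalized by the total similarity so scores behave like
    probabilities. `neighbors` maps hit ID -> (similarity, set of GO terms)."""
    total = sum(score for score, _ in neighbors.values())
    probs = {}
    for score, terms in neighbors.values():
        for term in terms:
            probs[term] = probs.get(term, 0.0) + score / total
    return probs

# Hypothetical search hits for one query sequence.
hits = {
    "P1": (0.9, {"GO:0003824", "GO:0008152"}),
    "P2": (0.6, {"GO:0003824"}),
    "P3": (0.3, {"GO:0005515"}),
}
print(round(term_probabilities(hits)["GO:0003824"], 3))  # → 0.833
```

    ESG extends this by iterating the search from the hits themselves, so terms supported only through weakly similar second-level neighbors still receive (appropriately discounted) probability mass.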

  15. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
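
    A stochastic speed-sample generator of the kind described can be sketched as a first-order autoregressive process, which reproduces a target mean, variance, and hour-to-hour correlation. All parameter values are illustrative, not the Goldstone statistics, and the Gaussian AR(1) form is an assumption (measured wind speeds are usually closer to a Weibull distribution):

```python
import math
import random

def simulate_wind(n_hours, mean=7.0, sd=3.0, rho=0.9, seed=1):
    """AR(1) sketch of hourly wind speeds (m/s): the latent state keeps unit
    variance and lag-1 correlation rho; speeds are scaled, shifted, and
    truncated at zero. Parameters are illustrative, not site statistics."""
    rng = random.Random(seed)
    x = 0.0
    speeds = []
    for _ in range(n_hours):
        # Stationary AR(1) update: Var(x) stays 1, Corr(x_t, x_{t-1}) = rho.
        x = rho * x + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        speeds.append(max(0.0, mean + sd * x))
    return speeds

speeds = simulate_wind(24)  # one day of correlated hourly samples
```

    Setting rho = 0 recovers the uncorrelated interim model described above; a positive rho adds the persistence that the stochastic model is meant to capture.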

  16. Multimodal Similarity Gaussian Process Latent Variable Model.

    Science.gov (United States)

    Song, Guoli; Wang, Shuhui; Huang, Qingming; Tian, Qi

    2017-09-01

    Data from real applications involve multiple modalities representing content with the same semantics from complementary aspects. However, relations among heterogeneous modalities are simply treated as observations to fit by existing work, and the parameterized modality-specific mapping functions lack the flexibility to adapt directly to the content divergence and semantic complexity of multimodal data. In this paper, we build on the Gaussian process latent variable model (GPLVM) to learn non-parametric mapping functions and transform heterogeneous modalities into a shared latent space. We propose the multimodal similarity Gaussian process latent variable model (m-SimGP), which learns the mapping functions between the intra-modal similarities and the latent representation. We further propose the multimodal distance-preserved similarity GPLVM (m-DSimGP), to preserve the intra-modal global similarity structure, and the multimodal regularized similarity GPLVM (m-RSimGP), which encourages similar/dissimilar points to remain similar/dissimilar in the latent space. Finally, we propose m-DRSimGP, which combines the distance preservation of m-DSimGP and the semantic preservation of m-RSimGP to learn the latent representation. The overall objective functions of the four models are solved by simple and scalable gradient descent techniques. They can be applied to various tasks to discover nonlinear correlations and to obtain comparable low-dimensional representations for heterogeneous modalities. On five widely used real-world data sets, our approaches outperform existing models on cross-modal content retrieval and multimodal classification.

  17. Do number of days with low back pain and patterns of episodes of pain have similar outcomes in a biopsychosocial prediction model?

    DEFF Research Database (Denmark)

    Lemeunier, N; Leboeuf-Yde, C; Gagey, O

    2016-01-01

    PURPOSES: We used two different methods to classify low back pain (LBP) in the general population (1) to assess the overlap of individuals within the different subgroups of the two classifications, and (2) to explore whether the associations between LBP and some selected bio-psychosocial factors are similar, regardless of which of the two classifications is used. METHOD: During 1 year, 49- or 50-year-old people from the Danish general population were sent fortnightly automated text messages (SMS-Track) asking them if they had any LBP in the past fortnight. Responses for the whole year were …, together with a questionnaire at baseline 9 years earlier, entered into regression models to investigate their associations with the subgroups of the two classifications of LBP, and the results compared. RESULTS: The percentage of agreement between categories of the two classification systems was above 68 % (Kappa 0…

  18. Modeling of Hysteresis in Piezoelectric Actuator Based on Segment Similarity

    Directory of Open Access Journals (Sweden)

    Rui Xiong

    2015-11-01

    Full Text Available To successfully exploit the full potential of piezoelectric actuators in micro/nano positioning systems, it is essential to model their hysteresis behavior accurately. A novel hysteresis model for piezoelectric actuators is proposed in this paper. First, segment similarity, which describes the similarity relationship between hysteresis curve segments with different turning points, is proposed. Time-scale similarity, which describes the similarity relationship between hysteresis curves with different rates, is used to account for dynamic effects. The proposed model is formulated using these similarities. Finally, experiments are performed on a micro/nanometer movement platform system. The effectiveness of the proposed model is verified by comparison with the Preisach model. The experimental results show that the proposed model is able to precisely predict the hysteresis trajectories of piezoelectric actuators and performs better than the Preisach model.

  19. Prediction of maize hybrid performance using similarity in state and similarity by descent information.

    Science.gov (United States)

    Ferreira, D V; Von Pinho, R G; Balestre, M; Oliveira, R L

    2010-12-14

    We evaluated the efficiency of the best linear unbiased predictor (BLUP) and the influence of the use of similarity in state (SIS) and similarity by descent (SBD) in the prediction of untested maize hybrids. Nine inbred lines of maize were crossed using a randomized complete diallel method. These materials were genotyped with 48 microsatellite (SSR) markers associated with QTL regions for grain yield. Estimates of four coefficients of SIS and four coefficients of SBD were used to construct the additive genetic and dominance matrices, which were later used in combination with the BLUP for predicting genotypic values and specific combining ability (SCA) in unanalyzed hybrids under simulated unbalance. The correlations between the predicted genotypic values and the observed means, depending on the degree of unbalance, ranged from 0.48 to 0.99 for SIS and from 0.40 to 0.99 using information from SBD. The results obtained for the SCA ranged from 0.26 to 0.98 using the SIS and from 0.001 to 0.990 using the SBD information. The predictions using SBD were less biased than the SIS predictions, demonstrating that the predictions obtained by the SBD coefficients were closer to the observed values, but they were less efficient in the ranking of genotypes. Although the SIS showed a bias due to overestimation of relatedness, this type of coefficient may be used where low SBD values are detected in the group of parents, because of its greater efficiency in ranking the candidate hybrids.

  20. Differing Air Traffic Controller Responses to Similar Trajectory Prediction Errors

    Science.gov (United States)

    Mercer, Joey; Hunt-Espinosa, Sarah; Bienert, Nancy; Laraway, Sean

    2016-01-01

    A Human-In-The-Loop simulation was conducted in January of 2013 in the Airspace Operations Laboratory at NASA's Ames Research Center. The simulation airspace included two en route sectors feeding the northwest corner of Atlanta's Terminal Radar Approach Control. The focus of this paper is on how uncertainties in the study's trajectory predictions impacted the controllers' ability to perform their duties. Of particular interest is how the controllers interacted with the delay information displayed in the meter list and data block while managing the arrival flows. Due to wind forecasts with 30-knot over-predictions and 30-knot under-predictions, delay value computations included errors of similar magnitude, albeit in opposite directions. However, when performing their duties in the presence of these errors, did the controllers issue clearances of similar magnitude, albeit in opposite directions?

  1. Similarities and Differences Between Warped Linear Prediction and Laguerre Linear Prediction

    NARCIS (Netherlands)

    Brinker, Albertus C. den; Krishnamoorthi, Harish; Verbitskiy, Evgeny A.

    2011-01-01

    Linear prediction has been successfully applied in many speech and audio processing systems. This paper presents the similarities and differences between two classes of linear prediction schemes, namely, Warped Linear Prediction (WLP) and Laguerre Linear Prediction (LLP). It is shown that both…

  2. A unified frame of predicting side effects of drugs by using linear neighborhood similarity.

    Science.gov (United States)

    Zhang, Wen; Yue, Xiang; Liu, Feng; Chen, Yanlin; Tu, Shikui; Zhang, Xining

    2017-12-14

    Drug side effects are one of the main concerns in drug discovery and have gained wide attention. Investigating drug side effects is of great importance, and computational prediction can help to guide wet experiments. As far as we know, a great number of computational methods have been proposed for side effect prediction. The assumption that similar drugs may induce the same side effects is usually employed for modeling, and how to calculate the drug-drug similarity is critical in side effect prediction. In this paper, we present a novel measure of drug-drug similarity named "linear neighborhood similarity", which is calculated in a drug feature space by exploring the linear neighborhood relationship. We then transfer the similarity from the feature space into the side effect space, and predict drug side effects by propagating known side effect information through a similarity-based graph. Under a unified frame based on the linear neighborhood similarity, we propose the method "LNSM" and its extension "LNSM-SMI" to predict side effects of new drugs, and the method "LNSM-MSE" to predict unobserved side effects of approved drugs. We evaluate the performance of LNSM and LNSM-SMI in predicting side effects of new drugs, and the performance of LNSM-MSE in predicting missing side effects of approved drugs. The results demonstrate that the linear neighborhood similarity can improve the performance of side effect prediction, and that the linear neighborhood similarity-based methods outperform existing side effect prediction methods. More importantly, the proposed methods can predict both side effects of new drugs and unobserved side effects of approved drugs under a unified frame.
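
    The propagation scheme described, spreading known side-effect labels over a drug-drug similarity graph, can be sketched as follows. The inverse-distance neighbor weights are a simplified stand-in for the paper's linear-neighborhood reconstruction weights, and all data are invented:

```python
def neighbor_weights(features, k=2):
    """Row-normalized affinity from each drug to its k nearest feature-space
    neighbors (inverse-distance weights stand in for the paper's
    reconstruction-based linear neighborhood weights)."""
    n = len(features)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(features[i], features[j])) ** 0.5, j)
            for j in range(n) if j != i)[:k]
        inv = [(1.0 / (d + 1e-6), j) for d, j in dists]
        s = sum(w for w, _ in inv)
        for w, j in inv:
            W[i][j] = w / s
    return W

def propagate(W, Y, alpha=0.5, iters=40):
    """Label propagation F <- (1-alpha)*Y + alpha*W*F on the similarity graph."""
    n, m = len(Y), len(Y[0])
    F = [row[:] for row in Y]
    for _ in range(iters):
        F = [[(1 - alpha) * Y[i][t]
              + alpha * sum(W[i][j] * F[j][t] for j in range(n))
              for t in range(m)] for i in range(n)]
    return F

features = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]  # drug feature vectors
Y = [[1.0], [0.0], [0.0], [0.0]]  # drug 0 has the side effect; others unknown
F = propagate(neighbor_weights(features, k=1), Y)
print(round(F[1][0], 3))  # → 0.333
```

    Drug 1 inherits a substantial score because its only near neighbor carries the label, while drugs 2 and 3 stay near zero; the same machinery handles new drugs (empty rows of Y) once their feature-space neighbors are known.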

  3. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation, then rival strategies can still be compared based on repeated bootstraps of the same data. Often, however, the overall performance of rival strategies is similar and it is thus difficult to decide for one model. Here, we investigate the variability of the prediction models that results when the same ... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer ...

  4. Personalized mortality prediction driven by electronic medical data and a patient similarity metric.

    Science.gov (United States)

    Lee, Joon; Maslove, David M; Dubin, Joel A

    2015-01-01

    Clinical outcome prediction normally employs static, one-size-fits-all models that perform well for the average patient but are sub-optimal for individual patients with unique characteristics. In the era of digital healthcare, it is feasible to dynamically personalize decision support by identifying and analyzing similar past patients, in a way that is analogous to personalized product recommendation in e-commerce. Our objectives were: 1) to prove that analyzing only similar patients leads to better outcome prediction performance than analyzing all available patients, and 2) to characterize the trade-off between training data size and the degree of similarity between the training data and the index patient for whom prediction is to be made. We deployed a cosine-similarity-based patient similarity metric (PSM) to an intensive care unit (ICU) database to identify patients that are most similar to each patient and subsequently to custom-build 30-day mortality prediction models. Rich clinical and administrative data from the first day in the ICU from 17,152 adult ICU admissions were analyzed. The results confirmed that using data from only a small subset of most similar patients for training improves predictive performance in comparison with using data from all available patients. The results also showed that when too few similar patients are used for training, predictive performance degrades due to the effects of small sample sizes. Our PSM-based approach outperformed well-known ICU severity of illness scores. Although the improved prediction performance is achieved at the cost of increased computational burden, Big Data technologies can help realize personalized data-driven decision support at the point of care. The present study provides crucial empirical evidence for the promising potential of personalized data-driven decision support systems. With the increasing adoption of electronic medical record (EMR) systems, our novel medical data analytics contributes to
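
    The similarity-based selection step can be sketched as follows, under assumed toy data (the actual PSM operates on rich day-1 ICU variables): rank past patients by cosine similarity to the index patient and estimate 30-day mortality from the k most similar ones only.

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two patient feature vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    def personalized_mortality_risk(patients, outcomes, index_patient, k):
        """Train on only the k patients most similar to the index patient;
        here the 'model' is simply the mortality rate among those neighbors."""
        ranked = sorted(range(len(patients)),
                        key=lambda i: cosine(patients[i], index_patient),
                        reverse=True)
        neighbors = ranked[:k]
        return sum(outcomes[i] for i in neighbors) / k

    # Toy cohort: feature vectors and 30-day mortality outcomes (1 = died).
    cohort = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
    died = [1, 1, 0, 0]
    risk = personalized_mortality_risk(cohort, died, index_patient=[1.0, 0.0], k=2)
    ```

    The trade-off the authors characterize corresponds to the choice of k: too small and the estimate is noisy, too large and dissimilar patients dilute it.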

  5. Personalized Mortality Prediction Driven by Electronic Medical Data and a Patient Similarity Metric

    Science.gov (United States)

    Lee, Joon; Maslove, David M.; Dubin, Joel A.

    2015-01-01

    Background Clinical outcome prediction normally employs static, one-size-fits-all models that perform well for the average patient but are sub-optimal for individual patients with unique characteristics. In the era of digital healthcare, it is feasible to dynamically personalize decision support by identifying and analyzing similar past patients, in a way that is analogous to personalized product recommendation in e-commerce. Our objectives were: 1) to prove that analyzing only similar patients leads to better outcome prediction performance than analyzing all available patients, and 2) to characterize the trade-off between training data size and the degree of similarity between the training data and the index patient for whom prediction is to be made. Methods and Findings We deployed a cosine-similarity-based patient similarity metric (PSM) to an intensive care unit (ICU) database to identify patients that are most similar to each patient and subsequently to custom-build 30-day mortality prediction models. Rich clinical and administrative data from the first day in the ICU from 17,152 adult ICU admissions were analyzed. The results confirmed that using data from only a small subset of most similar patients for training improves predictive performance in comparison with using data from all available patients. The results also showed that when too few similar patients are used for training, predictive performance degrades due to the effects of small sample sizes. Our PSM-based approach outperformed well-known ICU severity of illness scores. Although the improved prediction performance is achieved at the cost of increased computational burden, Big Data technologies can help realize personalized data-driven decision support at the point of care. Conclusions The present study provides crucial empirical evidence for the promising potential of personalized data-driven decision support systems. With the increasing adoption of electronic medical record (EMR) systems, our

  6. Personalized mortality prediction driven by electronic medical data and a patient similarity metric.

    Directory of Open Access Journals (Sweden)

    Joon Lee

    Full Text Available Clinical outcome prediction normally employs static, one-size-fits-all models that perform well for the average patient but are sub-optimal for individual patients with unique characteristics. In the era of digital healthcare, it is feasible to dynamically personalize decision support by identifying and analyzing similar past patients, in a way that is analogous to personalized product recommendation in e-commerce. Our objectives were: (1) to prove that analyzing only similar patients leads to better outcome prediction performance than analyzing all available patients, and (2) to characterize the trade-off between training data size and the degree of similarity between the training data and the index patient for whom prediction is to be made. We deployed a cosine-similarity-based patient similarity metric (PSM) to an intensive care unit (ICU) database to identify patients that are most similar to each patient and subsequently to custom-build 30-day mortality prediction models. Rich clinical and administrative data from the first day in the ICU from 17,152 adult ICU admissions were analyzed. The results confirmed that using data from only a small subset of most similar patients for training improves predictive performance in comparison with using data from all available patients. The results also showed that when too few similar patients are used for training, predictive performance degrades due to the effects of small sample sizes. Our PSM-based approach outperformed well-known ICU severity of illness scores. Although the improved prediction performance is achieved at the cost of increased computational burden, Big Data technologies can help realize personalized data-driven decision support at the point of care. The present study provides crucial empirical evidence for the promising potential of personalized data-driven decision support systems. With the increasing adoption of electronic medical record (EMR) systems, our novel medical data analytics

  7. Computational prediction of drug-drug interactions based on drugs functional similarities.

    Science.gov (United States)

    Ferdousi, Reza; Safdari, Reza; Omidi, Yadollah

    2017-06-01

    Therapeutic activities of drugs are often influenced by co-administration of drugs that may cause inevitable drug-drug interactions (DDIs) and inadvertent side effects. Prediction and identification of DDIs are extremely vital for patient safety and the success of treatment modalities. A number of computational methods have been employed for the prediction of DDIs based on drug structures and/or functions. Here, we report on a computational method for DDI prediction based on the functional similarity of drugs. The model was built on key biological elements including carriers, transporters, enzymes and targets (CTET). The model was applied to 2189 approved drugs. For each drug, all the associated CTETs were collected, and the corresponding binary vectors were constructed to determine the DDIs. Various similarity measures were evaluated to detect DDIs. Of the examined similarity methods, the inner product-based similarity measures (IPSMs) were found to provide improved prediction values. Altogether, 2,394,766 potential drug pair interactions were studied. The model was able to predict over 250,000 unknown potential DDIs. Based on our findings, we propose the current method as a robust, yet simple and fast, universal in silico approach for the identification of DDIs. We envision that this method can be used as a practical technique for the detection of possible DDIs based on the functional similarities of drugs. Copyright © 2017. Published by Elsevier Inc.
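
    The CTET representation lends itself to a very small sketch. The element names below are invented for illustration; the paper's actual measure is one of several inner-product-based similarity measures (IPSMs) computed over 2189 drugs.

    ```python
    def ctet_vector(drug_ctets, universe):
        """Binary vector: 1 if the drug is associated with the given
        carrier/transporter/enzyme/target (CTET) element, 0 otherwise."""
        return [1 if c in drug_ctets else 0 for c in universe]

    def inner_product_similarity(u, v):
        """Raw inner product of two binary vectors: the number of CTET
        elements the two drugs share."""
        return sum(a * b for a, b in zip(u, v))

    # Hypothetical CTET universe; two drugs sharing one enzyme and one target.
    universe = ["CYP3A4", "CYP2D6", "P-gp", "COX-1", "COX-2"]
    drug_a = ctet_vector({"CYP3A4", "COX-1"}, universe)
    drug_b = ctet_vector({"CYP3A4", "COX-1", "COX-2"}, universe)
    shared = inner_product_similarity(drug_a, drug_b)
    ```

    A drug pair whose inner product exceeds some decision threshold would then be flagged as a potential DDI.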

  8. Quasi-Similarity Model of Synthetic Jets

    Czech Academy of Sciences Publication Activity Database

    Tesař, Václav; Kordík, Jozef

    2009-01-01

    Roč. 149, č. 2 (2009), s. 255-265 ISSN 0924-4247 R&D Projects: GA AV ČR IAA200760705; GA ČR GA101/07/1499 Institutional research plan: CEZ:AV0Z20760514 Keywords : jets * synthetic jets * similarity solution Subject RIV: BK - Fluid Dynamics Impact factor: 1.674, year: 2009 http://www.sciencedirect.com

  9. Self-similar two-particle separation model

    DEFF Research Database (Denmark)

    Lüthi, Beat; Berg, Jacob; Ott, Søren

    2007-01-01

    We present a new stochastic model for relative two-particle separation in turbulence. Inspired by material line stretching, we suggest that a similar process also occurs beyond the viscous range, with time scaling according to the longitudinal second-order structure function S2(r), e.g., in the inertial range as epsilon^(-1/3) r^(2/3). Particle separation is modeled as a Gaussian process without invoking information of Eulerian acceleration statistics or of precise shapes of Eulerian velocity distribution functions. The time scale is a function of S2(r) and thus of the Lagrangian evolving separation. The model predictions agree with numerical and experimental results for various initial particle separations. We present model results for fixed time and fixed scale statistics. We find that for the Richardson-Obukhov law, i.e., &lt;r^2&gt; = g epsilon t^3, to hold and to also be observed in experiments, high Reynolds ...

  10. Towards a chromatographic similarity index to establish localised quantitative structure-retention relationships for retention prediction. II. Use of Tanimoto similarity index in ion chromatography.

    Science.gov (United States)

    Park, Soo Hyun; Talebi, Mohammad; Amos, Ruth I J; Tyteca, Eva; Haddad, Paul R; Szucs, Roman; Pohl, Christopher A; Dolan, John W

    2017-11-10

    Quantitative Structure-Retention Relationships (QSRR) are used to predict retention times of compounds based only on their chemical structures encoded by molecular descriptors. The main concern in QSRR modelling is to build models with high predictive power, allowing reliable retention prediction for unknown compounds across the chromatographic space. With the aim of enhancing the prediction power of the models, in this work, our previously proposed QSRR modelling approach called "federation of local models" is extended to ion chromatography to predict retention times of unknown ions, where a local model for each target ion (unknown) is created using only structurally similar ions from the dataset. A Tanimoto similarity (TS) score was utilised as a measure of structural similarity, and training sets were developed by including ions that were similar to the target ion, as defined by a threshold value. The prediction of retention parameters (a- and b-values) in the linear solvent strength (LSS) model in ion chromatography, log k = a - b log[eluent], allows the prediction of retention times under all eluent concentrations. The QSRR models for a- and b-values were developed by a genetic algorithm-partial least squares method using the retention data of inorganic and small organic anions and larger organic cations (molecular mass up to 507) on four Thermo Fisher Scientific columns (AS20, AS19, AS11HC and CS17). The corresponding predicted retention times were calculated by fitting the predicted a- and b-values of the models into the LSS model equation. The predicted retention times were also plotted against the experimental values to evaluate the goodness of fit and the predictive power of the models. The application of a TS threshold of 0.6 was found to successfully produce predictive and reliable QSRR models (Q^2(F2) > 0.8 and Mean Absolute Error < 0.1), and hence accurate retention time predictions with an average Mean Absolute Error of 0.2 min. Crown Copyright
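
    The two ingredients of this approach, Tanimoto filtering of the training set and the LSS retention equation, can be sketched as follows. Fingerprints are represented as sets of on-bits; the library entries, threshold, and the dead time t0 are illustrative assumptions, not values from the paper.

    ```python
    import math

    def tanimoto(fp1, fp2):
        """Tanimoto similarity of two binary fingerprints given as sets of on-bits."""
        union = len(fp1 | fp2)
        return len(fp1 & fp2) / union if union else 0.0

    def local_training_set(target_fp, library, threshold=0.6):
        """Ions with TS >= threshold to the target form the local model's training set."""
        return [name for name, fp in library.items()
                if tanimoto(fp, target_fp) >= threshold]

    def retention_time(a, b, eluent_conc, t0=1.0):
        """LSS model: log k = a - b*log10[eluent]; retention time t_R = t0*(1 + k)."""
        log_k = a - b * math.log10(eluent_conc)
        return t0 * (1 + 10 ** log_k)

    # Hypothetical fingerprint library for three ions.
    library = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {7, 8}}
    train = local_training_set({1, 2, 3}, library)  # only sufficiently similar ions
    ```

    With predicted a- and b-values in hand, `retention_time` gives t_R at any eluent concentration, which is what allows one local model to cover all gradient conditions.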

  11. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    refining formal, inductive predictive models is the quality of the archaeological and environmental data. To build models efficiently, relevant ... geomorphology, and historic information. Lessons Learned: The original model was focused on the identification of prehistoric resources. This ... system but uses predictive modeling informally. For example, there is no probability for buried archaeological deposits on the Burton Mesa, but there is

  12. A comparative study of SMILES-based compound similarity functions for drug-target interaction prediction.

    Science.gov (United States)

    Öztürk, Hakime; Ozkirimli, Elif; Özgür, Arzucan

    2016-03-18

    Molecular structures can be represented as strings of special characters using SMILES. Since each molecule is represented as a string, the similarity between compounds can be computed using SMILES-based string similarity functions. Most previous studies on drug-target interaction prediction use 2D-based compound similarity kernels such as SIMCOMP. To the best of our knowledge, using SMILES-based similarity functions, which are computationally more efficient than the 2D-based kernels, has not been investigated for this task before. In this study, we adapt and evaluate various SMILES-based similarity methods for drug-target interaction prediction. In addition, inspired by the vector space model of Information Retrieval we propose cosine similarity based SMILES kernels that make use of the Term Frequency (TF) and Term Frequency-Inverse Document Frequency (TF-IDF) weighting approaches. We also investigate generating composite kernels by combining our best SMILES-based similarity functions with the SIMCOMP kernel. With this study, we provided a comparison of 13 different ligand similarity functions, each of which utilizes the SMILES string of molecule representation. Additionally, TF and TF-IDF based cosine similarity kernels are proposed. The more efficient SMILES-based similarity functions performed similarly to the more complex 2D-based SIMCOMP kernel in terms of AUC-ROC scores. The TF-IDF based cosine similarity obtained a better AUC-PR score than the SIMCOMP kernel on the GPCR benchmark data set. The composite kernel of TF-IDF based cosine similarity and SIMCOMP achieved the best AUC-PR scores for all data sets.
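
    A TF-IDF cosine kernel over SMILES strings can be sketched with character n-grams. The n-gram length, corpus, and molecules below are illustrative choices, not the paper's exact configuration.

    ```python
    import math
    from collections import Counter

    def ngrams(smiles, n=3):
        """Character n-gram counts of a SMILES string (term frequencies)."""
        return Counter(smiles[i:i + n] for i in range(len(smiles) - n + 1))

    def tfidf_cosine(s1, s2, corpus, n=3):
        """Cosine similarity of TF-IDF-weighted n-gram profiles of two SMILES strings."""
        docs = [ngrams(s, n) for s in corpus]

        def idf(g):
            df = sum(1 for d in docs if g in d)
            return math.log(len(docs) / (1 + df)) + 1.0

        def vec(s):
            return {g: c * idf(g) for g, c in ngrams(s, n).items()}

        v1, v2 = vec(s1), vec(s2)
        dot = sum(w * v2.get(g, 0.0) for g, w in v1.items())
        n1 = math.sqrt(sum(w * w for w in v1.values()))
        n2 = math.sqrt(sum(w * w for w in v2.values()))
        return dot / (n1 * n2) if n1 and n2 else 0.0

    # Tiny illustrative corpus: aspirin, ethanol, lidocaine-like fragment.
    corpus = ["CC(=O)OC1=CC=CC=C1C(=O)O", "CCO", "CCN(CC)CC"]
    self_sim = tfidf_cosine(corpus[0], corpus[0], corpus)
    cross_sim = tfidf_cosine(corpus[0], corpus[1], corpus)
    ```

    Rare n-grams get up-weighted by the IDF term, which is what distinguishes the TF-IDF kernel from the plain TF variant also evaluated in the paper.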

  13. The continuous similarity model of bulk soil-water evaporation

    Science.gov (United States)

    Clapp, R. B.

    1983-01-01

    The continuous similarity model of evaporation is described. In it, evaporation is conceptualized as a two stage process. For an initially moist soil, evaporation is first climate limited, but later it becomes soil limited. During the latter stage, the evaporation rate is termed evaporability, and mathematically it is inversely proportional to the evaporation deficit. A functional approximation of the moisture distribution within the soil column is also included in the model. The model was tested using data from four experiments conducted near Phoenix, Arizona; and there was excellent agreement between the simulated and observed evaporation. The model also predicted the time of transition to the soil limited stage reasonably well. For one of the experiments, a third stage of evaporation, when vapor diffusion predominates, was observed. The occurrence of this stage was related to the decrease in moisture at the surface of the soil. The continuous similarity model does not account for vapor flow. The results show that climate, through the potential evaporation rate, has a strong influence on the time of transition to the soil limited stage. After this transition, however, bulk evaporation is independent of climate until the effects of vapor flow within the soil predominate.

  14. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  15. Does implied community size predict likeability of a similar stranger?

    Science.gov (United States)

    Launay, Jacques; Dunbar, Robin I M

    2015-01-01

    Homophily, the tendency for people to cluster with similar others, has primarily been studied in terms of proximal, psychological causes, such as a tendency to have positive associations with people who share traits with us. Here we investigate whether homophily could be correlated with perceived group membership, given that sharing traits with other people might signify membership of a specific community. In order to investigate this, we tested whether the amount of homophily that occurs between strangers is dependent on the number of people they believe share the common trait (i.e. the size of group that the trait identifies). In two experiments, we show that more exclusive (smaller) groups evoke more positive ratings of the likeability of a stranger. When groups appear to be too inclusive (i.e. large) homophily no longer occurs, suggesting that it is not only positive associations with a trait that cause homophily, but a sense of the exclusiveness of a group is also important. These results suggest that group membership based on a variety of traits can encourage cohesion between people from diverse backgrounds, and may be a useful tool in overcoming differences between groups.

  16. Prediction of the human response time with the similarity and quantity of information

    International Nuclear Information System (INIS)

    Lee, Sungjin; Heo, Gyunyoung; Chang, Soon Heung

    2006-01-01

    Memory is one of the brain processes that are important when trying to understand how people process information. Although a large number of studies have been made on human performance, little is known about the similarity effect in human performance. The purpose of this paper is to propose and validate a quantitative and predictive model of human response time in the user interface, based on the concept of similarity. However, it is not easy to explain human performance with only similarity or the amount of information. We are confronted by two difficulties: making a quantitative model of human response time based on similarity, and validating the proposed model by experimental work. We made the quantitative model based on Hick's law and the law of practice. In addition, we validated the model under various experimental conditions by measuring participants' response time in the environment of a computer-based display. Experimental results reveal that human performance is improved by the user interface's similarity. We think that the proposed model is useful for the user interface design and evaluation phases.
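
    The two classical ingredients the model builds on can be written down directly. The constants below are placeholders rather than fitted values from the paper's experiments, and the paper's own contribution, the similarity term, is omitted here.

    ```python
    import math

    def hick_response_time(n_choices, a=0.2, b=0.15):
        """Hick's law: mean response time grows with the log of the number of
        equally likely alternatives. a and b (seconds) are illustrative constants."""
        return a + b * math.log2(n_choices + 1)

    def practiced_time(base_time, n_trials, alpha=0.4):
        """Power law of practice: response time falls as a power of the trial count."""
        return base_time * n_trials ** (-alpha)
    ```

    A similarity-aware extension would, for example, shrink the effective number of alternatives when interface elements are similar to previously seen ones.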

  17. Vere-Jones' self-similar branching model

    International Nuclear Information System (INIS)

    Saichev, A.; Sornette, D.

    2005-01-01

    Motivated by its potential application to earthquake statistics as well as by its intrinsic interest in the theory of branching processes, we study the exactly self-similar branching process introduced recently by Vere-Jones. This model extends the ETAS class of conditional self-excited branching point-processes of triggered seismicity by removing the problematic need for a minimum (as well as maximum) earthquake size. To make the theory convergent without the need for the usual ultraviolet and infrared cutoffs, the distribution of magnitudes m' of first-generation daughters of a mother of magnitude m has two branches: m' < m with exponent β-d and m' > m with exponent β+d, where β and d are two positive parameters. We investigate the condition and nature of the subcritical, critical, and supercritical regimes in this and in an extended version interpolating smoothly between several models. We predict that the distribution of magnitudes of events triggered by a mother of magnitude m over all generations also has two branches, m' < m with exponent β-h and m' > m with exponent β+h, with h = d√(1-s), where s is the fraction of triggered events. This corresponds to a renormalization of the exponent d into h by the hierarchy of successive generations of triggered events. For a significant part of the parameter space, the distribution of magnitudes over a full catalog summed over an average steady flow of spontaneous sources (immigrants) reproduces the distribution of the spontaneous sources with a single branch and is blind to the exponents β, d of the distribution of triggered events. Since the distribution of earthquake magnitudes is usually obtained with catalogs including many sequences, we conclude that the two branches of the distribution of aftershocks are not directly observable and the model is compatible with real seismic catalogs. In summary, the exactly self-similar Vere-Jones model provides an attractive new approach to modeling triggered seismicity, which alleviates delicate questions on the role of

  18. K-Line Patterns’ Predictive Power Analysis Using the Methods of Similarity Match and Clustering

    Directory of Open Access Journals (Sweden)

    Lv Tao

    2017-01-01

    Full Text Available Stock price prediction based on K-line patterns is the essence of candlestick technical analysis. However, there are some disputes on whether the K-line patterns have predictive power in academia. To help resolve the debate, this paper uses the data mining methods of pattern recognition, pattern clustering, and pattern knowledge mining to research the predictive power of K-line patterns. The similarity match model and nearest neighbor-clustering algorithm are proposed for solving the problem of similarity match and clustering of K-line series, respectively. The experiment includes testing the predictive power of the Three Inside Up pattern and Three Inside Down pattern with the testing dataset of the K-line series data of Shanghai 180 index component stocks over the latest 10 years. Experimental results show that (1) the predictive power of a pattern varies a great deal for different shapes and (2) each of the existing K-line patterns requires further classification based on the shape feature for improving the prediction performance.

  19. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  20. A canopy-type similarity model for wind farm optimization

    Science.gov (United States)

    Markfort, Corey D.; Zhang, Wei; Porté-Agel, Fernando

    2013-04-01

    The atmospheric boundary layer (ABL) flow through and over wind farms has been found to be similar to canopy-type flows, with characteristic flow development and shear penetration length scales (Markfort et al., 2012). Wind farms capture momentum from the ABL both at the leading edge and from above. We examine this further with an analytical canopy-type model. Within the flow development region, momentum is advected into the wind farm and wake turbulence draws excess momentum in from between turbines. This spatial heterogeneity of momentum within the wind farm is characterized by large dispersive momentum fluxes. Once the flow within the farm is developed, the area-averaged velocity profile exhibits a characteristic inflection point near the top of the wind farm, similar to that of canopy-type flows. The inflected velocity profile is associated with the presence of a dominant characteristic turbulence scale, which may be responsible for a significant portion of the vertical momentum flux. Prediction of this scale is useful for determining the amount of available power for harvesting. The new model is tested with results from wind tunnel experiments, which were conducted to characterize the turbulent flow in and above model wind farms in aligned and staggered configurations. The model is useful for representing wind farms in regional scale models, for the optimization of wind farms considering wind turbine spacing and layout configuration, and for assessing the impacts of upwind wind farms on nearby wind resources. Markfort CD, W Zhang and F Porté-Agel. 2012. Turbulent flow and scalar transport through and over aligned and staggered wind farms. Journal of Turbulence. 13(1) N33: 1-36. doi:10.1080/14685248.2012.709635.

  1. Perceptual similarity of visual patterns predicts dynamic neural activation patterns measured with MEG.

    Science.gov (United States)

    Wardle, Susan G; Kriegeskorte, Nikolaus; Grootswagers, Tijl; Khaligh-Razavi, Seyed-Mahdi; Carlson, Thomas A

    2016-05-15

    Perceptual similarity is a cognitive judgment that represents the end-stage of a complex cascade of hierarchical processing throughout visual cortex. Previous studies have shown a correspondence between the similarity of coarse-scale fMRI activation patterns and the perceived similarity of visual stimuli, suggesting that visual objects that appear similar also share similar underlying patterns of neural activation. Here we explore the temporal relationship between the human brain's time-varying representation of visual patterns and behavioral judgments of perceptual similarity. The visual stimuli were abstract patterns constructed from identical perceptual units (oriented Gabor patches) so that each pattern had a unique global form or perceptual 'Gestalt'. The visual stimuli were decodable from evoked neural activation patterns measured with magnetoencephalography (MEG); however, stimuli differed in the similarity of their neural representation as estimated by differences in decodability. Early after stimulus onset (from 50 ms), a model based on retinotopic organization predicted the representational similarity of the visual stimuli. Following the peak correlation between the retinotopic model and neural data at 80 ms, the neural representations quickly evolved so that retinotopy no longer provided a sufficient account of the brain's time-varying representation of the stimuli. Overall the strongest predictor of the brain's representation was a model based on human judgments of perceptual similarity, which reached the limits of the maximum correlation with the neural data defined by the 'noise ceiling'. Our results show that large-scale brain activation patterns contain a neural signature for the perceptual Gestalt of composite visual features, and demonstrate a strong correspondence between perception and complex patterns of brain activity. Copyright © 2016 Elsevier Inc. All rights reserved.
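
    The model-versus-brain comparison follows representational similarity analysis logic: vectorise the upper triangle of each representational dissimilarity matrix (RDM) and correlate. A minimal sketch using a Pearson correlation; the study's exact correlation measure and noise-ceiling estimation are not reproduced here, and the tiny RDMs are invented.

    ```python
    import math

    def pearson(x, y):
        """Pearson correlation of two equal-length sequences."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    def rsa_correlation(model_rdm, neural_rdm):
        """Correlate the upper triangles of a model RDM and a neural RDM."""
        n = len(model_rdm)
        m = [model_rdm[i][j] for i in range(n) for j in range(i + 1, n)]
        d = [neural_rdm[i][j] for i in range(n) for j in range(i + 1, n)]
        return pearson(m, d)

    # Toy 3x3 RDMs: the neural dissimilarities are a scaled copy of the model's.
    model_rdm = [[0, 1, 2], [1, 0, 3], [2, 3, 0]]
    neural_rdm = [[0, 2, 4], [2, 0, 6], [4, 6, 0]]
    r = rsa_correlation(model_rdm, neural_rdm)
    ```

    Repeating this correlation at each MEG time point yields the time course over which the retinotopic and perceptual models trade places as the best predictor.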

  2. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Danish utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models.

  3. MAC/FAC: A Model of Similarity-Based Retrieval.

    Science.gov (United States)

    Forbus, Kenneth D.; And Others

    1995-01-01

    Presents MAC/FAC, a model of similarity-based retrieval that attempts to capture psychological phenomena; discusses its limitations and extensions, its relationship with other retrieval models, and its placement in the context of other work on the nature of similarity. Examines the utility of the model through psychological experiments and…

  4. Molecular similarity-based predictions of the Tox21 screening outcome

    Directory of Open Access Journals (Sweden)

    Malgorzata Natalia Drwal

    2015-07-01

    Full Text Available To assess the toxicity of new chemicals and drugs, regulatory agencies require in vivo testing for many toxic endpoints, resulting in millions of animal experiments conducted each year. However, following the Replace, Reduce, Refine (3R) principle, the development and optimization of alternative methods, in particular in silico methods, has been put into focus in recent years. It is generally acknowledged that the more complex a toxic endpoint, the more difficult it is to model. Therefore, computational toxicology is shifting from modelling general and complex endpoints to the investigation and modelling of pathways of toxicity and the underlying molecular effects. The U.S. Toxicology in the 21st Century (Tox21) initiative has screened a large library of compounds, including approximately 10K environmental chemicals and drugs, for different mechanisms responsible for eliciting toxic effects, and made the results publicly available. Through the Tox21 Data Challenge, the consortium has established a platform for computational toxicologists to develop and validate their predictive models. Here, we present a fast and successful method for the prediction of different outcomes of the nuclear receptor and stress response pathway screening from the Tox21 Data Challenge 2014. The method is based on the combination of molecular similarity calculations and a naïve Bayes machine learning algorithm and has been implemented as a KNIME pipeline. Molecules are represented as binary vectors consisting of a concatenation of common two-dimensional molecular fingerprint types with topological compound properties. The prediction method has been optimized individually for each modelled target and evaluated in a cross-validation as well as with the independent Tox21 validation set. Our results show that the method can achieve good prediction accuracies and rank among the top algorithms submitted to the prediction challenge, indicating its broad applicability in
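
    The fingerprint-plus-naïve-Bayes combination can be sketched as a Bernoulli naïve Bayes classifier over binary feature vectors. The toy fingerprints and activity labels below are invented; the real pipeline concatenates several fingerprint types with topological properties and tunes each target separately.

    ```python
    import math

    def train_bernoulli_nb(X, y):
        """Fit a Bernoulli naive Bayes on binary fingerprints with Laplace smoothing."""
        classes = sorted(set(y))
        n_feat = len(X[0])
        model = {}
        for c in classes:
            rows = [x for x, label in zip(X, y) if label == c]
            log_prior = math.log(len(rows) / len(X))
            # Smoothed per-bit probability of a feature being on in class c.
            probs = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                     for j in range(n_feat)]
            model[c] = (log_prior, probs)
        return model

    def predict(model, x):
        """Return the class with the highest posterior log-probability."""
        best, best_lp = None, -math.inf
        for c, (log_prior, probs) in model.items():
            lp = log_prior + sum(math.log(p if xi else 1 - p)
                                 for xi, p in zip(x, probs))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

    # Toy data: active (1) compounds tend to set the first fingerprint bit.
    X = [[1, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 1]]
    y = [1, 1, 0, 0]
    model = train_bernoulli_nb(X, y)
    ```

    Laplace smoothing keeps every per-bit probability strictly between 0 and 1, so unseen bit patterns never produce log(0).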

  5. Similarity-based prediction for Anatomical Therapeutic Chemical classification of drugs by integrating multiple data sources.

    Science.gov (United States)

    Liu, Zhongyang; Guo, Feifei; Gu, Jiangyong; Wang, Yong; Li, Yang; Wang, Dan; Lu, Liang; Li, Dong; He, Fuchu

    2015-06-01

    The Anatomical Therapeutic Chemical (ATC) classification system, applied in almost all drug utilization studies, is currently the most widely recognized classification system for drugs. At present, new drug entries are added to the system only on users' requests, which leads to seriously incomplete drug coverage, and bioinformatics prediction is helpful during this process. Here we propose a novel prediction model of drug-ATC code associations, using logistic regression to integrate multiple heterogeneous data sources including chemical structures, target proteins, gene expression, side effects and chemical-chemical associations. Assessed by cross-validation and independent test sets, the model performs well not only on ATC codes of unclassified drugs but also on new ATC codes of classified drugs, and its efficacy exceeds that of previous methods. Further, to facilitate its use, the model has been developed into a user-friendly web service, SPACE (Similarity-based Predictor of ATC CodE), which for each submitted compound gives candidate ATC codes (ranked by decreasing probability_score predicted by the model) together with the corresponding supporting evidence. This work not only contributes to understanding drugs' therapeutic, pharmacological and chemical properties, but also provides clues for drug repositioning and side-effect discovery. In addition, the construction of the prediction model provides a general framework for similarity-based data integration that is suitable for other drug-related studies such as target and side-effect prediction. The web service SPACE is available at http://www.bprc.ac.cn/space.
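
    The scoring step of a SPACE-like predictor can be sketched as a logistic combination of per-source similarity scores. The weights, bias, similarity values and candidate codes below are all hypothetical placeholders, not the published model's parameters.

```python
import math

def probability_score(similarities, weights, bias=0.0):
    """Logistic-regression combination of per-source similarity scores
    (e.g. chemical structure, targets, expression, side effects)."""
    z = bias + sum(w * s for w, s in zip(weights, similarities))
    return 1.0 / (1.0 + math.exp(-z))

def rank_atc_codes(candidates, weights, bias=0.0):
    """candidates: {atc_code: [sim_source1, sim_source2, ...]}.
    Returns codes sorted by decreasing predicted probability_score."""
    scored = {c: probability_score(s, weights, bias) for c, s in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

    A compound strongly similar to known members of one ATC class across several data sources would then surface that class at the top of the ranked candidate list.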

  6. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from simple one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions, to complex multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  7. Perceptual assimilation of French and German vowels by American English monolinguals: Acoustic similarity does not predict perceptual similarity

    Science.gov (United States)

    Strange, Winifred; Levy, Erika; Lehnholf, Robert

    2004-05-01

    Previous research in our laboratory has demonstrated that the perceived similarity of vowels across languages is not always predictable from the closeness of their target formant values in F1/F2/F3 space. In this study, perceptual similarity was established using a task in which 11 American English (AE) monolinguals were presented with multiple tokens of 9 French vowels and 14 North German vowels (in separate blocks) produced in citation-form /hVb(a)/ (bi)syllables by native speakers. They selected 1 of 11 AE vowel responses to which each non-native vowel token was most similar, and rated its goodness on a 9-point Likert scale. Of special interest was the perceptual assimilation of front rounded French [y, oe] and German [y, Y, ø, oe] vowels. Acoustically, all six French and German vowels are more similar to front unrounded AE vowels. However, all six vowels were perceived to be more similar to back rounded AE vowels (range across vowels = 55% to 100%), although as relatively poor exemplars. There were differences across languages in how the same vowel was assimilated (e.g., French /y/ assimilated to front AE vowels 13%, German /y/ 0%; French [oe] 3%, German [oe] 45%). There were also large individual differences in listeners' assimilation patterns. [Work supported by NIDCD.]

  8. Applicability of the theory of thermodynamic similarity to predict the enthalpies of vaporization of aliphatic aldehydes

    Science.gov (United States)

    Esina, Z. N.; Korchuganova, M. R.

    2015-06-01

    The theory of thermodynamic similarity is used to predict the enthalpies of vaporization of aliphatic aldehydes. The predicted data allow us to calculate the phase diagrams of liquid-vapor equilibrium in a binary water-aliphatic aldehyde system.

  9. QSAR and docking studies of anthraquinone derivatives by similarity cluster prediction.

    Science.gov (United States)

    Harsa, Alexandra M; Harsa, Teodora E; Diudea, Mircea V

    2016-01-01

    Forty anthraquinone derivatives were downloaded from the PubChem database and investigated in a quantitative structure-activity relationship (QSAR) study. The models describing log P and LD50 of this set were built on the hypermolecule scheme that mimics the investigated receptor space; the models were validated by the leave-one-out procedure, on an external test set, and by a new version of prediction using similarity clusters. A molecular docking approach using the Lamarckian Genetic Algorithm was applied to this class of anthraquinones with respect to the 3Q3B receptor. The best-scored molecules in the docking assay were used as leaders in the similarity clustering procedure. It is demonstrated that the LD50 data of this set of anthraquinones are related to the binding energies of the anthraquinone ligands to the 3Q3B receptor.
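
    The leave-one-out validation loop mentioned above has a simple generic shape: refit on n-1 compounds and predict the held-out one. The 1-nearest-neighbour predictor and one-descriptor toy data here stand in for the paper's actual QSAR models.

```python
def knn1_predict(X_train, y_train, x):
    """Predict the activity of the nearest training compound (1-NN)."""
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in X_train]
    return y_train[dists.index(min(dists))]

def loo_predictions(X, y, predict_fn):
    """Leave-one-out: for each compound, train on the rest, predict it."""
    return [predict_fn(X[:i] + X[i + 1:], y[:i] + y[i + 1:], X[i])
            for i in range(len(X))]
```

    Comparing these held-out predictions against the observed log P or LD50 values gives the cross-validated statistics used to judge a QSAR model.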

  10. Prediction of drug-target interactions for drug repositioning only based on genomic expression similarity.

    Directory of Open Access Journals (Sweden)

    Kejian Wang

    Full Text Available Small drug molecules usually bind to multiple protein targets or even unintended off-targets. Such drug promiscuity has often led to unwanted or unexplained drug reactions, resulting in side effects or drug repositioning opportunities. Identifying potential drug-target interactions (DTI) is therefore an important issue in pharmacology. However, DTI discovery by experiment remains a challenging task, due to the high expense of time and resources. Many computational methods have therefore been developed to predict DTI from high-throughput biological and clinical data. Here, we demonstrate that on-target and off-target effects can be characterized by drug-induced in vitro genomic expression changes, e.g. the data in the Connectivity Map (CMap). Thus, unknown ligands of a certain target can be found among the compounds showing high gene-expression similarity to the known ligands. To clarify the detailed practice of CMap-based DTI prediction, we objectively evaluate how well each target is characterized by CMap. The results suggest that (1) some targets are better characterized than others, so prediction models specific to these well-characterized targets would be more accurate and reliable; (2) in some cases, a family of ligands for the same target tend to interact with common off-targets, which may help increase the efficiency of DTI discovery and explain the mechanisms of complicated drug actions. In the present study, CMap expression similarity is proposed as a novel indicator of drug-target interactions. Detailed strategies for improving data quality by decreasing the batch effect and for building prediction models are also established. We believe the success with CMap can be further translated to other public and commercial genomic expression data, thus increasing research productivity towards valid drug repositioning and minimal side effects.
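
    The core idea of ranking candidate ligands by expression similarity can be sketched as below. Pearson correlation over toy gene-expression vectors is an assumption here; it stands in for whatever signature-similarity measure the full CMap pipeline uses, and the compound names are invented.

```python
import math

def pearson(a, b):
    """Pearson correlation between two expression vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rank_by_signature(known_ligand_signature, candidates):
    """candidates: {compound: expression_vector}; most similar first."""
    return sorted(candidates,
                  key=lambda c: pearson(known_ligand_signature, candidates[c]),
                  reverse=True)
```

    Compounds whose induced expression changes correlate strongly with a known ligand's signature become candidate ligands of the same target.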

  11. Simulation and similarity using models to understand the world

    CERN Document Server

    Weisberg, Michael

    2013-01-01

    In the 1950s, John Reber convinced many Californians that the best way to solve the state's water shortage problem was to dam up the San Francisco Bay. Against massive political pressure, Reber's opponents persuaded lawmakers that doing so would lead to disaster. They did this not by empirical measurement alone, but also through the construction of a model. Simulation and Similarity explains why this was a good strategy while simultaneously providing an account of modeling and idealization in modern scientific practice. Michael Weisberg focuses on concrete, mathematical, and computational models in his consideration of the nature of models, the practice of modeling, and nature of the relationship between models and real-world phenomena. In addition to a careful analysis of physical, computational, and mathematical models, Simulation and Similarity offers a novel account of the model/world relationship. Breaking with the dominant tradition, which favors the analysis of this relation through logical notions suc...

  12. What do saliency models predict?

    Science.gov (United States)

    Koehler, Kathryn; Guo, Fei; Zhang, Sheng; Eckstein, Miguel P.

    2014-01-01

    Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). A single image set had never been used to systematically compare free viewing to other tasks. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might also be limited by inherent variation in what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. PMID:24618107

  13. Inter Genre Similarity Modelling For Automatic Music Genre Classification

    OpenAIRE

    Bagci, Ulas; Erzin, Engin

    2009-01-01

    Music genre classification is an essential tool for music information retrieval systems and it has been finding critical applications in various media platforms. Two important problems of the automatic music genre classification are feature extraction and classifier design. This paper investigates inter-genre similarity modelling (IGS) to improve the performance of automatic music genre classification. Inter-genre similarity information is extracted over the mis-classified feature population....

  14. Perception of similarity: a model for social network dynamics

    International Nuclear Information System (INIS)

    Javarone, Marco Alberto; Armano, Giuliano

    2013-01-01

    Some properties of social networks (e.g., the mixing patterns and the community structure) appear deeply influenced by the individual perception of people. In this work we map behaviors by considering similarity and popularity of people, also assuming that each person has his or her own perception and interpretation of similarity. Although investigated in different ways (depending on the specific scientific framework), from a computational perspective similarity is typically calculated as a distance measure. In accordance with this view, to represent social network dynamics we developed an agent-based model on top of a hyperbolic space on which individual distance measures are calculated. Simulations, performed in accordance with the proposed model, generate small-world networks that exhibit a community structure. We deem this model to be valuable for analyzing the relevant properties of real social networks. (paper)

  15. Hippocampal Mismatch Signals Are Modulated by the Strength of Neural Predictions and Their Similarity to Outcomes.

    Science.gov (United States)

    Long, Nicole M; Lee, Hongmi; Kuhl, Brice A

    2016-12-14

    The hippocampus is thought to compare predicted events with current perceptual input, generating a mismatch signal when predictions are violated. However, most prior studies have only inferred when predictions occur without measuring them directly. Moreover, an important but unresolved question is whether hippocampal mismatch signals are modulated by the degree to which predictions differ from outcomes. Here, we conducted a human fMRI study in which subjects repeatedly studied various word-picture pairs, learning to predict particular pictures (outcomes) from the words (cues). After initial learning, a subset of cues was paired with a novel, unexpected outcome, whereas other cues continued to predict the same outcome. Critically, when outcomes changed, the new outcome was either "near" to the predicted outcome (same visual category as the predicted picture) or "far" from the predicted outcome (different visual category). Using multivoxel pattern analysis, we indexed cue-evoked reactivation (prediction) within neocortical areas and related these trial-by-trial measures of prediction strength to univariate hippocampal responses to the outcomes. We found that prediction strength positively modulated hippocampal responses to unexpected outcomes, particularly when unexpected outcomes were close, but not identical, to the prediction. Hippocampal responses to unexpected outcomes were also associated with a tradeoff in performance during a subsequent memory test: relatively faster retrieval of new (updated) associations, but relatively slower retrieval of the original (older) associations. Together, these results indicate that hippocampal mismatch signals reflect a comparison between active predictions and current outcomes and that these signals are most robust when predictions are similar, but not identical, to outcomes. Although the hippocampus is widely thought to signal "mismatches" between memory-based predictions and outcomes, previous research has not linked

  16. Bianchi VI0 and III models: self-similar approach

    International Nuclear Information System (INIS)

    Belinchon, Jose Antonio

    2009-01-01

    We study several cosmological models with Bianchi VI0 and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive for the equation of state. We also study a perfect fluid model with time-varying constants, G and Λ. As in other studied models, we find that the behaviours of G and Λ are related. If G behaves as a growing time function then Λ is a positive decreasing time function, but if G is decreasing then Λ is negative. We end by studying a massive cosmic string model, with special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.

  17. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  18. Structural similarity and descriptor spaces for clustering and development of QSAR models.

    Science.gov (United States)

    Ruiz, Irene Luque; García, Gonzalo Cerruela; Gómez-Nieto, Miguel Angel

    2013-06-01

    In this paper we study and analyze the behavior of different representational spaces for clustering and building QSAR models. Representational spaces based on fingerprint similarity, and on structural similarity using maximum common subgraph (MCS) and all maximum common subgraphs (AMCS) approaches, are compared against representational spaces based on structural fragments and non-isomorphic fragments (NIF), built using different molecular descriptors. Algorithms for the extraction of MCS, AMCS and NIF are described, and a support vector machine is used for the classification of a dataset of 74 1,4-benzoquinone derivatives. Molecular descriptors are tested in order to build QSAR models for the prediction of the antifungal activity of the dataset. Descriptors based on graph connectivity and distances are the most appropriate for building QSAR models. Moreover, models based on approximate similarity improve the statistics of the equations by combining structural similarity, non-isomorphic fragments and descriptor approaches to create more robust and finer prediction equations.

  19. Morphological similarities between DBM and a microeconomic model of sprawl

    Science.gov (United States)

    Caruso, Geoffrey; Vuidel, Gilles; Cavailhès, Jean; Frankhauser, Pierre; Peeters, Dominique; Thomas, Isabelle

    2011-03-01

    We present a model that simulates the growth of a metropolitan area on a 2D lattice. The model is dynamic and based on microeconomics. Households show preferences for nearby open spaces and neighbourhood density. They compete on the land market. They travel along a road network to access the CBD. A planner ensures the connectedness and maintenance of the road network. The spatial pattern of houses, green spaces and road network self-organises, emerging from agents' individualistic decisions. We perform several simulations and vary residential preferences. Our results show morphologies and transition phases that are similar to Dielectric Breakdown Models (DBM). Such similarities were observed earlier by other authors, but we show here that they can be deduced from the functioning of the land market and thus explicitly connected to urban economic theory.

  20. Predicting drug-drug interactions through drug structural similarities and interaction networks incorporating pharmacokinetics and pharmacodynamics knowledge.

    Science.gov (United States)

    Takeda, Takako; Hao, Ming; Cheng, Tiejun; Bryant, Stephen H; Wang, Yanli

    2017-01-01

    Drug-drug interactions (DDIs) may lead to adverse effects and potentially result in drug withdrawal from the market. Predicting DDIs during drug development would help reduce development costs and time by enabling rigorous evaluation of drug candidates. The primary mechanisms of DDIs are based on pharmacokinetics (PK) and pharmacodynamics (PD). This study examines the effects of 2D structural similarities of drugs on DDI prediction through interaction networks incorporating both PD and PK knowledge. Our assumption was that a query drug (Dq) and a drug to be examined (De) likely have a DDI if the drugs in the interaction network of De are structurally similar to Dq. A network of De describes the associations between the drugs and the proteins relating to the PK and PD of De. These include target proteins, proteins interacting with target proteins, enzymes, and transporters for De. We constructed logistic regression models for DDI prediction using only 2D structural similarities between each Dq and the drugs in the network of De. The results indicated that our models could effectively predict DDIs. Integrating structural similarity scores of the drugs relating to both the PK and PD of De was crucial for model performance. In particular, the combination of the target- and enzyme-related scores provided the largest increase in predictive power.
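
    The structural-similarity features feeding such a model can be sketched as Tanimoto similarity over fingerprint bit sets, summarized by the maximum over De's network. This is a hedged reconstruction of the feature computation, not the authors' exact implementation; the bit sets below are toy data.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two sets of on-bit indices."""
    union = fp_a | fp_b
    return len(fp_a & fp_b) / len(union) if union else 0.0

def max_network_similarity(fp_query, network_fps):
    """Highest 2D similarity of the query drug Dq to any drug in the
    PK/PD interaction network of the examined drug De."""
    return max(tanimoto(fp_query, fp) for fp in network_fps)
```

    One such score per protein group (targets, enzymes, transporters, ...) would then serve as an input feature to the logistic regression.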

  1. Prospective evaluation of shape similarity based pose prediction method in D3R Grand Challenge 2015.

    Science.gov (United States)

    Kumar, Ashutosh; Zhang, Kam Y J

    2016-09-01

    Evaluation of ligand three-dimensional (3D) shape similarity is one of the commonly used approaches to identify ligands similar to one or more known active compounds from a library of small molecules. Apart from its use as a virtual screening tool, the role of ligand shape similarity in pose prediction and pose scoring has also been reported. We have recently developed a method that utilizes ligand 3D shape similarity with known crystallographic ligands to predict binding poses of query ligands. Here, we report the prospective evaluation of our pose prediction method through participation in the Drug Design Data Resource (D3R) Grand Challenge 2015. Our method was used to predict binding poses of heat shock protein 90 (HSP90) and mitogen-activated protein kinase kinase kinase kinase (MAP4K4) ligands, and it was able to predict the pose within 2 Å root mean square deviation (RMSD), either as the top pose or among the best of five poses, in a majority of cases. Specifically, for the HSP90 protein, a median RMSD of 0.73 and 0.68 Å was obtained for the top and the best-of-five predictions, respectively. For the MAP4K4 target, although the median RMSD for our top prediction was only 2.87 Å, the median RMSD of 1.67 Å for the best of five predictions was well within the limit for successful prediction. Furthermore, the performance of our pose prediction method for HSP90 and MAP4K4 ligands was always among the top five groups. In particular, for the MAP4K4 protein our method was ranked number one both in terms of mean and median RMSD when the best of five predictions were considered. Overall, our D3R Grand Challenge 2015 results demonstrated that ligand 3D shape similarity with the crystal ligand is sufficient to predict binding poses of new ligands with acceptable accuracy.
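
    The RMSD criterion used throughout this record has a compact definition. The sketch below assumes the predicted and crystallographic poses share a reference frame and atom ordering, and ignores symmetry-equivalent atom mappings that real pose-evaluation tools handle.

```python
import math

def rmsd(pose_a, pose_b):
    """Root mean square deviation between two equal-length lists of
    (x, y, z) atom coordinates, in the same units (here, Å)."""
    assert len(pose_a) == len(pose_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(pose_a, pose_b))
    return math.sqrt(sq / len(pose_a))
```

    A predicted pose counts as successful in this challenge when its RMSD to the crystal pose is within 2 Å.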

  2. An approach to predict water quality in data-sparse catchments using hydrological catchment similarity

    Science.gov (United States)

    Pohle, Ina; Glendell, Miriam; Stutter, Marc I.; Helliwell, Rachel C.

    2017-04-01

    An understanding of catchment response to climate and land use change at a regional scale is necessary for the assessment of mitigation and adaptation options addressing diffuse nutrient pollution. It is well documented that the physicochemical properties of a river ecosystem respond to change in a non-linear fashion. This is particularly important when threshold water concentrations, relevant to national and EU legislation, are exceeded. Large scale (regional) model assessments required for regulatory purposes must represent the key processes and mechanisms that are more readily understood in catchments with water quantity and water quality data monitored at high spatial and temporal resolution. While daily discharge data are available for most catchments in Scotland, nitrate and phosphorus are mostly available on a monthly basis only, as typified by regulatory monitoring. However, high resolution (hourly to daily) water quantity and water quality data exist for a limited number of research catchments. To successfully implement adaptation measures across Scotland, an upscaling from data-rich to data-sparse catchments is required. In addition, the widespread availability of spatial datasets affecting hydrological and biogeochemical responses (e.g. soils, topography/geomorphology, land use, vegetation etc.) provide an opportunity to transfer predictions between data-rich and data-sparse areas by linking processes and responses to catchment attributes. Here, we develop a framework of catchment typologies as a prerequisite for transferring information from data-rich to data-sparse catchments by focusing on how hydrological catchment similarity can be used as an indicator of grouped behaviours in water quality response. As indicators of hydrological catchment similarity we use flow indices derived from observed discharge data across Scotland as well as hydrological model parameters. For the latter, we calibrated the lumped rainfall-runoff model TUWModel using multiple
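
    One common family of flow indices for catchment similarity is flow exceedance percentiles (e.g. Q95, the flow exceeded 95% of the time) computed from an observed discharge series. The simple nearest-rank percentile below is an assumption for illustration, not necessarily the index definition used in the study.

```python
def exceedance_flow(flows, p):
    """Flow exceeded p% of the time, via nearest-rank on descending flows."""
    ordered = sorted(flows, reverse=True)
    rank = max(1, round(p / 100.0 * len(ordered)))
    return ordered[rank - 1]
```

    Catchments with similar vectors of such indices (Q95, Q50, Q5, ...) would be grouped into the same hydrological typology.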

  3. Conceptual similarities and common predictive approaches for SCC in high temperature water systems

    International Nuclear Information System (INIS)

    This paper addresses the many conceptual similarities that exist among structural materials subject to environmentally assisted cracking in high temperature water. While cracking has been viewed as a highly mechanistically and phenomenologically unique process for every material, temperature, environment, loading, etc., there is an increasingly strong basis for treating environmental crack advance processes of ductile alloys in hot water from a common mechanistic and predictive perspective. This paper addresses the roles of various materials (ranging from low alloy and carbon steels, to stainless steels, to high nickel alloys), water chemistries (e.g., including various BWR and PWR conditions), temperature (from 360 °C), irradiation, etc. on the crack advance process. Viewed from the perspective of the crack tip system, differences once perceived as large (e.g., in corrosion potential for BWRs vs. PWRs) are now recognized as relatively small (e.g., crack advance always occurs at low potential associated with deaerated water because of oxygen depletion in the crack). Additionally, since these materials rely on good passivity, and since creep increases with temperature, the importance to crack advance of film rupture and metal dissolution/repassivation is common to all of these cracking systems. While unique aspects must be acknowledged and modeled for specific materials (e.g., MnS dissolution in low alloy steels, thermal sensitization, irradiation effects) and specific water chemistries (e.g., effects of high sulfide levels, occluded chemistries, nickel metal stability at high H2 fugacity), the recognition of the broad similarities and the existence of a common underlying framework leads to a more complete understanding of and predictive approaches for environmentally assisted cracking in high temperature water

  4. Predicting protein-protein interface residues using local surface structural similarity

    Directory of Open Access Journals (Sweden)

    Jordan Rafael A

    2012-03-01

    Full Text Available Background: Identification of the residues in protein-protein interaction sites has a significant impact on problems such as drug discovery. Motivated by the observation that the set of interface residues of a protein tends to be conserved even among remote structural homologs, we introduce PrISE, a family of local structural similarity-based computational methods for predicting protein-protein interface residues. Results: We present a novel representation of the surface residues of a protein in the form of structural elements. Each structural element consists of a central residue and its surface neighbors. The PrISE family of interface prediction methods uses a representation of structural elements that captures the atomic composition and accessible surface area of the residues that make up each structural element. Each member of the PrISE family identifies, for each structural element in the query protein, a collection of similar structural elements in its repository and weights them according to their similarity with the structural element of the query protein. PrISEL relies on the similarity between structural elements (i.e. local structural similarity). PrISEG relies on the similarity between protein surfaces (i.e. general structural similarity). PrISEC combines local structural similarity and general structural similarity to predict interface residues. These predictors label the central residue of a structural element in a query protein as an interface residue if a weighted majority of the structural elements similar to it are interface residues, and as a non-interface residue otherwise. The results of our experiments using three representative benchmark datasets show that PrISEC outperforms PrISEL and PrISEG, and that PrISEC is highly competitive with state-of-the-art structure-based methods for predicting protein-protein interface residues.
Our comparison of PrISEC with PredUs, a recently
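
    The labelling rule the abstract describes, a similarity-weighted majority vote over retrieved structural elements, can be sketched directly; the weights and labels below are toy values, and the retrieval step itself is omitted.

```python
def weighted_majority(neighbours):
    """neighbours: list of (similarity_weight, is_interface) pairs for the
    structural elements retrieved as similar to the query element.
    Returns True if the weighted vote favours 'interface residue'."""
    w_interface = sum(w for w, label in neighbours if label)
    w_other = sum(w for w, label in neighbours if not label)
    return w_interface > w_other
```

    In PrISE the weights come from local and/or general structural similarity, depending on the family member (PrISEL, PrISEG, PrISEC).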

  5. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  6. APPLICABILITY OF SIMILARITY CONDITIONS TO ANALOGUE MODELLING OF TECTONIC STRUCTURES

    Directory of Open Access Journals (Sweden)

    Mikhail A. Goncharov

    2010-01-01

    Full Text Available The publication is aimed at comparing the concepts of V.V. Belousov and M.V. Gzovsky, outstanding researchers who established the fundamentals of tectonophysics in Russia, specifically similarity conditions in application to tectonophysical modelling. Quotations from their publications illustrate differences in their views. In this respect, we can reckon V.V. Belousov as a «realist» as he supported «the liberal point of view» [Methods of modelling…, 1988, p. 21–22], whereas M.V. Gzovsky can be regarded as an «idealist» as he believed that similarity conditions should be mandatorily applied to ensure correctness of physical modelling of tectonic deformations and structures [Gzovsky, 1975, pp. 88 and 94]. Objectives of the present publication are (1) to be another reminder about the desirability of compliance with similarity conditions in experimental tectonics; (2) to point out difficulties in ensuring such compliance; (3) to give examples which bring out the fact that similarity conditions are often met per se, i.e. automatically observed; (4) to show that modelling can be simplified in some cases without compromising quantitative estimations of parameters of structure formation. (1) Physical modelling of tectonic deformations and structures should be conducted, if possible, in compliance with conditions of geometric and physical similarity between experimental models and corresponding natural objects. In any case, a researcher should have a clear vision of the conditions applicable to each particular experiment. (2) Application of similarity conditions is often challenging due to unavoidable difficulties caused by the following: (a) imperfection of experimental equipment and technologies (Fig. 1 to 3); (b) uncertainties in estimating parameters of formation of natural structures, including the main ones: structure size (Fig. 4), time of formation (Fig. 5), and deformation properties of the medium wherein such structures are formed, including, first of all, viscosity (Fig. 6)

  7. DAMA and the self-similar infall halo model

    International Nuclear Information System (INIS)

    Natarajan, Aravind

    2011-01-01

    The annual modulation in the rate of weakly interacting massive particle (WIMP) recoils observed by the DAMA Collaboration at high significance is often analyzed in the context of an isothermal Maxwell-Boltzmann velocity distribution. While this is the simplest model, there is a need to consider other well motivated theories of halo formation. In this paper, we study a different halo model, that of self-similar infall, which is characterized by the presence of a number of cold streams and caustics not seen in simulations. It is shown that the self-similar infall model is consistent with the DAMA result both in amplitude and in phase, for WIMP masses exceeding ≅250 GeV at the 99.7% confidence level. Adding a small thermal component makes the parameter space near m_χ = 12 GeV consistent with the self-similar model. The minimum χ² per degree of freedom is found to be 0.92 (1.03) with (without) channeling taken into account, indicating an acceptable fit. For WIMP masses much greater than the mass of the target nucleus, the recoil rate depends only on the ratio σ_p/m_χ, which is found to be ≅0.06 femtobarn/TeV. However, as in the case of the isothermal halo, the allowed parameter space is inconsistent with the null result obtained by the CDMS and XENON experiments for spin-independent elastic scattering. Future experiments with directional sensitivity and mass bounds from accelerator experiments will help to distinguish between different halo models and/or constrain the contribution from cold flows.
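The modulation signal such analyses fit is conventionally parameterized as S(t) = S₀ + Sₘ cos(2π(t − t₀)/T), with period T of one year and phase t₀ near June 2 for a standard halo. A minimal sketch with purely illustrative numbers (not the paper's fitted values):

```python
import math

def recoil_rate(t_days, s0, sm, t0_days=152.5, period=365.25):
    """Annually modulated recoil rate: S(t) = S0 + Sm*cos(2*pi*(t - t0)/T).

    t0_days ~ 152.5 (June 2) is the phase expected for a standard halo;
    s0 and sm are the unmodulated rate and the modulation amplitude.
    """
    return s0 + sm * math.cos(2.0 * math.pi * (t_days - t0_days) / period)

# The rate peaks at t = t0 and reaches its minimum half a year later.
peak = recoil_rate(152.5, s0=1.0, sm=0.02)
trough = recoil_rate(152.5 + 365.25 / 2.0, s0=1.0, sm=0.02)
```

Halo models differ in the S₀, Sₘ, and t₀ they predict, which is what lets modulation data discriminate between them.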

  8. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Full Text Available Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screening of the population at risk. Identifying individuals at high risk should allow targeted screening and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for the identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) who underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identifying patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer-Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. A melanoma risk score (MRS) based on the outcome of the LR model is presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those who sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (for 1 to 10 dysplastic naevi the OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi the OR was 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. 
Red hair, phototype I and large congenital naevi were
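To illustrate how reported odds ratios translate into a logistic risk score of the kind an MRS represents: each coefficient is the natural log of the corresponding OR, and the sum of applicable coefficients plus an intercept is passed through the logistic function. The intercept below is a made-up placeholder, not the study's fitted value, and the factor names are shorthand.

```python
import math

# Sketch only: odds ratios taken from the abstract, but the intercept
# is assumed for illustration -- the paper's actual MRS is not reproduced.
ODDS_RATIOS = {
    "sunbeds_sometimes": 4.018,
    "severe_solar_damage": 8.274,
    "light_brown_blond_hair": 3.222,
    "over_100_common_naevi": 3.57,
}
INTERCEPT = -4.0  # hypothetical baseline log-odds

def melanoma_log_odds(risk_factors):
    """Sum ln(OR) for each risk factor present, plus the intercept."""
    return INTERCEPT + sum(math.log(ODDS_RATIOS[f]) for f in risk_factors)

def melanoma_probability(risk_factors):
    """Logistic transform of the log-odds into a probability."""
    return 1.0 / (1.0 + math.exp(-melanoma_log_odds(risk_factors)))

# More risk factors present -> higher predicted probability.
low = melanoma_probability([])
high = melanoma_probability(["sunbeds_sometimes", "severe_solar_damage"])
```

This is the generic mechanics of scoring with a fitted LR model; the study's own score would use its full coefficient set.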

  9. Learning Similar Actions by Reinforcement or Sensory-Prediction Errors Rely on Distinct Physiological Mechanisms.

    Science.gov (United States)

    Uehara, Shintaro; Mawase, Firas; Celnik, Pablo

    2017-09-14

    Humans can acquire knowledge of new motor behavior via different forms of learning. The two forms most commonly studied have been the development of internal models based on sensory-prediction errors (error-based learning) and success-based feedback (reinforcement learning). Human behavioral studies suggest these are distinct learning processes, though the neurophysiological mechanisms that are involved have not been characterized. Here, we evaluated physiological markers from the cerebellum and the primary motor cortex (M1) using noninvasive brain stimulation while healthy participants trained on finger-reaching tasks. We manipulated the extent to which subjects relied on error-based or reinforcement mechanisms by providing either vector or binary feedback about task performance. Our results demonstrated a double dissociation: learning the task mainly via error-based mechanisms led to cerebellar plasticity modifications but not long-term potentiation (LTP)-like plasticity changes in M1, while learning a similar action via reinforcement mechanisms elicited M1 LTP-like plasticity but not cerebellar plasticity changes. Our findings indicate that learning complex motor behavior is mediated by the interplay of different forms of learning, weighing distinct neural mechanisms in M1 and the cerebellum. Our study provides insights for designing effective interventions to enhance human motor learning.

  10. Functional enrichment analyses and construction of functional similarity networks with high confidence function prediction by PFP

    Directory of Open Access Journals (Sweden)

    Kihara Daisuke

    2010-05-01

    Full Text Available Background A new paradigm of biological investigation takes advantage of technologies that produce large high throughput datasets, including genome sequences, interactions of proteins, and gene expression. The ability of biologists to analyze and interpret such data relies on functional annotation of the included proteins, but even in highly characterized organisms many proteins can lack the functional evidence necessary to infer their biological relevance. Results Here we have applied high confidence function predictions from our automated prediction system, PFP, to three genome sequences, Escherichia coli, Saccharomyces cerevisiae, and Plasmodium falciparum (malaria). The number of annotated genes is increased by PFP to over 90% for all of the genomes. Using the large coverage of the function annotation, we introduced functional similarity networks which represent the functional space of the proteomes. Four different functional similarity networks are constructed for each proteome: one for each single Gene Ontology (GO) category (i.e. Biological Process, Cellular Component, and Molecular Function), and another one considering overall similarity with the funSim score. The functional similarity networks are shown to have higher modularity than the protein-protein interaction network. Moreover, the funSim score network is distinct from the single GO-score networks in showing a higher clustering degree exponent value, and thus has a higher tendency to be hierarchical. In addition, examining function assignments to the protein-protein interaction network and local regions of genomes has identified numerous cases where subnetworks or local regions contain functionally coherent proteins. These results will help in interpreting interactions of proteins and gene orders in a genome. Several examples of both analyses are highlighted. Conclusion The analyses demonstrate that applying high confidence predictions from PFP

  11. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered, and the state of the art in computationally tractable methods based on uncertainty tubes is presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  12. Towards Modelling Variation in Music as Foundation for Similarity

    NARCIS (Netherlands)

    Volk, A.; de Haas, W.B.; van Kranenburg, P.; Cambouropoulos, E.; Tsougras, C.; Mavromatis, P.; Pastiadis, K.

    2012-01-01

    This paper investigates the concept of variation in music from the perspective of music similarity. Music similarity is a central concept in Music Information Retrieval (MIR); however, no comprehensive approach to music similarity exists yet. As a consequence, MIR faces the challenge of how to

  13. Large-scale structural and textual similarity-based mining of knowledge graph to predict drug-drug interactions

    KAUST Repository

    Abdelaziz, Ibrahim

    2017-06-12

    Drug-Drug Interactions (DDIs) are a major cause of preventable Adverse Drug Reactions (ADRs), causing a significant burden on the patients’ health and the healthcare system. It is widely known that clinical studies cannot sufficiently and accurately identify DDIs for new drugs before they are made available on the market. In addition, existing public and proprietary sources of DDI information are known to be incomplete and/or inaccurate and so not reliable. As a result, there is an emerging body of research on in-silico prediction of drug-drug interactions. In this paper, we present Tiresias, a large-scale similarity-based framework that predicts DDIs through link prediction. Tiresias takes in various sources of drug-related data and knowledge as inputs, and provides DDI predictions as outputs. The process starts with semantic integration of the input data that results in a knowledge graph describing drug attributes and relationships with various related entities such as enzymes, chemical structures, and pathways. The knowledge graph is then used to compute several similarity measures between all the drugs in a scalable and distributed framework. In particular, Tiresias utilizes two classes of features in a knowledge graph: local and global features. Local features are derived from the information directly associated to each drug (i.e., one hop away) while global features are learnt by minimizing a global loss function that considers the complete structure of the knowledge graph. The resulting similarity metrics are used to build features for a large-scale logistic regression model to predict potential DDIs. We highlight the novelty of our proposed Tiresias and perform thorough evaluation of the quality of the predictions. The results show the effectiveness of Tiresias in both predicting new interactions among existing drugs as well as newly developed drugs.
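A local feature of the kind Tiresias derives from one-hop neighborhoods can be illustrated with Jaccard similarity over neighbor sets; the toy graph and entity names below are invented for illustration, not drawn from the paper's data.

```python
def jaccard(neighbors_a, neighbors_b):
    """Jaccard similarity between two drugs' one-hop neighbor sets."""
    a, b = set(neighbors_a), set(neighbors_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Toy knowledge graph: each drug maps to entities one hop away
# (enzymes, pathways, targets); names are hypothetical.
graph = {
    "drugA": {"CYP3A4", "pathway1", "targetX"},
    "drugB": {"CYP3A4", "pathway1", "targetY"},
    "drugC": {"CYP2D6", "pathway9"},
}

sim_ab = jaccard(graph["drugA"], graph["drugB"])  # 2 shared of 4 total
sim_ac = jaccard(graph["drugA"], graph["drugC"])  # nothing shared
```

Features like this, computed per drug pair, would then feed the logistic regression model alongside the global, whole-graph features the paper describes.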

  14. Spatio-Temporal Evaluation and Comparison of MM5 Model using Similarity Algorithm

    Directory of Open Access Journals (Sweden)

    N. Siabi

    2016-02-01

    Full Text Available Introduction: The temporal and spatial change of meteorological and environmental variables is very important. These changes can be predicted by numerical prediction models over time and at different locations, and can be provided as spatial zoning maps with interpolation methods such as geostatistics (16, 6). But such maps can be compared with each other only visually, qualitatively, univariately, and for a limited number of maps (15). To resolve this problem the similarity algorithm is used. This algorithm is a method for the simultaneous comparison of a large number of data (18). Numerical prediction models such as MM5 have been used in different studies (10, 22, 23). But little research has been done to quantitatively compare the spatio-temporal similarity of the models with real data. The purpose of this paper is to integrate geostatistical techniques with the similarity algorithm to study the spatial and temporal MM5 model predictions against real data. Materials and Methods: The study area is the northeast of Iran, spanning 55 to 61 degrees of longitude and 30 to 38 degrees of latitude. Monthly and annual temperature and precipitation data for the period 1990-2010 were received from the Meteorological Agency and the Department of Energy. MM5 model data, with a spatial resolution of 0.5 × 0.5 degrees, were downloaded from the NASA website (5). GS+ and ArcGis software were used to produce each variable map. We used the multivariate methods of co-kriging and kriging with an external drift, applying topography and height as a secondary variable via a Digital Elevation Model (6, 12, 14). Then the standardization and similarity algorithms (9, 11) were applied, by programming in MATLAB software, to each map grid point. The spatial and temporal similarities between the data collections and the model results were obtained as F values; values below 0.2 indicate good similarity and values above 0.5 show very poor similarity. The results were plotted on maps by MATLAB

  15. Spatially pooled contrast responses predict neural and perceptual similarity of naturalistic image categories.

    Directory of Open Access Journals (Sweden)

    Iris I A Groen

    Full Text Available The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well- known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis. Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task.

  16. Quantifying predictive accuracy in survival models.

    Science.gov (United States)

    Lirette, Seth T; Aban, Inmaculada

    2017-12-01

    For time-to-event outcomes in medical research, survival models are the most appropriate to use. Unlike logistic regression models, quantifying the predictive accuracy of these models is not a trivial task. We present the classes of concordance (C) statistics and R² statistics often used to assess the predictive ability of these models. The discussion focuses on Harrell's C, Kent and O'Quigley's R², and Royston and Sauerbrei's R². We present similarities and differences between the statistics, discuss the software options from the most widely used statistical analysis packages, and give a practical example using the Worcester Heart Attack Study dataset.
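Harrell's C can be computed directly from its definition: the fraction of comparable pairs (pairs where the earlier time is an observed event) whose predicted risks are correctly ordered. A minimal sketch for right-censored data, with the convention that higher risk should mean earlier failure:

```python
def harrells_c(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable when subject i has an observed event
    strictly before subject j's time; it is concordant when the subject
    failing earlier has the higher predicted risk.  Risk ties count 0.5.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Risks perfectly ordered with failure time give C = 1.0.
times = [2, 4, 6, 8]
events = [1, 1, 1, 0]         # last subject censored
risks = [0.9, 0.7, 0.4, 0.1]  # monotone with earlier failure
c = harrells_c(times, events, risks)
```

C = 0.5 corresponds to random ordering, 1.0 to perfect discrimination; production code (e.g. in survival analysis packages) also handles time ties, which this sketch omits.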

  17. Environmental niche models for riverine desert fishes and their similarity according to phylogeny and functionality

    Science.gov (United States)

    Whitney, James E.; Whittier, Joanna B.; Paukert, Craig

    2017-01-01

    Environmental filtering and competitive exclusion are hypotheses frequently invoked in explaining species' environmental niches (i.e., geographic distributions). A key assumption in both hypotheses is that the functional niche (i.e., species traits) governs the environmental niche, but few studies have rigorously evaluated this assumption. Furthermore, phylogeny could be associated with these hypotheses if it is predictive of functional niche similarity via phylogenetic signal or convergent evolution, or of environmental niche similarity through phylogenetic attraction or repulsion. The objectives of this study were to investigate relationships between environmental niches, functional niches, and phylogenies of fishes of the Upper (UCRB) and Lower (LCRB) Colorado River Basins of southwestern North America. We predicted that functionally similar species would have similar environmental niches (i.e., environmental filtering) and that closely related species would be functionally similar (i.e., phylogenetic signal) and possess similar environmental niches (i.e., phylogenetic attraction). Environmental niches were quantified using environmental niche modeling, and functional similarity was determined using functional trait data. Nonnatives in the UCRB provided the only support for environmental filtering, which resulted from several warmwater nonnatives having dam number as a common predictor of their distributions, whereas several cool- and coldwater nonnatives shared mean annual air temperature as an important distributional predictor. Phylogenetic signal was supported for both natives and nonnatives in both basins. Lastly, phylogenetic attraction was only supported for native fishes in the LCRB and for nonnative fishes in the UCRB. Our results indicated that functional similarity was heavily influenced by evolutionary history, but that phylogenetic relationships and functional traits may not always predict the environmental distribution of species. However, the

  18. Visual reconciliation of alternative similarity spaces in climate modeling

    Science.gov (United States)

    J Poco; A Dasgupta; Y Wei; William Hargrove; C.R. Schwalm; D.N. Huntzinger; R Cook; E Bertini; C.T. Silva

    2015-01-01

    Visual data analysis often requires grouping of data objects based on their similarity. In many application domains researchers use algorithms and techniques like clustering and multidimensional scaling to extract groupings from data. While extracting these groups using a single similarity criteria is relatively straightforward, comparing alternative criteria poses...

  19. Self-similar solution for coupled thermal electromagnetic model ...

    African Journals Online (AJOL)

    An investigation into the existence and uniqueness of self-similar solutions for the coupled Maxwell and Pennes bio-heat equations has been carried out. Criteria for the existence and uniqueness of self-similar solutions are given in the consequent theorems. Journal of the Nigerian Association of Mathematical Physics ...

  20. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's 'bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  1. A Model-Based Approach to Constructing Music Similarity Functions

    Science.gov (United States)

    West, Kris; Lamere, Paul

    2006-12-01

    Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.
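The distance-metric baseline that this record builds on can be sketched in a few lines; the feature vectors below are invented stand-ins for averaged spectral or timbre features, not values from any real track.

```python
import math

def euclidean_distance(features_a, features_b):
    """Euclidean distance between two per-track feature vectors;
    a smaller distance is read as greater audio similarity."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(features_a, features_b)))

# Hypothetical 4-dimensional feature vectors for three tracks.
track1 = [0.2, 0.8, 0.1, 0.5]
track2 = [0.25, 0.75, 0.15, 0.45]  # close to track1
track3 = [0.9, 0.1, 0.8, 0.2]      # far from track1

d_near = euclidean_distance(track1, track2)
d_far = euclidean_distance(track1, track3)
```

The authors' contribution is precisely to go beyond this raw metric by learning a projection informed by cultural labels such as genre and style before distances are taken.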

  2. Cross-language acoustic similarity predicts perceptual assimilation of Canadian English and Canadian French vowels.

    Science.gov (United States)

    Escudero, Paola; Vasiliev, Polina

    2011-11-01

    Monolingual Peruvian Spanish listeners identified natural tokens of the Canadian French (CF) and Canadian English (CE) /ɛ/ and /æ/, produced in five consonantal contexts. The results demonstrate that while the CF vowels were mapped to two different native vowels, /e/ and /a/, in all consonantal contexts, the CE contrast was mapped to the single native vowel /a/ in four out of five contexts. Linear discriminant analysis revealed that acoustic similarity between native and target language vowels was a very good predictor of context-specific perceptual mappings. Predictions are made for Spanish learners of the /ɛ/-/æ/ contrast in CF and CE.

  3. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...... the performance of HIRLAM, in particular with respect to wind predictions. To estimate the performance of the model, two spatial resolutions (0.5 Deg. and 0.2 Deg.) and different sets of HIRLAM variables were used to predict wind speed and energy production. The predictions of energy production for the wind farms...... are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production...

  4. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Linear MPC: (1) uses a linear model: ẋ = Ax + Bu; (2) quadratic cost function: F = xᵀQx + uᵀRu; (3) linear constraints: Hx + Gu < 0; (4) solved as a quadratic program. Nonlinear MPC: (1) nonlinear model: ẋ = f(x, u); (2) cost function can be non-quadratic: F = F(x, u); (3) nonlinear constraints: h(x, u) < 0; (4) solved as a nonlinear program.
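The receding-horizon idea behind both variants can be made concrete with a toy controller. The sketch below substitutes brute-force search over a discretized input set for the quadratic program a real linear MPC would solve; the scalar system, horizon, cost weights, and input grid are all invented for illustration.

```python
import itertools

def mpc_step(x, a, b, horizon, u_grid, q=1.0, r=0.1):
    """One receding-horizon step for the scalar linear model
    x' = a*x + b*u with quadratic stage cost q*x^2 + r*u^2.

    Exhaustively searches input sequences drawn from u_grid and
    returns only the first input of the best sequence -- a crude
    stand-in for solving the quadratic program of linear MPC.
    """
    best_u0, best_cost = None, float("inf")
    for seq in itertools.product(u_grid, repeat=horizon):
        xi, cost = x, 0.0
        for u in seq:
            cost += q * xi * xi + r * u * u
            xi = a * xi + b * u
        cost += q * xi * xi  # terminal cost
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Regulate an unstable scalar system (a = 1.2) toward the origin.
x = 5.0
u_grid = [-2.0, -1.0, 0.0, 1.0, 2.0]
for _ in range(10):
    u = mpc_step(x, a=1.2, b=1.0, horizon=3, u_grid=u_grid)
    x = 1.2 * x + 1.0 * u
```

Replanning at every step from the newly measured state is what makes this "model predictive" rather than open-loop optimal control.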

  5. Quality assessment of protein model-structures based on structural and functional similarities.

    Science.gov (United States)

    Konopka, Bogumil M; Nebel, Jean-Christophe; Kotulska, Malgorzata

    2012-09-21

    Experimental determination of protein 3D structures is expensive, time consuming and sometimes impossible. The gap between the number of protein structures deposited in the Worldwide Protein Data Bank and the number of sequenced proteins constantly broadens. Computational modeling is deemed to be one of the ways to deal with the problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model a structure from a sequence or from partial structural information, e.g. contact maps. Consequently, biologists have the ability to automatically generate a putative 3D structure model of any protein. However, the main issue becomes the evaluation of model quality, which is one of the most important challenges of structural biology. GOBA (Gene Ontology-Based Assessment) is a novel Protein Model Quality Assessment Program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high quality model is expected to be structurally similar to proteins functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using the standardized and hierarchical description of proteins provided by Gene Ontology, combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single-model quality assessment method; the other is its modification, which provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. The validation shows that the method is able to discriminate between good and bad model-structures. The best of the tested GOBA scores achieved 0.74 and 0.8 mean Pearson correlation to the observed quality of models in our CASP8- and CASP9-based validation sets. 
GOBA also obtained the best result for two targets of CASP8, and

  6. A Model-Based Approach to Constructing Music Similarity Functions

    Directory of Open Access Journals (Sweden)

    Lamere Paul

    2007-01-01

    Full Text Available Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.

  7. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research by academics and practitioners has been carried out on models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend of transition to machine learning models (support vector machines, bagging, boosting, and random forests) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural network applications, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy on the testing sample improves when additional variables are included. On the other side, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability in specific conditions. Furthermore, these models will be modified according to new trends by calculating the influence of the elimination of selected variables on their overall prediction ability.
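As an example of the older, well-known class of models referred to, Altman's (1968) Z-score for public manufacturing firms combines five financial ratios with fixed discriminant coefficients; the sample ratio values below are invented for illustration.

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Classic Altman (1968) Z-score for public manufacturing firms.

    Inputs are five financial ratios:
      wc_ta    working capital / total assets
      re_ta    retained earnings / total assets
      ebit_ta  EBIT / total assets
      mve_tl   market value of equity / total liabilities
      sales_ta sales / total assets
    """
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

def zone(z):
    """Commonly quoted cut-offs: distress below 1.81, safe above 2.99."""
    if z < 1.81:
        return "distress"
    if z > 2.99:
        return "safe"
    return "grey"

healthy = altman_z(0.25, 0.35, 0.15, 1.8, 1.5)   # hypothetical firm
ailing = altman_z(-0.05, -0.10, -0.02, 0.3, 0.8)  # hypothetical firm
```

Validating such fixed-coefficient models on Slovak data, as the record proposes, amounts to checking how well these cut-offs separate failed from surviving firms in a different economy and era.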

  8. LDA-Based Unified Topic Modeling for Similar TV User Grouping and TV Program Recommendation.

    Science.gov (United States)

    Pyo, Shinjee; Kim, Eunhui; Kim, Munchurl

    2015-08-01

    Social TV is a social media service via TV and social networks through which TV users exchange their experiences about TV programs that they are viewing. For social TV service, two technical aspects are envisioned: grouping of similar TV users to create social TV communities and recommending TV programs based on group and personal interests for personalizing TV. In this paper, we propose a unified topic model based on grouping of similar TV users and recommending TV programs as a social TV service. The proposed unified topic model employs two latent Dirichlet allocation (LDA) models. One is a topic model of TV users, and the other is a topic model of the description words for viewed TV programs. The two LDA models are then integrated via a topic proportion parameter for TV programs, which enforces the grouping of similar TV users and associated description words for watched TV programs at the same time in a unified topic modeling framework. The unified model identifies the semantic relation between TV user groups and TV program description word groups so that more meaningful TV program recommendations can be made. The unified topic model also overcomes an item ramp-up problem such that new TV programs can be reliably recommended to TV users. Furthermore, from the topic model of TV users, TV users with similar tastes can be grouped as topics, which can then be recommended as social TV communities. To verify our proposed method of unified topic-modeling-based TV user grouping and TV program recommendation for social TV services, in our experiments, we used real TV viewing history data and electronic program guide data from a seven-month period collected by a TV poll agency. The experimental results show that the proposed unified topic model yields an average 81.4% precision for 50 topics in TV program recommendation and its performance is an average of 6.5% higher than that of the topic model of TV users only. 
For TV user prediction with new TV programs, the average
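The user-grouping side of the problem can be illustrated far more simply than with the unified LDA model, by cosine similarity over viewing-count vectors; the users, programs, and counts below are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two viewing-count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Rows: users; columns: hypothetical programs (news, sports, drama, cooking).
views = {
    "user1": [9, 0, 1, 0],
    "user2": [8, 1, 0, 0],   # taste similar to user1
    "user3": [0, 0, 7, 8],   # drama/cooking viewer
}

def most_similar(user):
    """Grouping candidate: the other user with the highest similarity."""
    others = [u for u in views if u != user]
    return max(others, key=lambda u: cosine(views[user], views[u]))
```

Raw count similarity like this suffers from exactly the item ramp-up problem the record mentions (a brand-new program has all-zero columns), which is what motivates modeling users and program description words through shared latent topics instead.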

  9. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  10. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
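A degree-day model of the kind described can be sketched with the common simple-average method; the base temperature, daily temperatures, and emergence threshold below are invented for illustration, not the cranberry fruitworm's published values.

```python
def daily_degree_days(t_min, t_max, base):
    """Simple-average degree-day accumulation for one day:
    max(0, (Tmin + Tmax)/2 - base)."""
    return max(0.0, (t_min + t_max) / 2.0 - base)

def days_to_emergence(daily_temps, base, threshold):
    """Return the first day index (1-based) on which accumulated
    degree-days reach the developmental threshold, else None."""
    total = 0.0
    for day, (t_min, t_max) in enumerate(daily_temps, start=1):
        total += daily_degree_days(t_min, t_max, base)
        if total >= threshold:
            return day
    return None

# Hypothetical spring warm-up with a 10 degree C base and a made-up
# 30 degree-day threshold for a life-stage transition.
temps = [(8, 18), (10, 22), (12, 24), (14, 26), (15, 27)]
day = days_to_emergence(temps, base=10.0, threshold=30.0)
```

In practice the thresholds come from rearing studies, and refinements such as the single-sine method replace the simple average when daily temperatures swing around the base.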

  11. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  12. Similarities between obesity in pets and children : the addiction model

    NARCIS (Netherlands)

    Pretlow, Robert A; Corbee, Ronald J

    2016-01-01

    Obesity in pets is a frustrating, major health problem. Obesity in human children is similar. Prevailing theories accounting for the rising obesity rates - for example, poor nutrition and sedentary activity - are being challenged. Obesity interventions in both pets and children have produced modest

  13. Differences and similarities in breast cancer risk assessment models in clinical practice : which model to choose?

    NARCIS (Netherlands)

    Jacobi, Catharina E.; de Bock, Geertruida H.; Siegerink, Bob; van Asperen, Christi J.

    To show differences and similarities between risk estimation models for breast cancer in healthy women from BRCA1/2-negative or untested families. After a systematic literature search seven models were selected: Gail-2, Claus Model, Claus Tables, BOADICEA, Jonker Model, Claus-Extended Formula, and

  14. Prediction models with neural nets

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2008-01-01

    Full Text Available The contribution is oriented to the solution of basic trend-prediction problems for economic indicators using neural networks. The problems include the choice of a suitable model and, consequently, the configuration of the neural network, the choice of the neurons' computational functions, and the manner of prediction learning. The contribution contains two basic models that use multilayer neural network structures and a way of determining their configuration. A simple rule for the training period of the neural network is postulated in order to obtain the most credible prediction. Experiments are carried out with real data on the evolution of the Kč/Euro exchange rate. The main reason for choosing this time series is its availability over a sufficiently long period. In the experiments, both of the given basic kinds of prediction models with the most frequently used neuron functions are verified. The achieved prediction results are presented in both numerical and graphical form.

  15. Do self-report and medical record comorbidity data predict longitudinal functional capacity and quality of life health outcomes similarly?

    Science.gov (United States)

    Olomu, Adesuwa B; Corser, William D; Stommel, Manfred; Xie, Yan; Holmes-Rovner, Margaret

    2012-11-14

    The search for a reliable, valid and cost-effective comorbidity risk adjustment method for outcomes research continues to be a challenge. The most widely used tool, the Charlson Comorbidity Index (CCI), is limited due to frequent missing data in medical records and administrative data. Patient self-report data have the potential to be more complete but have not been widely used. The purpose of this study was to evaluate the performance of the Self-Administered Comorbidity Questionnaire (SCQ) in predicting functional capacity and quality of life (QOL) health outcomes compared to CCI medical records data. An SCQ score was generated from patient interview, and the CCI score was generated by medical record review for 525 patients hospitalized for Acute Coronary Syndrome (ACS) at baseline, three months and eight months post-discharge. Linear regression models assessed the extent to which there were differences in the ability of the comorbidity measures to predict functional capacity (Activity Status Index [ASI] scores) and quality of life (EuroQOL 5D [EQ5D] scores). The CCI (R2 = 0.245; p = 0.132) did not predict quality of life scores while the SCQ self-report method (R2 = 0.265; p < 0.0005) predicted the EQ5D scores. However, the CCI was almost as good as the SCQ for predicting the ASI scores at three months and performed slightly better in predicting ASI at the eight-month follow-up (R2 = 0.370; p < 0.0005 vs. R2 = 0.358; p < 0.0005). Only age, gender, family income and Center for Epidemiologic Studies-Depression (CESD) scores showed significant association with both measures in predicting QOL and functional capacity. Although our model R-squares were fairly low, these results show that the self-report SCQ index is a good alternative method to predict QOL health outcomes when compared to a CCI medical record score. Both measures predicted physical functioning similarly. This suggests that patient self-reported comorbidity data can be used for predicting physical

  16. Do self-report and medical record comorbidity data predict longitudinal functional capacity and quality of life health outcomes similarly?

    Directory of Open Access Journals (Sweden)

    Olomu Adesuwa B

    2012-11-01

    Full Text Available Abstract Background The search for a reliable, valid and cost-effective comorbidity risk adjustment method for outcomes research continues to be a challenge. The most widely used tool, the Charlson Comorbidity Index (CCI), is limited due to frequent missing data in medical records and administrative data. Patient self-report data have the potential to be more complete but have not been widely used. The purpose of this study was to evaluate the performance of the Self-Administered Comorbidity Questionnaire (SCQ) to predict functional capacity and quality of life (QOL) health outcomes compared to CCI medical records data. Method An SCQ score was generated from patient interview, and the CCI score was generated by medical record review for 525 patients hospitalized for Acute Coronary Syndrome (ACS) at baseline, three months and eight months post-discharge. Linear regression models assessed the extent to which there were differences in the ability of comorbidity measures to predict functional capacity (Activity Status Index [ASI] scores) and quality of life (EuroQOL 5D [EQ5D] scores). Results The CCI (R2 = 0.245; p = 0.132) did not predict quality of life scores while the SCQ self-report method (R2 = 0.265; p < 0.0005) predicted the EQ5D scores. The CCI was almost as good as the SCQ for predicting the ASI scores (R2 = 0.370; p < 0.0005 vs. R2 = 0.358; p < 0.0005). Conclusions Although our model R-squares were fairly low, these results show that the self-report SCQ index is a good alternative method to predict QOL health outcomes when compared to a CCI medical record score. Both measures predicted physical functioning similarly. This suggests that patient self-reported comorbidity data can be used for predicting physical functional capacity and QOL and can serve as a reliable risk adjustment measure. Self-report comorbidity data may provide a cost-effective alternative method for risk adjustment in clinical research, health policy and organizational improvement analyses. Trial registration ClinicalTrials.gov NCT00416026

  17. Self-similar Gaussian processes for modeling anomalous diffusion

    Science.gov (United States)

    Lim, S. C.; Muniandy, S. V.

    2002-08-01

    We study some Gaussian models for anomalous diffusion, which include the time-rescaled Brownian motion, two types of fractional Brownian motion, and models associated with fractional Brownian motion based on the generalized Langevin equation. Gaussian processes associated with these models satisfy the anomalous diffusion relation, which requires the mean-square displacement to vary as t^α with 0 < α < 2. Since the two types of fractional Brownian motion and time-rescaled Brownian motion all have the same probability distribution function, the Slepian theorem can be used to compare their first passage time distributions, which are different. Finally, in order to model anomalous diffusion with a variable exponent α(t), it is necessary to consider the multifractional extensions of these Gaussian processes.

  18. Similarity conditions for investigations of hydraulic-thermal tidal models

    International Nuclear Information System (INIS)

    Fluegge, G.; Schwarze, H.

    1975-01-01

    With the construction of nuclear power plants near German tidal estuaries in mind, investigations of mixing and spreading processes which occur during the discharge of heated cooling water in tidal waters were carried out in hydraulic-thermal tidal models of the Lower Weser and Lower Elbe by the Franzius Institute for hydraulic and coastal engineering of the Technical University Hannover. This contribution discusses in detail the problems met and the experience gained in constructing and operating these models. (orig./TK) [de

  19. Spatial enhancement of ECG using diagnostic similarity score based lead selective multi-scale linear model.

    Science.gov (United States)

    Nallikuzhy, Jiss J; Dandapat, S

    2017-06-01

    In this work, a new patient-specific approach to enhance the spatial resolution of the ECG is proposed and evaluated. The proposed model transforms a three-lead ECG into a standard twelve-lead ECG, thereby enhancing its spatial resolution. The three leads used for prediction are obtained from the standard twelve-lead ECG. The proposed model takes advantage of the improved inter-lead correlation in the wavelet domain. Since the model is patient-specific, it also selects the optimal predictor leads for a given patient using a lead selection algorithm. The lead selection algorithm is based on a new diagnostic similarity score which computes the diagnostic closeness between the original and the spatially enhanced leads. Standard closeness measures are used to assess the performance of the model. The similarity in diagnostic information between the original and the spatially enhanced leads is evaluated using various diagnostic measures. Repeatability and diagnosability analyses are performed to quantify the applicability of the model. A comparison of the proposed model is performed with existing models that transform a subset of the standard twelve-lead ECG into the standard twelve-lead ECG. From the analysis of the results, it is evident that the proposed model preserves diagnostic information better compared to other models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Similarity analysis and prediction for data of structural acoustic and vibration

    International Nuclear Information System (INIS)

    Mei Liquan; Ding Xuemei; Zhang Shujuan

    2010-01-01

    Support vector machine (SVM) is a learning machine based on statistical learning theory, which can produce models with good generalization. It can learn effectively from small sample sizes and can avoid the 'curse of dimensionality' when solving nonlinear problems. This paper works on parameter optimization for the support vector regression machine (SVRM) and its applications. The solution path algorithm can save considerable CPU time when employed to optimize the regularization parameter of the SVRM. The simulated annealing algorithm has a good ability to find globally optimal solutions. An improved solution path algorithm and simulated annealing are combined to optimize the parameters of the SVRM in the regression analysis of acoustic and vibration data for complex practical problems. The numerical results show the model has good predictive capability. (authors)
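
    The simulated-annealing half of such a parameter search can be sketched in a few lines. The one-dimensional "validation error" surface standing in for the SVRM regularization-parameter objective, the cooling schedule, and all constants are invented for illustration:

```python
# Toy simulated annealing over one parameter c, standing in for the SVRM
# regularization parameter. The error surface and constants are made up.
import math
import random

def validation_error(c):
    # Invented multi-modal error surface with its global minimum near c = 2.2
    return (c - 2.0) ** 2 + 0.3 * math.sin(5 * c)

def simulated_annealing(f, lo, hi, steps=5000, t0=1.0, seed=1):
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best_x, best_f = x, f(x)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-6           # linear cooling schedule
        cand = min(hi, max(lo, x + rng.gauss(0, 0.5)))
        delta = f(cand) - f(x)
        # Always accept improvements; accept worse moves with prob e^(-d/t)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
            if f(x) < best_f:
                best_x, best_f = x, f(x)
    return best_x, best_f

best_c, best_err = simulated_annealing(validation_error, 0.0, 10.0)
print(f"best c = {best_c:.2f}, error = {best_err:.3f}")
```

    In the paper's setting the candidate parameter would be scored by training the SVRM and measuring validation error, rather than by a closed-form function.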

  1. Fast structure similarity searches among protein models: efficient clustering of protein fragments

    Directory of Open Access Journals (Sweden)

    Fogolari Federico

    2012-05-01

    Full Text Available Abstract Background For many predictive applications a large number of models is generated and later clustered in subsets based on structure similarity. In most clustering algorithms an all-vs-all root mean square deviation (RMSD) comparison is performed. Most of the time is typically spent on comparison of non-similar structures. For sets with more than, say, 10,000 models this procedure is very time-consuming, and alternative faster algorithms, restricting comparisons only to the most similar structures, would be useful. Results We exploit the inverse triangle inequality on the RMSD between two structures given their RMSDs with a third structure. The lower bound on RMSD may be used, when restricting the search for similarity to a reasonably low RMSD threshold value, to speed up similarity searches significantly. Tests are performed on large sets of decoys which are widely used as test cases for predictive methods, with a speed-up of up to 100 times with respect to all-vs-all comparison, depending on the set and parameters used. Sample applications are shown. Conclusions The algorithm presented here allows fast comparison of large data sets of structures with limited memory requirements. As an example of application we present the clustering of more than 100,000 fragments of length 5 from the top500H dataset into a few hundred representative fragments. A more realistic scenario is provided by the search for similarity within the very large decoy sets used for the tests. Other applications include filtering nearly-identical conformations in selected CASP9 datasets and clustering molecular dynamics snapshots. Availability A linux executable and a Perl script with examples are given in the supplementary material (Additional file 1). The source code is available upon request from the authors.
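
    The pruning idea above, using RMSDs to a common third structure to lower-bound pairwise RMSDs, can be sketched as follows. The toy "structures" are random point sets rather than real decoys, and plain coordinate RMSD (no superposition) is used so the metric property is guaranteed:

```python
# Sketch of reference-structure pruning for similarity searches. Because
# coordinate RMSD is a metric, |rmsd(a, ref) - rmsd(b, ref)| is a valid
# lower bound on rmsd(a, b), so many pairs can be skipped without a full
# comparison. Structures here are random toy data, not real protein models.
import math
import random

def rmsd(a, b):
    """Coordinate RMSD without superposition; a, b are equal-length lists
    of (x, y, z) tuples. This is a scaled Euclidean distance, hence a true
    metric obeying the triangle inequality."""
    sq = sum((x - y) ** 2 for p, q in zip(a, b) for x, y in zip(p, q))
    return math.sqrt(sq / len(a))

def similar_pairs(structures, threshold):
    ref = structures[0]                       # the "third" reference structure
    d_ref = [rmsd(s, ref) for s in structures]
    pairs, full_comparisons = [], 0
    for i in range(len(structures)):
        for j in range(i + 1, len(structures)):
            if abs(d_ref[i] - d_ref[j]) > threshold:
                continue                      # lower bound already too large
            full_comparisons += 1
            if rmsd(structures[i], structures[j]) <= threshold:
                pairs.append((i, j))
    return pairs, full_comparisons

random.seed(0)
structs = [[(random.uniform(0, 10), random.uniform(0, 10),
             random.uniform(0, 10)) for _ in range(20)] for _ in range(30)]
pairs, done = similar_pairs(structs, threshold=1.0)
print(len(pairs), "similar pairs;", done, "full RMSD computations of 435")
```

    The paper's algorithm is more elaborate (multiple references, memory-aware bookkeeping), but the bound used to skip comparisons is this one.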

  2. Similarities between obesity in pets and children: the addiction model.

    Science.gov (United States)

    Pretlow, Robert A; Corbee, Ronald J

    2016-09-01

    Obesity in pets is a frustrating, major health problem. Obesity in human children is similar. Prevailing theories accounting for the rising obesity rates - for example, poor nutrition and sedentary activity - are being challenged. Obesity interventions in both pets and children have produced modest short-term but poor long-term results. New strategies are needed. A novel theory posits that obesity in pets and children is due to 'treats' and excessive meal amounts given by the 'pet-parent' and child-parent to obtain affection from the pet/child, which enables 'eating addiction' in the pet/child and results in parental 'co-dependence'. Pet-parents and child-parents may even become hostage to the treats/food to avoid the ire of the pet/child. Eating addiction in the pet/child also may be brought about by emotional factors such as stress, independent of parental co-dependence. An applicable treatment for child obesity has been trialled using classic addiction withdrawal/abstinence techniques, as well as behavioural addiction methods, with significant results. Both the child and the parent progress through withdrawal from specific 'problem foods', next from snacking (non-specific foods) and finally from excessive portions at meals (gradual reductions). This approach should adapt well for pets and pet-parents. Pet obesity is more 'pure' than child obesity, in that contributing factors and treatment points are essentially under the control of the pet-parent. Pet obesity might thus serve as an ideal test bed for the treatment and prevention of child obesity, with focus primarily on parental behaviours. Sharing information between the fields of pet and child obesity would be mutually beneficial.

  3. Visual Analysis of Nonlinear Dynamical Systems: Chaos, Fractals, Self-Similarity and the Limits of Prediction

    Directory of Open Access Journals (Sweden)

    Geoff Boeing

    2016-11-01

    Full Text Available Nearly all nontrivial real-world systems are nonlinear dynamical systems. Chaos describes certain nonlinear dynamical systems that have a very sensitive dependence on initial conditions. Chaotic systems are always deterministic and may be very simple, yet they produce completely unpredictable and divergent behavior. Systems of nonlinear equations are difficult to solve analytically, and scientists have relied heavily on visual and qualitative approaches to discover and analyze the dynamics of nonlinearity. Indeed, few fields have drawn as heavily from visualization methods for their seminal innovations: from strange attractors, to bifurcation diagrams, to cobweb plots, to phase diagrams and embedding. Although the social sciences are increasingly studying these types of systems, seminal concepts remain murky or loosely adopted. This article has three aims. First, it argues for several visualization methods to critically analyze and understand the behavior of nonlinear dynamical systems. Second, it uses these visualizations to introduce the foundations of nonlinear dynamics, chaos, fractals, self-similarity and the limits of prediction. Finally, it presents Pynamical, an open-source Python package to easily visualize and explore nonlinear dynamical systems’ behavior.
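
    Sensitive dependence on initial conditions, the defining property of chaos discussed above, is easy to demonstrate with the logistic map, one of the systems packages like Pynamical are built to visualize. The initial value and perturbation size here are arbitrary choices for the demonstration:

```python
# Minimal illustration of sensitive dependence on initial conditions using
# the logistic map x_{n+1} = r * x_n * (1 - x_n), which is chaotic at r = 4.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)      # perturb by one part in ten billion
gap = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap {gap[0]:.1e}, largest gap over 50 steps {max(gap):.3f}")
```

    A fully deterministic rule thus amplifies a 1e-10 perturbation to order one within a few dozen iterations, which is exactly the "limit of prediction" the article's visualizations explore.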

  4. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  5. Predicting behavior change from persuasive messages using neural representational similarity and social network analyses.

    Science.gov (United States)

    Pegors, Teresa K; Tompson, Steven; O'Donnell, Matthew Brook; Falk, Emily B

    2017-08-15

    Neural activity in medial prefrontal cortex (MPFC), identified as engaging in self-related processing, predicts later health behavior change. However, it is unknown to what extent individual differences in neural representation of content and lived experience influence this brain-behavior relationship. We examined whether the strength of content-specific representations during persuasive messaging relates to later behavior change, and whether these relationships change as a function of individuals' social network composition. In our study, smokers viewed anti-smoking messages while undergoing fMRI and we measured changes in their smoking behavior one month later. Using representational similarity analyses, we found that the degree to which message content (i.e. health, social, or valence information) was represented in a self-related processing MPFC region was associated with later smoking behavior, with increased representations of negatively valenced (risk) information corresponding to greater message-consistent behavior change. Furthermore, the relationship between representations and behavior change depended on social network composition: smokers who had proportionally fewer smokers in their network showed increases in smoking behavior when social or health content was strongly represented in MPFC, whereas message-consistent behavior (i.e., less smoking) was more likely for those with proportionally more smokers in their social network who represented social or health consequences more strongly. These results highlight the dynamic relationship between representations in MPFC and key outcomes such as health behavior change; a complete understanding of the role of MPFC in motivation and action should take into account individual differences in neural representation of stimulus attributes and social context variables such as social network composition. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Predicting Anatomical Therapeutic Chemical (ATC) Classification of Drugs by Integrating Chemical-Chemical Interactions and Similarities

    Science.gov (United States)

    Chen, Lei; Zeng, Wei-Ming; Cai, Yu-Dong; Feng, Kai-Yan; Chou, Kuo-Chen

    2012-01-01

    The Anatomical Therapeutic Chemical (ATC) classification system, recommended by the World Health Organization, categorizes drugs into different classes according to their therapeutic and chemical characteristics. For a set of query compounds, how can we identify which ATC-class (or classes) they belong to? It is an important and challenging problem because the information thus obtained would be quite useful for drug development and utilization. By hybridizing information on chemical-chemical interactions and chemical-chemical similarities, a novel method was developed for this purpose. It was observed by the jackknife test on a benchmark dataset of 3,883 drug compounds that the overall success rate achieved by the prediction method was about 73% in identifying the drugs among the following 14 main ATC-classes: (1) alimentary tract and metabolism; (2) blood and blood forming organs; (3) cardiovascular system; (4) dermatologicals; (5) genitourinary system and sex hormones; (6) systemic hormonal preparations, excluding sex hormones and insulins; (7) anti-infectives for systemic use; (8) antineoplastic and immunomodulating agents; (9) musculoskeletal system; (10) nervous system; (11) antiparasitic products, insecticides and repellents; (12) respiratory system; (13) sensory organs; (14) various. Such a success rate is substantially higher than the 7% expected by random guessing. It has not escaped our notice that the current method can be straightforwardly extended to identify the drugs for their 2nd-level, 3rd-level, 4th-level, and 5th-level ATC-classifications once statistically significant benchmark data are available for these lower levels. PMID:22514724

  7. Short-Term Power Forecasting Model for Photovoltaic Plants Based on Historical Similarity

    Directory of Open Access Journals (Sweden)

    M. Sonia Terreros-Olarte

    2013-05-01

    Full Text Available This paper proposes a new model for short-term forecasting of electric energy production in a photovoltaic (PV) plant. The model is called the HIstorical SImilar MIning (HISIMI) model; its final structure is optimized by using a genetic algorithm, based on data mining techniques applied to historical cases composed of past forecast values of weather variables, obtained from numerical weather prediction tools, and of past production of electric power in a PV plant. The HISIMI model is able to supply spot values of power forecasts, and also the uncertainty, or probabilities, associated with those spot values, providing new useful information to users with respect to traditional forecasting models for PV plants. Such probabilities enable analysis and evaluation of the risk associated with those spot forecasts, for example, in offers of energy sale for electricity markets. The spot forecasting results of an illustrative example obtained with the HISIMI model for a real-life grid-connected PV plant, which shows high intra-hour variability of its actual power output, with forecasting horizons covering the following day, improve on those obtained with two other spot power forecasting models, a persistence model and an artificial neural network model.

  8. Promoting similarity of model sparsity structures in integrative analysis of cancer genetic data.

    Science.gov (United States)

    Huang, Yuan; Liu, Jin; Yi, Huangdi; Shia, Ben-Chang; Ma, Shuangge

    2017-02-10

    In profiling studies, the analysis of a single dataset often leads to unsatisfactory results because of the small sample size. Multi-dataset analysis utilizes information from multiple independent datasets and outperforms single-dataset analysis. Among the available multi-dataset analysis methods, integrative analysis methods aggregate and analyze raw data and outperform meta-analysis methods, which analyze multiple datasets separately and then pool summary statistics. In this study, we conduct integrative analysis and marker selection under the heterogeneity structure, which allows different datasets to have overlapping but not necessarily identical sets of markers. Under certain scenarios, it is reasonable to expect some similarity of identified marker sets - or equivalently, similarity of model sparsity structures - across multiple datasets. However, the existing methods do not have a mechanism to explicitly promote such similarity. To tackle this problem, we develop a sparse boosting method. This method uses a BIC/HDBIC criterion to select weak learners in boosting and encourages sparsity. A new penalty is introduced to promote the similarity of model sparsity structures across datasets. The proposed method has an intuitive formulation and is broadly applicable and computationally affordable. In numerical studies, we analyze right-censored survival data under the accelerated failure time model. Simulation shows that the proposed method outperforms alternative boosting and penalization methods with more accurate marker identification. The analysis of three breast cancer prognosis datasets shows that the proposed method can identify marker sets with increased similarity across datasets and improved prediction performance. Copyright © 2016 John Wiley & Sons, Ltd.

  9. Generalization of landslide susceptibility models in geologic-geomorphologic similar context

    Science.gov (United States)

    Piedade, Aldina; Zêzere, José Luis; António Tenedório, José; Garcia, Ricardo A. C.; Oliveira, Sérgio C.; Rocha, Jorge

    2010-05-01

    The region north of Lisbon, known for several forms of slope instability, is the study area of this work. Two sample areas with similar geological and geomorphological conditions were chosen to assess susceptibility to the occurrence of shallow translational slides. Landslide susceptibility was assessed using a bivariate statistical method (the Information Value Method), and the developed methodology focuses on exporting susceptibility scores obtained in one sample area (the modelling area of Fanhões-Trancão) to another area (the validation area of Lousa-Loures) having similar geological and geomorphological features. The rationale is that similar environments should have identical landslide susceptibility, i.e., the same causes are likely to generate the same effects. Thus, Information Value scores obtained in the modelling area of Fanhões-Trancão (20 km2) are used to evaluate susceptibility in the validation area of Lousa-Loures (17 km2). The susceptibility scores were obtained for the modelling area by crossing the landslide layer (the dependent variable) with a set of 7 classified predisposing factors for slope instability (assumed as independent variables): slope, aspect, transverse slope profile, lithology, geomorphology, superficial deposits and land use. The same set of landslide predisposing factors was prepared for the validation area, using the same criteria to define classes within each theme. Field work and aerial-photo interpretation were performed in the validation area, and a landslide database was constructed and subsequently used to validate the landslide susceptibility model. In addition, new Information Value scores were calculated for the validation area by crossing existing shallow translational slides with the predisposing factors of slope instability. Validation of the predictive models is carried out by comparing success-rate and prediction-rate curves. Furthermore, sensitivity analysis of the variables is performed in
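
    The Information Value method used above reduces to a log-ratio of the conditional landslide probability within a factor class to the prior landslide probability over the whole area. A minimal sketch with invented counts (the class names and numbers are illustrative, not from the study):

```python
# Hedged sketch of the bivariate Information Value method. For each class of
# one predisposing factor (slope, as an example) we need the number of
# terrain units in the class and how many of them contain landslides.
# All counts below are made-up toy numbers.
import math

def information_value(class_counts, total_units, total_landslides):
    """class_counts: {class_name: (units_in_class, landslide_units_in_class)}
    Returns ln(conditional probability / prior probability) per class."""
    prior = total_landslides / total_units        # unconditional probability
    scores = {}
    for name, (units, slides) in class_counts.items():
        if units == 0 or slides == 0:
            scores[name] = float("-inf")          # class never hosts slides
            continue
        scores[name] = math.log((slides / units) / prior)
    return scores

slope_classes = {"0-5 deg": (4000, 20), "5-15 deg": (3000, 60),
                 "15-25 deg": (2000, 90), ">25 deg": (1000, 30)}
iv = information_value(slope_classes, total_units=10000, total_landslides=200)
for name, score in sorted(iv.items(), key=lambda kv: kv[1]):
    print(f"{name}: {score:+.2f}")
```

    Positive scores flag classes where landslides are over-represented. Summing the scores of a terrain unit's classes over all factors gives its susceptibility; exporting the model to a similar area means applying these per-class scores to that area's factor maps.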

  10. How glitter relates to gold : Similarity-dependent reward prediction errors in the human striatum

    NARCIS (Netherlands)

    Kahnt, T.; Park, S.Q.; Burke, C.; Tobler, P.N.

    2012-01-01

    Optimal choices benefit from previous learning. However, it is not clear how previously learned stimuli influence behavior to novel but similar stimuli. One possibility is to generalize based on the similarity between learned and current stimuli. Here, we use neuroscientific methods and a novel

  11. Bayesian variable order Markov models: Towards Bayesian predictive state representations

    NARCIS (Netherlands)

    Dimitrakakis, C.

    2009-01-01

    We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more

  12. Link Prediction via Sparse Gaussian Graphical Model

    Directory of Open Access Journals (Sweden)

    Liangliang Zhang

    2016-01-01

    Full Text Available Link prediction is an important task in complex network analysis. Traditional link prediction methods are limited by network topology and a lack of node property information, which makes predicting links challenging. In this study, we address link prediction using a sparse Gaussian graphical model and demonstrate its theoretical and practical effectiveness. In theory, link prediction is executed by estimating the inverse covariance matrix of samples to overcome information limits. The proposed method was evaluated with four small and four large real-world datasets. The experimental results show that the area under the curve (AUC) value obtained by the proposed method improved by an average of 3% and 12.5% on the small and large datasets, respectively, compared to 13 mainstream similarity methods. This method outperforms the baseline method, and its prediction accuracy is superior to mainstream methods when using only 80% of the training set. The method also provides significantly higher AUC values when using only 60% of the training set in the Dolphin and Taro datasets. Furthermore, the error rate of the proposed method demonstrates superior performance on all datasets compared to mainstream methods.
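
    The core idea, reading direct links off the zeros of the inverse covariance (precision) matrix, can be shown on a toy chain x0 → x1 → x2. The covariance values are constructed for the example, and a dense inverse stands in for the paper's sparse estimator:

```python
# Toy Gaussian-graphical-model link prediction: a zero entry in the
# precision (inverse covariance) matrix means "no direct link" (conditional
# independence given the other variables). The chain x1 = x0 + noise,
# x2 = x1 + noise, with unit-variance terms, yields the covariance below.

def invert(m):
    """Gauss-Jordan inverse of a small square matrix (lists of lists)."""
    n = len(m)
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(n):
            if r != col:
                f = a[r][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [row[n:] for row in a]

cov = [[1.0, 1.0, 1.0],     # population covariance of the chain x0->x1->x2
       [1.0, 2.0, 2.0],
       [1.0, 2.0, 3.0]]
precision = invert(cov)
links = [(i, j) for i in range(3) for j in range(i + 1, 3)
         if abs(precision[i][j]) > 1e-9]
print(links)   # [(0, 1), (1, 2)] -- no direct 0-2 link
```

    In practice the covariance is estimated from finite samples, so the paper uses a sparse (penalized) estimate of the precision matrix rather than a plain inverse, and scores candidate links by the magnitude of the recovered entries.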

  13. Sequence composition similarities with the 7SL RNA are highly predictive of functional genomic features

    OpenAIRE

    Paquet, Yanick; Anderson, Alan

    2010-01-01

    Transposable elements derived from the 7SL RNA gene, such as Alu elements in primates, have had remarkable success in several mammalian lineages. The results presented here show a broad spectrum of functions for genomic segments that display sequence composition similarities with the 7SL RNA gene. Using thoroughly documented loci, we report that DNaseI-hypersensitive sites can be singled out in large genomic sequences by an assessment of sequence composition similarities with the 7SL RNA gene...

  14. The Next Page Access Prediction Using Markov Model

    OpenAIRE

    Deepti Razdan

    2011-01-01

    Predicting the next page to be accessed by the Webusers has attracted a large amount of research. In this paper, anew web usage mining approach is proposed to predict next pageaccess. It is proposed to identify similar access patterns from weblog using K-mean clustering and then Markov model is used forprediction for next page accesses. The tightness of clusters isimproved by setting similarity threshold while forming clusters.In traditional recommendation models, clustering by nonsequentiald...

  15. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Full Text Available Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  16. The predictable narwhal: satellite tracking shows behavioural similarities between isolated subpopulations

    DEFF Research Database (Denmark)

    Heide-Jørgensen, M. P.; Nielsen, N.H.; Hansen, R. G.

    2015-01-01

    are examples of separated populations that, in different ocean parts, will be coping with similar anthropogenic and climate-driven habitat alterations. To study this, 28 narwhals from the Scoresby Sound fjord system were tracked by satellite in 2010-2013. The average duration of contact with the whales was 124 days, with one tag lasting 305 days and one whale recaptured...

  17. A study on the quantitative model of human response time using the amount and the similarity of information

    International Nuclear Information System (INIS)

    Lee, Sung Jin

    2006-02-01

    The mental capacity to retain or recall information, i.e. memory, is related to human performance during information processing. Although a large number of studies have been carried out on human performance, little is known about the similarity effect. The purpose of this study was to propose and validate a quantitative, predictive model of human response time in the user interface, based on the concepts of information amount, similarity, and degree of practice. Human performance could not be explained by similarity or information amount alone. There were two difficulties: constructing a quantitative model of human response time and validating the proposed model experimentally. A quantitative model based on Hick's law, the law of practice, and similarity theory was developed. The model was validated under various experimental conditions by measuring participants' response times in a computer-based display environment. Human performance in the user interface improved with similarity and degree of practice. We also found that human performance degraded with age. The proposed model may be useful for training operators who will handle such interfaces and for predicting human performance under changes in system design.
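The abstract combines Hick's law with the law of practice. A minimal sketch of such a combined response-time model follows; the coefficients, the multiplicative form, and the similarity penalty are illustrative assumptions, not the paper's fitted model:

```python
import math

def hick_rt(n_alternatives, a=0.2, b=0.15):
    """Hick's law: response time grows with the log of the number of choices."""
    return a + b * math.log2(n_alternatives + 1)

def practice_factor(n_trials, r=0.3):
    """Power law of practice: performance improves as trials accumulate."""
    return n_trials ** (-r)

def predicted_rt(n_alternatives, n_trials, similarity=0.5):
    # Hypothetical combination: more alternatives and higher inter-item
    # similarity slow responses; practice speeds them up.
    return hick_rt(n_alternatives) * practice_factor(n_trials) * (1 + similarity)
```

With such a form, response time falls with practice, rises with the number of alternatives, and rises with the similarity between interface items.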

  18. Opinion Dynamics of Social-Similarity-Based Hegselmann–Krause Model

    Directory of Open Access Journals (Sweden)

    Xi Chen

    2017-01-01

    Full Text Available Existing opinion dynamics models mainly concentrate on the impact of opinions on other opinions and ignore the effect of social similarity between individuals. In real life, the social similarity between an individual and their neighbors also affects their opinions. Therefore, an opinion evolution model considering social similarity (the social-similarity-based HK model, or SSHK model for short) is introduced in this paper. Social similarity is calculated from individual properties and is used to measure the social relationship between individuals. By considering the joint effect of confidence bounds and social similarity, the model changes the role of neighbor selection significantly in the evolution of opinions. Numerical results demonstrate that the new model can not only reproduce the salient features of opinion formation, namely fragmentation, polarization, and consensus, but also achieve consensus more easily under an appropriate similarity threshold. In addition, improved models with heterogeneous and homogeneous confidence bounds and similarity thresholds are also discussed. We found that the improved heterogeneous SSHK model could reach opinion consensus more easily than the homogeneous SSHK model and the classical models when the confidence bound was related to the similarity threshold. This finding provides a new way of thinking and a theoretical basis for the guidance of public opinion in real life.
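A toy synchronous update in the spirit of the SSHK model: a standard Hegselmann-Krause step gated by an additional similarity condition. The attribute-based similarity function and parameter values below are assumptions for illustration, not the paper's specification:

```python
def sshk_step(opinions, attributes, eps=0.2, sim_threshold=0.5):
    """One synchronous SSHK-style update: agent i averages only neighbors
    whose opinion lies within the confidence bound eps AND whose attribute
    similarity exceeds sim_threshold."""
    def similarity(i, j):
        # Toy similarity on a single attribute in [0, 1].
        return 1.0 - abs(attributes[i] - attributes[j])

    new = []
    for i, xi in enumerate(opinions):
        peers = [xj for j, xj in enumerate(opinions)
                 if abs(xi - xj) <= eps and similarity(i, j) >= sim_threshold]
        new.append(sum(peers) / len(peers))  # peers always includes xi itself
    return new
```

Iterating this step with a wide confidence bound and a permissive similarity threshold drives consensus, while tightening either parameter yields fragmentation or polarization, matching the regimes the abstract describes.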

  19. Similarity-Based Prediction of Travel Times for Vehicles Traveling on Known Routes

    DEFF Research Database (Denmark)

    Tiesyte, Dalia; Jensen, Christian Søndergaard

    2008-01-01

    The use of centralized, real-time position tracking is proliferating in the areas of logistics and public transportation. Real-time positions can be used to provide up-to-date information to a variety of users, and they can also be accumulated for uses in subsequent data analyses. In particular, historical data in combination with real-time data may be used to predict the future travel times of vehicles more accurately, thus improving the experience of the users who rely on such information. We propose a Nearest-Neighbor Trajectory (NNT) technique that identifies the historical trajectory...
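The core idea, selecting the nearest historical trajectory to forecast the remaining travel times, can be sketched as follows. The squared-distance measure and fixed-length trajectories are simplifying assumptions; the paper's NNT technique is more elaborate:

```python
def predict_remaining(partial, histories):
    """Nearest-neighbor travel-time sketch: pick the historical trajectory
    whose first len(partial) segment times are closest to the vehicle's
    observed times, and reuse its remaining segment times as the forecast."""
    k = len(partial)

    def dist(h):
        # Squared Euclidean distance over the observed prefix.
        return sum((a - b) ** 2 for a, b in zip(partial, h[:k]))

    best = min(histories, key=dist)
    return best[k:]
```

For example, a vehicle that has covered its first two segments in times close to a particular historical run inherits that run's remaining segment times as its prediction.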

  20. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  1. MAC/FAC: A Model of Similarity-Based Retrieval. Technical Report #59.

    Science.gov (United States)

    Forbus, Kenneth D.; And Others

    A model of similarity-based retrieval is presented that attempts to capture these seemingly contradictory psychological phenomena: (1) structural commonalities are weighed more heavily than surface commonalities in soundness or similarity judgments (when both members are present); (2) superficial similarity is more important in retrieval from…

  2. Stereotype content model across cultures: Towards universal similarities and some differences

    Science.gov (United States)

    Cuddy, Amy J. C.; Fiske, Susan T.; Kwan, Virginia S. Y.; Glick, Peter; Demoulin, Stéphanie; Leyens, Jacques-Philippe; Bond, Michael Harris; Croizet, Jean-Claude; Ellemers, Naomi; Sleebos, Ed; Htun, Tin Tin; Kim, Hyun-Jeong; Maio, Greg; Perry, Judi; Petkova, Kristina; Todorov, Valery; Rodríguez-Bailón, Rosa; Morales, Elena; Moya, Miguel; Palacios, Marisol; Smith, Vanessa; Perez, Rolando; Vala, Jorge; Ziegler, Rene

    2014-01-01

    The stereotype content model (SCM) proposes potentially universal principles of societal stereotypes and their relation to social structure. Here, the SCM reveals theoretically grounded, cross-cultural, cross-groups similarities and one difference across 10 non-US nations. Seven European (individualist) and three East Asian (collectivist) nations (N = 1, 028) support three hypothesized cross-cultural similarities: (a) perceived warmth and competence reliably differentiate societal group stereotypes; (b) many out-groups receive ambivalent stereotypes (high on one dimension; low on the other); and (c) high status groups stereotypically are competent, whereas competitive groups stereotypically lack warmth. Data uncover one consequential cross-cultural difference: (d) the more collectivist cultures do not locate reference groups (in-groups and societal prototype groups) in the most positive cluster (high-competence/high-warmth), unlike individualist cultures. This demonstrates out-group derogation without obvious reference-group favouritism. The SCM can serve as a pancultural tool for predicting group stereotypes from structural relations with other groups in society, and comparing across societies. PMID:19178758

  3. Stereotype content model across cultures: towards universal similarities and some differences.

    Science.gov (United States)

    Cuddy, Amy J C; Fiske, Susan T; Kwan, Virginia S Y; Glick, Peter; Demoulin, Stéphanie; Leyens, Jacques-Philippe; Bond, Michael Harris; Croizet, Jean-Claude; Ellemers, Naomi; Sleebos, Ed; Htun, Tin Tin; Kim, Hyun-Jeong; Maio, Greg; Perry, Judi; Petkova, Kristina; Todorov, Valery; Rodríguez-Bailón, Rosa; Morales, Elena; Moya, Miguel; Palacios, Marisol; Smith, Vanessa; Perez, Rolando; Vala, Jorge; Ziegler, Rene

    2009-03-01

    The stereotype content model (SCM) proposes potentially universal principles of societal stereotypes and their relation to social structure. Here, the SCM reveals theoretically grounded, cross-cultural, cross-groups similarities and one difference across 10 non-US nations. Seven European (individualist) and three East Asian (collectivist) nations (N=1,028) support three hypothesized cross-cultural similarities: (a) perceived warmth and competence reliably differentiate societal group stereotypes; (b) many out-groups receive ambivalent stereotypes (high on one dimension; low on the other); and (c) high status groups stereotypically are competent, whereas competitive groups stereotypically lack warmth. Data uncover one consequential cross-cultural difference: (d) the more collectivist cultures do not locate reference groups (in-groups and societal prototype groups) in the most positive cluster (high-competence/high-warmth), unlike individualist cultures. This demonstrates out-group derogation without obvious reference-group favouritism. The SCM can serve as a pancultural tool for predicting group stereotypes from structural relations with other groups in society, and comparing across societies.

  4. Scanpath Based N-Gram Models for Predicting Reading Behavior

    DEFF Research Database (Denmark)

    Mishra, Abhijit; Bhattacharyya, Pushpak; Carl, Michael

    2013-01-01

    Predicting reading behavior is a difficult task. Reading behavior depends on various linguistic factors (e.g. sentence length, structural complexity, etc.) and other factors (e.g. an individual's reading style, age, etc.). Ideally, a reading model should be similar to a language model where the model i...

  5. Predictive Modeling by the Cerebellum Improves Proprioception

    Science.gov (United States)

    Bhanpuri, Nasir H.; Okamura, Allison M.

    2013-01-01

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance. PMID:24005283

  6. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  7. Embedding Term Similarity and Inverse Document Frequency into a Logical Model of Information Retrieval.

    Science.gov (United States)

    Losada, David E.; Barreiro, Alvaro

    2003-01-01

    Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…

  8. Prediction of Associations between OMIM Diseases and MicroRNAs by Random Walk on OMIM Disease Similarity Network

    Directory of Open Access Journals (Sweden)

    Hailin Chen

    2013-01-01

    Full Text Available Increasing evidence has revealed that microRNAs (miRNAs) play important roles in the development and progression of human diseases. However, efforts made to uncover OMIM disease-miRNA associations are lacking and the majority of diseases in the OMIM database are not associated with any miRNA. Therefore, there is a strong incentive to develop computational methods to detect potential OMIM disease-miRNA associations. In this paper, random walk on the OMIM disease similarity network is applied to predict potential OMIM disease-miRNA associations, under the assumption that functionally related miRNAs are often associated with phenotypically similar diseases. Our method makes full use of global disease similarity values. We tested our method on 1226 known OMIM disease-miRNA associations in the framework of leave-one-out cross-validation and achieved an area under the ROC curve of 71.42%. This performance enables us to predict a number of new potential OMIM disease-miRNA associations, and the newly predicted associations have been publicly released to facilitate future studies. Some predicted associations with high ranks were manually checked and confirmed against publicly available databases, which is strong evidence for the practical relevance of our method.
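Random walk with restart on a similarity network is a standard formulation for propagating a seed disease's signal globally. A minimal sketch, where the restart probability, iteration count, and column normalization are generic choices rather than necessarily the paper's settings:

```python
def random_walk_with_restart(W, seed, restart=0.3, iters=100):
    """Iterate p <- (1 - r) * M @ p + r * p0 on a similarity matrix W,
    where M is W column-normalized and p0 concentrates all mass on the
    seed node. The stationary p ranks nodes by proximity to the seed."""
    n = len(W)
    cols = [sum(W[i][j] for i in range(n)) or 1.0 for j in range(n)]
    M = [[W[i][j] / cols[j] for j in range(n)] for i in range(n)]
    p0 = [1.0 if i == seed else 0.0 for i in range(n)]
    p = p0[:]
    for _ in range(iters):
        p = [(1 - restart) * sum(M[i][j] * p[j] for j in range(n))
             + restart * p0[i] for i in range(n)]
    return p
```

The resulting vector is a global similarity profile of the seed disease; candidate miRNAs can then be scored against the diseases ranked highest.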

  9. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Full Text Available Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We came up with a model that was able to accurately predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and an area under the curve of 82.6%. Like other similar studies that have used the RF model, we also had very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see a wider application.
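A minimal Random Forest sketch with scikit-learn on synthetic stand-in data; the features, labels, and settings here are invented for illustration, whereas the study trained on 198 chart-review records:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the chart-review data: age and a prior-antibiotic
# flag, with recurrence driven by a simple synthetic rule.
X = [[age, abx] for age in range(20, 90, 2) for abx in (0, 1)]
y = [1 if (age > 65 and abx == 1) else 0 for age, abx in X]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

An ensemble of decorrelated trees like this handles many candidate risk factors at once and reports feature importances, which is why it suits the kind of multi-factor recurrence analysis the abstract describes.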

  10. Predicting drug-target interaction for new drugs using enhanced similarity measures and super-target clustering.

    Science.gov (United States)

    Shi, Jian-Yu; Yiu, Siu-Ming; Li, Yiming; Leung, Henry C M; Chin, Francis Y L

    2015-07-15

    Predicting drug-target interaction using computational approaches is an important step in drug discovery and repositioning. To predict whether there will be an interaction between a drug and a target, most existing methods identify similar drugs and targets in the database. The prediction is then made based on the known interactions of these drugs and targets. This idea is promising. However, there are two shortcomings that have not yet been addressed appropriately. Firstly, most of the methods only use 2D chemical structures and protein sequences to measure the similarity of drugs and targets respectively. However, this information may not fully capture the characteristics determining whether a drug will interact with a target. Secondly, there are very few known interactions, i.e. many interactions are "missing" in the database. Existing approaches are biased towards known interactions and have no good solutions to handle possibly missing interactions which affect the accuracy of the prediction. In this paper, we enhance the similarity measures to include non-structural (and non-sequence-based) information and introduce the concept of a "super-target" to handle the problem of possibly missing interactions. Based on evaluations on real data, we show that our similarity measure is better than the existing measures and our approach is able to achieve higher accuracy than the two best existing algorithms, WNN-GIP and KBMF2K. Our approach is available at http://web.hku.hk/∼liym1018/projects/drug/drug.html or http://www.bmlnwpu.org/us/tools/PredictingDTI_S2/METHODS.html. Copyright © 2015 Elsevier Inc. All rights reserved.
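The neighbor-based scoring idea, inferring an unknown drug-target pair from the known interactions of similar drugs and similar targets, can be sketched generically. This is the common baseline formulation, not the paper's exact enhanced-similarity or super-target method:

```python
def dti_score(drug_sim, target_sim, known, d, t):
    """Score a candidate pair (d, t) by how strongly drugs similar to d
    interact with targets similar to t in the known interaction matrix.
    drug_sim and target_sim are square similarity matrices; known[i][j]
    is 1 if drug i is known to interact with target j."""
    n_d, n_t = len(drug_sim), len(target_sim)
    return sum(drug_sim[d][i] * known[i][j] * target_sim[j][t]
               for i in range(n_d) for j in range(n_t))
```

The paper's contributions slot into this skeleton: richer similarity matrices (beyond 2D structure and sequence) and grouping targets into super-targets so that interactions missing for one member can be borrowed from the group.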

  11. A new k-epsilon model consistent with Monin-Obukhov similarity theory

    DEFF Research Database (Denmark)

    van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.

    2017-01-01

    A new k-ε model is introduced that is consistent with Monin-Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with ...
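For reference, the MOST surface-layer wind profile that such inlet conditions must reproduce; in the neutral limit it reduces to the logarithmic law, and the stability correction psi_m(z/L) is omitted in this sketch:

```python
import math

KAPPA = 0.40  # von Karman constant

def neutral_wind_profile(z, u_star, z0):
    """Neutral-limit Monin-Obukhov wind profile: u(z) = (u*/kappa) * ln(z/z0),
    with height z, friction velocity u_star, and roughness length z0.
    The stability correction psi_m(z/L) would be subtracted in general."""
    return (u_star / KAPPA) * math.log(z / z0)
```

A k-epsilon model "consistent with MOST" should transport such an inlet profile through an empty domain unchanged, which is the property the abstract says the earlier model lacks.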

  12. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement : performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 : representative p...

  13. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  14. Assessing intrinsic and specific vulnerability models ability to indicate groundwater vulnerability to groups of similar pesticides: A comparative study

    Science.gov (United States)

    Douglas, Steven; Dixon, Barnali; Griffin, Dale W.

    2018-01-01

    With continued population growth and increasing use of fresh groundwater resources, protection of this valuable resource is critical. A cost-effective means to assess the risk of groundwater contamination potential will provide a useful tool to protect these resources. Integrating geospatial methods offers a means to quantify the risk of contaminant potential in cost-effective and spatially explicit ways. This research was designed to compare the ability of intrinsic (DRASTIC) and specific (Attenuation Factor; AF) vulnerability models to indicate groundwater vulnerability areas by comparing model results to the presence of pesticides from groundwater sample datasets. A logistic regression was used to assess the relationship between the environmental variables and the presence or absence of pesticides within regions of varying vulnerability. According to the DRASTIC model, more than 20% of the study area is very highly vulnerable. Approximately 30% is very highly vulnerable according to the AF model. When groundwater concentrations of individual pesticides were compared to model predictions, the results were mixed. Model predictability improved when concentrations of the group of similar pesticides were compared to model results. Compared to the DRASTIC model, the AF model more accurately predicts the distribution of the number of contaminated wells within each vulnerability class.
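The logistic-regression step, relating a vulnerability score to pesticide presence or absence, can be sketched with scikit-learn. The scores and detection flags below are invented for illustration; the study used DRASTIC/AF vulnerability classes and groundwater sample data:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical rows: (vulnerability score, pesticide detected 0/1).
scores = [[10], [25], [40], [55], [70], [85], [100], [115], [130], [145]]
detected = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]

model = LogisticRegression().fit(scores, detected)
# A positive fitted coefficient means detection probability rises with the
# vulnerability score, i.e. the vulnerability model is informative.
```

Comparing how well each model's score separates detections from non-detections in such a regression is one way to operationalize the DRASTIC-versus-AF comparison the abstract reports.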

  15. Morbidity Rate Prediction of Dengue Hemorrhagic Fever (DHF) Using the Support Vector Machine and the Aedes aegypti Infection Rate in Similar Climates and Geographical Areas.

    Science.gov (United States)

    Kesorn, Kraisak; Ongruk, Phatsavee; Chompoosri, Jakkrawarn; Phumee, Atchara; Thavara, Usavadee; Tawatsin, Apiwat; Siriyasatien, Padet

    2015-01-01

    In the past few decades, several researchers have proposed highly accurate prediction models that have typically relied on climate parameters. However, climate factors can be unreliable and can lower the effectiveness of prediction when they are applied in locations where climate factors do not differ significantly. The purpose of this study was to improve a dengue surveillance system in areas with similar climate by exploiting the infection rate in the Aedes aegypti mosquito and using the support vector machine (SVM) technique for forecasting the dengue morbidity rate. Areas with a high incidence of dengue outbreaks in central Thailand were studied. The proposed framework consisted of three major parts: 1) data integration, 2) model construction, and 3) model evaluation. We discovered that the Ae. aegypti female and larval mosquito infection rates were significantly positively associated with the morbidity rate. Thus, an increasing infection rate of female mosquitoes and larvae led to a higher number of dengue cases, and prediction performance increased when those predictors were integrated into the predictive model. In this research, we applied the SVM with the radial basis function (RBF) kernel to forecast the high morbidity rate and take precautions to prevent the development of pervasive dengue epidemics. The experimental results showed that the introduced parameters significantly increased the prediction accuracy to 88.37% on the test set data, and led to the highest performance compared to state-of-the-art forecasting models. The infection rates of the Ae. aegypti female mosquitoes and larvae improved morbidity rate forecasting efficiency better than the climate parameters used in classical frameworks. We demonstrated that the SVM-R-based model has high generalization performance and obtained the highest prediction performance compared to classical models as measured by the accuracy, sensitivity, specificity, and
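A minimal sketch of SVM regression with an RBF kernel, as in the abstract's SVM-R model, using scikit-learn. The infection-rate features, morbidity values, and hyperparameters below are invented for illustration:

```python
from sklearn.svm import SVR

# Hypothetical monthly records: [female infection rate, larval infection rate].
X = [[0.1, 0.2], [0.2, 0.3], [0.3, 0.5], [0.5, 0.6], [0.6, 0.8], [0.8, 0.9]]
# Made-up morbidity rates per 100,000 population for those months.
y = [5.0, 9.0, 14.0, 22.0, 27.0, 35.0]

svr = SVR(kernel="rbf", C=100.0, gamma=1.0).fit(X, y)
```

Higher mosquito infection rates should map to higher predicted morbidity, which is the relationship the study exploits in place of climate parameters.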

  16. Thermophysical properties of model solutions similar to juice: part II

    Directory of Open Access Journals (Sweden)

    Sílvia Cristina Sobottka Rolim de Moura

    2005-09-01

    Full Text Available Thermophysical properties, density and viscosity of model solutions similar to juices were experimentally determined. The results were compared to those predicted by mathematical models (STATISTICA 6.0) and to values reported in the literature, as a function of chemical composition. To define the model solutions, a star design was adopted, fixing the acid content at 1.5% and varying water (82-98.5%), carbohydrate (0-15%) and fat (0-1.5%). Density was determined with a pycnometer. Viscosity was determined with a Brookfield LVF viscometer. Thermal conductivity was calculated from the thermal diffusivity and specific heat values (presented in Part I of this work, MOURA [7]) and the density. The results for each property were analyzed by the response surface method. Significant results were found, indicating that the models represent the changes of

  17. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  18. Filling Predictable and Unpredictable Gaps, with and without Similarity-Based Interference: Evidence for LIFG Effects of Dependency Processing.

    Science.gov (United States)

    Leiken, Kimberly; McElree, Brian; Pylkkänen, Liina

    2015-01-01

    One of the most replicated findings in neurolinguistic literature on syntax is the increase of hemodynamic activity in the left inferior frontal gyrus (LIFG) in response to object relative (OR) clauses compared to subject relative clauses. However, behavioral studies have shown that ORs are primarily only costly when similarity-based interference is involved and recently, Leiken and Pylkkänen (2014) showed with magnetoencephalography (MEG) that an LIFG increase at an OR gap is also dependent on such interference. However, since ORs always involve a cue indicating an upcoming dependency formation, OR dependencies could be processed already prior to the gap-site and thus show no sheer dependency effects at the gap itself. To investigate the role of gap predictability in LIFG dependency effects, this MEG study compared ORs to verb phrase ellipsis (VPE), which was used as an example of a non-predictable dependency. Additionally, we explored LIFG sensitivity to filler-gap order by including right node raising structures, in which the order of filler and gap is reverse to that of ORs and VPE. Half of the stimuli invoked similarity-based interference and half did not. Our results demonstrate that LIFG effects of dependency can be elicited regardless of whether the dependency is predictable, the stimulus materials evoke similarity-based interference, or the filler precedes the gap. Thus, contrary to our own prior data, the current findings suggest a highly general role for the LIFG in dependency interpretation that is not limited to environments involving similarity-based interference. Additionally, the millisecond time-resolution of MEG allowed for a detailed characterization of the temporal profiles of LIFG dependency effects across our three constructions, revealing that the timing of these effects is somewhat construction-specific.

  19. Filling Predictable and Unpredictable Gaps, with and without Similarity-Based Interference: Evidence for LIFG Effects of Dependency Processing

    Science.gov (United States)

    Leiken, Kimberly; McElree, Brian; Pylkkänen, Liina

    2015-01-01

    One of the most replicated findings in neurolinguistic literature on syntax is the increase of hemodynamic activity in the left inferior frontal gyrus (LIFG) in response to object relative (OR) clauses compared to subject relative clauses. However, behavioral studies have shown that ORs are primarily only costly when similarity-based interference is involved and recently, Leiken and Pylkkänen (2014) showed with magnetoencephalography (MEG) that an LIFG increase at an OR gap is also dependent on such interference. However, since ORs always involve a cue indicating an upcoming dependency formation, OR dependencies could be processed already prior to the gap-site and thus show no sheer dependency effects at the gap itself. To investigate the role of gap predictability in LIFG dependency effects, this MEG study compared ORs to verb phrase ellipsis (VPE), which was used as an example of a non-predictable dependency. Additionally, we explored LIFG sensitivity to filler-gap order by including right node raising structures, in which the order of filler and gap is reverse to that of ORs and VPE. Half of the stimuli invoked similarity-based interference and half did not. Our results demonstrate that LIFG effects of dependency can be elicited regardless of whether the dependency is predictable, the stimulus materials evoke similarity-based interference, or the filler precedes the gap. Thus, contrary to our own prior data, the current findings suggest a highly general role for the LIFG in dependency interpretation that is not limited to environments involving similarity-based interference. Additionally, the millisecond time-resolution of MEG allowed for a detailed characterization of the temporal profiles of LIFG dependency effects across our three constructions, revealing that the timing of these effects is somewhat construction-specific. PMID:26635655

  20. Filling predictable and unpredictable gaps, with and without similarity-based interference: Evidence for LIFG effects of dependency processing

    Directory of Open Access Journals (Sweden)

    Kimberly eLeiken

    2015-11-01

    Full Text Available One of the most replicated findings in neurolinguistic literature on syntax is the increase of hemodynamic activity in the left inferior frontal gyrus (LIFG) in response to object relative clauses (ORs) compared to subject relative clauses (SRs). However, behavioral studies have shown that ORs are primarily only costly when similarity-based interference is involved and recently, Leiken & Pylkkänen (2014) showed with magnetoencephalography (MEG) that an LIFG increase at an OR gap is also dependent on such interference. However, since ORs always involve a cue indicating an upcoming dependency formation, OR dependencies could be processed already prior to the gap-site and thus show no sheer dependency effects at the gap itself. To investigate the role of gap predictability in LIFG dependency effects, this MEG study compared ORs to Verb Phrase Ellipsis (VPE), which was used as an example of a non-predictable dependency. Additionally, we explored LIFG sensitivity to filler-gap order by including Right Node Raising (RNR) structures, in which the order of filler and gap is reverse to that of ORs and VPE. Half of the stimuli invoked similarity-based interference and half did not. Our results demonstrate that LIFG effects of dependency can be elicited regardless of whether the dependency is predictable, the stimulus materials evoke similarity-based interference, or the filler precedes the gap. Thus, contrary to our own prior data, the current findings suggest a highly general role for the LIFG in dependency interpretation that is not limited to environments involving similarity-based interference. Additionally, the millisecond time-resolution of MEG allowed for a detailed characterization of the temporal profiles of LIFG dependency effects across our three constructions, revealing that the timing of these effects is somewhat construction-specific.

  1. Similarity of models of the observed navigational situation as multicriteria objects with probabilistic priorities

    Directory of Open Access Journals (Sweden)

    Popov Yu.A.

    2015-03-01

    Full Text Available A variant of calculating the similarity relation between two models of a navigational situation, treated as multicriteria objects with probabilistic priorities, is considered. The priorities were obtained with the help of the vessel's observation system.

  2. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert

    2015-02-19

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions, druggable therapeutic targets, and determination of pathogenicity. Results: We have developed PhenomeNET 2, a system that enables similarity-based searches over a large repository of phenotypes in real-time. It can be used to identify strains of model organisms that are phenotypically similar to human patients, diseases that are phenotypically similar to model organism phenotypes, or drug effect profiles that are similar to the phenotypes observed in a patient or model organism. PhenomeNET 2 is available at http://aber-owl.net/phenomenet. Conclusions: Phenotype-similarity searches can provide a powerful tool for the discovery and investigation of molecular mechanisms underlying an observed phenotypic manifestation. PhenomeNET 2 facilitates user-defined similarity searches and allows researchers to analyze their data within a large repository of human, mouse and rat phenotypes.
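The similarity-based search PhenomeNET 2 performs can be illustrated with a minimal sketch, assuming Jaccard overlap of annotation sets as a stand-in for the ontology-aware semantic similarity the system actually uses; all profile names and term identifiers below are invented:

```python
def jaccard(a, b):
    """Jaccard similarity between two phenotype-annotation sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def rank_matches(query, repository):
    """Rank repository profiles by descending similarity to the query profile."""
    return sorted(repository.items(),
                  key=lambda item: jaccard(query, item[1]),
                  reverse=True)

# Toy phenotype profiles with made-up identifiers (not real HPO/MP terms)
repository = {
    "mouse_strain_A": {"P:001", "P:002", "P:003"},
    "mouse_strain_B": {"P:004", "P:005"},
    "rat_strain_C":   {"P:002", "P:004"},
}
query = {"P:001", "P:002"}
ranked = rank_matches(query, repository)
```

Real phenotype matching weights terms by their information content in the ontology; plain set overlap ignores term specificity, so this is only the shape of the computation.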

  3. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2013-09-01

    Holoscopic imaging, also known as integral imaging, has been recently attracting the attention of the research community, as a promising glassless 3D technology due to its ability to create a more realistic depth illusion than the current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display scalable 3D holoscopic coding approach is required. Hence, this paper presents a display scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed, which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can significantly improve the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.
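Self-similarity compensated prediction exploits the fact that the micro-images of a holoscopic frame repeat across the frame. A much-simplified sketch, assuming a plain SAD block search over a causal (already-coded) area; a real codec would add a rate-distortion cost, motion-vector coding, and sub-block partitioning:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks (lists of rows)."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def get_block(frame, top, left, size):
    """Extract a size x size block from a frame stored as a list of rows."""
    return [row[left:left + size] for row in frame[top:top + size]]

def best_self_similar_block(frame, top, left, size):
    """Search rows fully above the target block (a simplified causal area)
    for the candidate block with the smallest SAD against the target."""
    target = get_block(frame, top, left, size)
    best = None
    for t in range(0, top - size + 1):
        for l in range(0, len(frame[0]) - size + 1):
            cost = sad(target, get_block(frame, t, l, size))
            if best is None or cost < best[0]:
                best = (cost, t, l)
    return best

# Toy frame: the same 2x2 pattern appears at rows 0-1 and rows 4-5
frame = [[0] * 8 for _ in range(8)]
for r in (0, 1, 4, 5):
    frame[r][0] = frame[r][1] = 9
best = best_self_similar_block(frame, top=4, left=0, size=2)
```

The search finds the earlier copy of the pattern at cost zero, which is exactly the redundancy self-similarity prediction removes.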

  4. Expression Pattern Similarities Support the Prediction of Orthologs Retaining Common Functions after Gene Duplication Events

    Science.gov (United States)

    Haberer, Georg; Panda, Arup; Das Laha, Shayani; Ghosh, Tapas Chandra; Schäffner, Anton R.

    2016-01-01

    The identification of functionally equivalent, orthologous genes (functional orthologs) across genomes is necessary for accurate transfer of experimental knowledge from well-characterized organisms to others. This frequently relies on automated, coding sequence-based approaches such as OrthoMCL, Inparanoid, and KOG, which usually work well for one-to-one homologous states. However, this strategy does not reliably work for plants due to the occurrence of extensive gene/genome duplication. Frequently, for one query gene, multiple orthologous genes are predicted in the other genome, and it is not clear a priori from sequence comparison and similarity which one preserves the ancestral function. We have studied 11 organ-dependent and stress-induced gene expression patterns of 286 Arabidopsis lyrata duplicated gene groups and compared them with the respective Arabidopsis (Arabidopsis thaliana) genes to predict putative expressologs and nonexpressologs based on gene expression similarity. Promoter sequence divergence as an additional tool to substantiate functional orthology only partially overlapped with expressolog classification. By cloning eight A. lyrata homologs and complementing them in the respective four Arabidopsis loss-of-function mutants, we experimentally proved that predicted expressologs are indeed functional orthologs, while nonexpressologs or nonfunctionalized orthologs are not. Our study demonstrates that even a small set of gene expression data, in addition to sequence homologies, is instrumental in the assignment of functional orthologs in the presence of multiple orthologs. PMID:27303025
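Expressolog assignment, choosing among duplicated orthologs by expression-pattern similarity, can be sketched as follows; gene names and profiles are invented, and the study's actual pipeline correlates expression across 11 organ and stress conditions:

```python
def pearson(xs, ys):
    """Pearson correlation between two expression profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def pick_expressolog(query_profile, candidate_profiles):
    """Among duplicated candidate orthologs, pick the one whose expression
    pattern correlates best with the query gene's profile."""
    return max(candidate_profiles,
               key=lambda g: pearson(query_profile, candidate_profiles[g]))

# Hypothetical expression levels of one A. thaliana gene across 5 conditions
at_gene = [1.0, 5.0, 2.0, 8.0, 1.0]
al_duplicates = {
    "AL_copy1": [1.2, 4.8, 2.1, 7.5, 0.9],   # similar pattern -> likely expressolog
    "AL_copy2": [6.0, 1.0, 5.5, 0.5, 6.2],   # diverged pattern
}
best_match = pick_expressolog(at_gene, al_duplicates)
```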

  5. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  6. Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments

    Directory of Open Access Journals (Sweden)

    Kamila M. Jozwik

    2017-10-01

    Full Text Available Recent advances in deep convolutional neural networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. Models derived from
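Representational similarity analysis, used above to fit DNN layers and conceptual models to human judgments, can be sketched as follows (toy data; RSA studies typically use rank correlation and noise ceilings, omitted here):

```python
import numpy as np

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the feature vectors of each pair of images."""
    return 1.0 - np.corrcoef(features)

def rsa_fit(model_features, reference_rdm):
    """Correlate the upper triangles of a model RDM and a reference RDM
    (Pearson here for brevity)."""
    m = rdm(model_features)
    iu = np.triu_indices_from(m, k=1)
    return np.corrcoef(m[iu], reference_rdm[iu])[0, 1]

# Toy 'layer activations' for 3 images: images 0 and 1 similar, image 2 different
feats = np.array([[1.0, 2.0, 3.0, 4.0],
                  [1.0, 2.1, 3.0, 4.0],
                  [4.0, 3.0, 2.0, 1.0]])
# Hypothetical human dissimilarity judgments with the same structure
human = np.array([[0.0, 0.1, 1.9],
                  [0.1, 0.0, 1.9],
                  [1.9, 1.9, 0.0]])
score = rsa_fit(feats, human)
```

A high score means the candidate representation orders image pairs by dissimilarity the same way human judgments do, which is the comparison the study runs for every DNN layer and conceptual model.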

  7. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...

  8. Predictive Model Assessment for Count Data

    National Research Council Canada - National Science Library

    Czado, Claudia; Gneiting, Tilmann; Held, Leonhard

    2007-01-01

    .... In case studies, we critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. Key words: Calibration...

  9. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to e.g., the model be too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs......) for modeling and forecasting. It is argued that this gives models and predictions which better reflect reality. The SDE approach also offers a more adequate framework for modeling and a number of efficient tools for model building. A software package (CTSM-R) for SDE-based modeling is briefly described....... that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore the prediction of the states is given as the solution to the ODEs and hence assumed...
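The SDE-based modeling the record describes replaces exact ODE solutions with stochastic paths. A minimal sketch of simulating one SDE with the Euler-Maruyama scheme (CTSM-R itself does parameter estimation, not just simulation; this only illustrates the model class):

```python
import math
import random

def euler_maruyama(mu, sigma, x0, dt, n, seed=0):
    """Simulate dX = mu(X) dt + sigma(X) dW with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment ~ N(0, dt)
        x = x + mu(x) * dt + sigma(x) * dw
        path.append(x)
    return path

# Mean-reverting (Ornstein-Uhlenbeck) example: drift pulls the state toward 0
path = euler_maruyama(mu=lambda x: -0.5 * x, sigma=lambda x: 0.1,
                      x0=5.0, dt=0.01, n=1000)
```

Unlike an ODE solution, re-running with a different seed gives a different path; the model describes a distribution of trajectories rather than one deterministic forecast.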

  10. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
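The competing prediction rules compared in the study can be stated compactly. A sketch with made-up numbers for a hypothetical repeating sequence (the time-predictable rule forecasts the next inter-event time from the previous event's slip and the fault loading rate; the fixed-recurrence rule uses the mean of past intervals):

```python
def fixed_recurrence_forecast(intervals):
    """Fixed-recurrence model: the next inter-event time is the mean of past intervals."""
    return sum(intervals) / len(intervals)

def time_predictable_forecast(last_slip, loading_rate):
    """Time-predictable model: the next event occurs once tectonic loading
    has re-accumulated the slip released by the previous event."""
    return last_slip / loading_rate

intervals = [2.1, 1.9, 2.0, 2.2]   # years, hypothetical repeating sequence
forecast_fixed = fixed_recurrence_forecast(intervals)
forecast_tp = time_predictable_forecast(last_slip=4.4, loading_rate=2.0)  # cm, cm/yr
```

The study's finding is that residuals of forecasts like `forecast_fixed` are smaller than those of forecasts like `forecast_tp` across the observed sequences.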

  11. On Measuring Process Model Similarity based on High-level Change Operations

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas

    2007-01-01

    For various applications there is the need to compare the similarity between two process models. For example, given the as-is and to-be models of a particular business process, we would like to know how much they differ from each other and how we can efficiently transform the as-is to the to-be model.

  12. On Measuring Process Model Similarity Based on High-Level Change Operations

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas; Li, Qing

    2008-01-01

    For various applications there is the need to compare the similarity between two process models. For example, given the as-is and to-be models of a particular business process, we would like to know how much they differ from each other and how we can efficiently transform the as-is to the to-be model.

  13. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing.

    Science.gov (United States)

    Leong, Siow Hoo; Ong, Seng Huat

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan and select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, that is, changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.
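The core of the approach is Gaussian mixture model clustering fitted by expectation-maximization. A didactic 1-D, two-component EM sketch (the paper's algorithm clusters multiple partial images and adapts the number of components, which is omitted here):

```python
import math
import random

def em_gmm_1d(data, iters=50):
    """Tiny EM fit of a two-component 1-D Gaussian mixture.
    A didactic sketch, not the paper's multi-mixture-model algorithm."""
    mu = [min(data), max(data)]      # deterministic, well-separated initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(x - mu[j]) ** 2 / (2 * var[j])) for j in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means, and variances
        for j in (0, 1):
            nj = sum(r[j] for r in resp)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, data)) / nj, 1e-6)
            w[j] = nj / len(data)
    return mu, var, w

rng = random.Random(1)
data = ([rng.gauss(0.0, 0.5) for _ in range(100)]
        + [rng.gauss(5.0, 0.5) for _ in range(100)])
mu, var, w = em_gmm_1d(data)
```

The fitted component parameters play the role of the local-feature representations that the paper then compares with its modified Bayes factor.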

  14. Predictive models for arteriovenous fistula maturation.

    Science.gov (United States)

    Al Shakarchi, Julien; McGrogan, Damian; Van der Veer, Sabine; Sperrin, Matthew; Inston, Nicholas

    2016-05-07

    Haemodialysis (HD) is a lifeline therapy for patients with end-stage renal disease (ESRD). A critical factor in the survival of renal dialysis patients is the surgical creation of vascular access, and international guidelines recommend arteriovenous fistulas (AVF) as the gold standard of vascular access for haemodialysis. Despite this, AVFs have been associated with high failure rates. Although risk factors for AVF failure have been identified, their utility for predicting AVF failure through predictive models remains unclear. The objectives of this review are to systematically and critically assess the methodology and reporting of studies developing prognostic predictive models for AVF outcomes and assess them for suitability in clinical practice. Electronic databases were searched for studies reporting prognostic predictive models for AVF outcomes. Dual review was conducted to identify studies that reported on the development or validation of a model constructed to predict AVF outcome following creation. Data were extracted on study characteristics, risk predictors, statistical methodology, model type, as well as validation process. We included four different studies reporting five different predictive models. The parameters identified as common to all scoring systems were age and cardiovascular disease. This review has found a small number of predictive models in vascular access. The disparity between the studies limits the development of a unified predictive model.

  15. Model Predictive Control Fundamentals | Orukpe | Nigerian Journal ...

    African Journals Online (AJOL)

    Model Predictive Control (MPC) has developed considerably over the last two decades, both within the research control community and in industries. MPC strategy involves the optimization of a performance index with respect to some future control sequence, using predictions of the output signal based on a process model, ...

  16. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optim...

  17. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  18. Detecting Local Residue Environment Similarity for Recognizing Near-Native Structure Models

    Science.gov (United States)

    Kim, Hyungrae; Kihara, Daisuke

    2014-01-01

    We developed a new representation of local amino acid environments in protein structures called the Side-chain Depth Environment (SDE). An SDE defines a local structural environment of a residue considering the coordinates and the depth of amino acids located in the vicinity of the side-chain centroid of the residue. SDEs are general enough that similar SDEs are found in protein structures with globally different folds. Using SDEs, we developed a procedure called PRESCO (Protein Residue Environment SCOre) for selecting native or near-native models from a pool of computational models. The procedure searches similar residue environments observed in a query model against a set of representative native protein structures to quantify how native-like the SDEs in the model are. When benchmarked on commonly used computational model datasets, our PRESCO compared favorably with the other existing scoring functions in selecting native and near-native models. PMID:25132526

  19. Hybrid approaches to physiologic modeling and prediction

    Science.gov (United States)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
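The hybrid idea, a data-driven correction layered on a first-principles model, can be sketched as follows, assuming an AR(1) model of the physical model's residuals (the paper also evaluates a neural-network variant; all numbers below are illustrative):

```python
def fit_ar1(residuals):
    """Least-squares AR(1) coefficient phi for a residual series r_t ≈ phi * r_{t-1}."""
    x, y = residuals[:-1], residuals[1:]
    denom = sum(v * v for v in x)
    return sum(a * b for a, b in zip(x, y)) / denom if denom else 0.0

def hybrid_predict(physics_prediction, last_residual, phi):
    """Hybrid forecast: first-principles prediction plus the AR(1) forecast
    of the physical model's own recent error."""
    return physics_prediction + phi * last_residual

# Hypothetical residuals of a first-principles core-temperature model (deg C)
residuals = [0.40]
for _ in range(20):
    residuals.append(0.8 * residuals[-1])

phi = fit_ar1(residuals)
corrected = hybrid_predict(physics_prediction=37.1,
                           last_residual=residuals[-1], phi=phi)
```

Because the residuals of a physiological model are typically autocorrelated, even this one-coefficient correction can remove a persistent bias from the first-principles forecast.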

  20. Experimental comparison of the ability of Dalton based and similarity theory correlations to predict water evaporation rate in different convection regimes

    Science.gov (United States)

    Jodat, Amin; Moghiman, Mohammad; Anbarsooz, Morteza

    2012-08-01

    This paper investigates the ability of two widely used evaporation models, Dalton-based correlations and similarity theory, by comparing them with experimental measurements. A series of experiments was carried out over a wide range of water temperatures and air velocities, for 0.01 ≤ Gr/Re² ≤ 100, in a rectangular heated pool. The results show that for the forced convection regime satisfactory results can be achieved using the modified Dalton correlations, while similarity theory underpredicts the evaporation rate because of ripples that appear on the free water surface. In the free convection regime, Dalton-based correlations, even with modification, are not able to produce acceptable results. For the mixed convection regime, neither similarity theory nor the unmodified Dalton-based correlations capture the mildly non-linear relation between water evaporation rate and vapor pressure difference, but both give relatively satisfactory results. A dimensionless correlation based on the experimental data from all convection regimes is proposed to cover different water surface geometries and air flow conditions.
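A Dalton-type correlation relates the evaporation rate to wind speed and the vapor-pressure difference between the water surface and the air. A sketch of the functional form, with placeholder coefficients (the study's fitted values are not reproduced here):

```python
def dalton_evaporation_rate(wind_speed, p_surface, p_air, a=0.1, b=0.05):
    """Dalton-type correlation: E = (a + b*u) * (p_s - p_a).
    Coefficients a and b are illustrative placeholders, not fitted values."""
    return (a + b * wind_speed) * (p_surface - p_air)
```

The modifications the paper discusses amount to adjusting `a` and `b` (or adding correction terms) per convection regime; the linear dependence on the pressure difference is exactly what breaks down in the mixed-convection regime.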

  1. On two-layer models and the similarity functions for the PBL

    Science.gov (United States)

    Brown, R. A.

    1982-01-01

    An operational Planetary Boundary Layer model which employs similarity principles and two-layer patching to provide state-of-the-art parameterization for the PBL flow is used to study the popularly used similarity functions, A and B. The expected trends with stratification are shown. The effects of baroclinicity, secondary flow, humidity, latitude, surface roughness variation and choice of characteristic height scale are discussed.

  2. Similarity and accuracy of mental models formed during nursing handovers: A concept mapping approach.

    Science.gov (United States)

    Drach-Zahavy, Anat; Broyer, Chaya; Dagan, Efrat

    2017-09-01

    Shared mental models are crucial for constructing mutual understanding of the patient's condition during a clinical handover. Yet scant research, if any, has empirically explored the mental models of the parties involved in a clinical handover. This study aimed to examine the similarities among mental models of incoming and outgoing nurses, and to test their accuracy by comparing them with mental models of expert nurses. A cross-sectional study, exploring nurses' mental models via the concept mapping technique. 40 clinical handovers. Data were collected via concept mapping of the incoming, outgoing, and expert nurses' mental models (a total of 120 concept maps). Similarity and accuracy indexes for concepts and associations were calculated to compare the different maps. About one fifth of the concepts emerged in both outgoing and incoming nurses' concept maps (concept similarity=23%±10.6). Concept accuracy indexes were 35%±18.8 for incoming and 62%±19.6 for outgoing nurses' maps. Although incoming nurses absorbed fewer concepts and associations (23% and 12%, respectively), they partially closed the gap (35% and 22%, respectively) relative to expert nurses' maps. The correlations between concept similarities and incoming as well as outgoing nurses' concept accuracy were significant (r=0.43). In their maps, outgoing nurses added information concerning the processes enacted during the shift, beyond the expert nurses' gold standard. Two seemingly contradictory processes in the handover were identified: "information loss", captured by the low similarity indexes among the mental models of incoming and outgoing nurses; and "information restoration", based on the accuracy indexes of the mental models of the incoming nurses. Based on mental model theory, we propose possible explanations for these processes and derive implications for how to improve a clinical handover. Copyright © 2017 Elsevier Ltd. All rights reserved.
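The similarity and accuracy indexes can be operationalized over sets of concepts. A sketch with one plausible definition of each index, using invented handover concepts (the study's exact formulas may differ):

```python
def similarity_index(map_a, map_b):
    """Percent of concepts shared by two concept maps, relative to their union.
    One plausible operationalization; the study's exact formula may differ."""
    union = map_a | map_b
    return 100.0 * len(map_a & map_b) / len(union) if union else 0.0

def accuracy_index(nurse_map, expert_map):
    """Percent of the expert ('gold standard') concepts present in a nurse's map."""
    return 100.0 * len(nurse_map & expert_map) / len(expert_map)

# Invented handover concepts for illustration
outgoing = {"pain score", "analgesia", "vital signs", "family update"}
incoming = {"pain score", "analgesia"}
expert = {"pain score", "analgesia", "vital signs", "lab results"}
```

On these toy sets, the incoming nurse shares half the concepts with the outgoing nurse and recovers half of the expert's concepts, mirroring the "information loss" pattern the study reports.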

  3. Different combinations of atomic interactions predict protein-small molecule and protein-DNA/RNA affinities with similar accuracy.

    Science.gov (United States)

    Dias, Raquel; Kolazckowski, Bryan

    2015-11-01

    Interactions between proteins and other molecules play essential roles in all biological processes. Although it is widely held that a protein's ligand specificity is determined primarily by its three-dimensional structure, the general principles by which structure determines ligand binding remain poorly understood. Here we use statistical analyses of a large number of protein-ligand complexes with associated binding-affinity measurements to quantitatively characterize how combinations of atomic interactions contribute to ligand affinity. We find that there are significant differences in how atomic interactions determine ligand affinity for proteins that bind small chemical ligands, those that bind DNA/RNA and those that interact with other proteins. Although protein-small molecule and protein-DNA/RNA binding affinities can be accurately predicted from structural data, models predicting one type of interaction perform poorly on the others. Additionally, the particular combinations of atomic interactions required to predict binding affinity differed between small-molecule and DNA/RNA data sets, consistent with the conclusion that the structural bases determining ligand affinity differ among interaction types. In contrast to what we observed for small-molecule and DNA/RNA interactions, no statistical models were capable of predicting protein-protein affinity with >60% correlation. We demonstrate the potential usefulness of protein-DNA/RNA binding prediction as a possible tool for high-throughput virtual screening to guide laboratory investigations, suggesting that quantitative characterization of diverse molecular interactions may have practical applications as well as fundamentally advancing our understanding of how molecular structure translates into function. © 2015 The Authors. Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc.

  4. Similarities and differences in gastrointestinal physiology between neonates and adults: a physiologically based pharmacokinetic modeling perspective.

    Science.gov (United States)

    Yu, Guo; Zheng, Qing-Shan; Li, Guo-Fu

    2014-11-01

    Physiologically based pharmacokinetic (PBPK) modeling holds great promise for anticipating the quantitative changes in pharmacokinetics in pediatric populations relative to adults, and has served as a useful tool in regulatory reviews. Although the availability of specialized software for PBPK modeling has facilitated the widespread application of this approach in regulatory submissions, challenges in the implementation and interpretation of pediatric PBPK models remain great, and controversies and knowledge gaps remain regarding neonatal development of the gastrointestinal tract. This commentary highlights the similarities and differences in gastrointestinal pH and transit time between neonates and adults from a PBPK modeling perspective. Understanding the similarities and differences in these physiological parameters governing oral absorption will promote good practice in the use of pediatric PBPK modeling to assess oral exposure and pharmacokinetics in neonates.

  5. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  6. Genome-Wide Expression Profiling of Five Mouse Models Identifies Similarities and Differences with Human Psoriasis

    Science.gov (United States)

    Swindell, William R.; Johnston, Andrew; Carbajal, Steve; Han, Gangwen; Wohn, Christian; Lu, Jun; Xing, Xianying; Nair, Rajan P.; Voorhees, John J.; Elder, James T.; Wang, Xiao-Jing; Sano, Shigetoshi; Prens, Errol P.; DiGiovanni, John; Pittelkow, Mark R.; Ward, Nicole L.; Gudjonsson, Johann E.

    2011-01-01

    Development of a suitable mouse model would facilitate the investigation of pathomechanisms underlying human psoriasis and would also assist in development of therapeutic treatments. However, while many psoriasis mouse models have been proposed, no single model recapitulates all features of the human disease, and standardized validation criteria for psoriasis mouse models have not been widely applied. In this study, whole-genome transcriptional profiling is used to compare gene expression patterns manifested by human psoriatic skin lesions with those that occur in five psoriasis mouse models (K5-Tie2, imiquimod, K14-AREG, K5-Stat3C and K5-TGFbeta1). While the cutaneous gene expression profiles associated with each mouse phenotype exhibited statistically significant similarity to the expression profile of psoriasis in humans, each model displayed distinctive sets of similarities and differences in comparison to human psoriasis. For all five models, correspondence to the human disease was strong with respect to genes involved in epidermal development and keratinization. Immune and inflammation-associated gene expression, in contrast, was more variable between models as compared to the human disease. These findings support the value of all five models as research tools, each with identifiable areas of convergence to and divergence from the human disease. Additionally, the approach used in this paper provides an objective and quantitative method for evaluation of proposed mouse models of psoriasis, which can be strategically applied in future studies to score strengths of mouse phenotypes relative to specific aspects of human psoriasis. PMID:21483750

  7. A Global Model for Bankruptcy Prediction.

    Science.gov (United States)

    Alaminos, David; Del Castillo, Agustín; Fernández, Manuel Ángel

    2016-01-01

    The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy.
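The logistic-regression framework behind such bankruptcy models can be sketched as follows; the predictors and coefficient values are illustrative placeholders, not the study's estimates:

```python
import math

def bankruptcy_probability(ratios, coefficients, intercept):
    """Logistic regression score: P(bankrupt) = 1 / (1 + exp(-(b0 + sum(bi * xi)))).
    Predictor names and coefficients below are illustrative, not the paper's."""
    z = intercept + sum(b * x for b, x in zip(coefficients, ratios))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical predictors: leverage (debt/assets) and profitability (ROA)
coef = [3.0, -4.0]
p_risky = bankruptcy_probability([0.9, -0.05], coef, intercept=-1.0)
p_safe = bankruptcy_probability([0.2, 0.10], coef, intercept=-1.0)
```

A "global" model in the paper's sense is this same functional form with one coefficient vector fitted on firms from all regions, compared against separate vectors fitted per region.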

  8. A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure

    Directory of Open Access Journals (Sweden)

    Yang Zhou

    2016-01-01

    Full Text Available Generating visual place cells (VPCs) is an important problem in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and existing methods for generating VPCs, this paper abstracts a model of generating visual place cells based on environment perception and a similarity measure. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruitment of a new place cell. Following this process, a specific method for generating VPCs is presented. External reference landmarks are obtained from local invariant image features, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of firing field (AFFF) and the firing rate's threshold (FRT).
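The similarity measure described (Euclidean distance passed through a Gaussian function) and a recruitment rule driven by the firing rate's threshold (FRT) can be sketched as follows. The feature-vector representation and default parameter values are illustrative assumptions, not taken from the paper.

```python
import math

def similarity(feature_a, feature_b, sigma=1.0):
    """Gaussian similarity over Euclidean distance: s = exp(-d^2 / (2*sigma^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(feature_a, feature_b))
    return math.exp(-d2 / (2.0 * sigma * sigma))

def should_recruit_new_cell(observation, cell_centres, frt=0.6):
    """Recruit a new visual place cell when no existing cell's similarity
    (its would-be firing rate) exceeds the firing rate's threshold (FRT)."""
    return all(similarity(observation, c) < frt for c in cell_centres)
```

Raising `sigma` widens every cell's firing field, while raising `frt` makes recruitment of new cells more frequent.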

  9. Computational hydrodynamic comparison of a mini vessel and a USP 2 dissolution testing system to predict the dynamic operating conditions for similarity of dissolution performance.

    Science.gov (United States)

    Wang, Bing; Bredael, Gerard; Armenante, Piero M

    2018-03-25

    The hydrodynamic characteristics of a mini vessel and a USP 2 dissolution testing system were obtained and compared to predict the tablet-liquid mass transfer coefficient from velocity distributions near the tablet and establish the dynamic operating conditions under which dissolution in mini vessels could be conducted to generate concentration profiles similar to those in the USP 2. Velocity profiles were obtained experimentally using Particle Image Velocimetry (PIV). Computational Fluid Dynamics (CFD) was used to predict the velocity distribution and strain rate around a model tablet. A CFD-based mass transfer model was also developed. When plotted against strain rate, the predicted tablet-liquid mass transfer coefficient was found to be independent of the system where it was obtained, implying that a tablet would dissolve at the same rate in both systems provided that the concentration gradient between the tablet surface and the bulk is the same, the tablet surface area per unit liquid volume is identical, and the two systems are operated at the appropriate agitation speeds specified in this work. The results of this work will help dissolution scientists operate mini vessels so as to predict the dissolution profiles in the USP 2, especially during the early stages of drug development. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
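The ensemble idea, combining a whole-genome predictor with a GWAMA risk score, amounts to fitting combination weights on held-out data. A minimal sketch, assuming just two predictors and no intercept (the paper's actual meta-model specification is not given in this abstract):

```python
def fit_meta_weights(p1, p2, y):
    """Least-squares weights (w1, w2) for y ~ w1*p1 + w2*p2 (no intercept),
    solved from the 2x2 normal equations. p1, p2: predictor outputs on a
    validation set; y: observed phenotypes."""
    a11 = sum(x * x for x in p1)
    a12 = sum(x * z for x, z in zip(p1, p2))
    a22 = sum(z * z for z in p2)
    b1 = sum(x * t for x, t in zip(p1, y))
    b2 = sum(z * t for z, t in zip(p2, y))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det)
```

In practice one predictor would be, e.g., a ridge-regression genomic predictor and the other a polygenic risk score built from published summary statistics.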

  11. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.
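The derived decision rule (one major criterion, two minor criteria) can be expressed directly. The return labels below paraphrase the abstract's risk categories; the function is a sketch of the rule, not the validated clinical instrument.

```python
def verification_risk(dystrophy_pct, long_horizontal, long_vertical):
    """Risk stratification following the derived criteria: a major criterion
    (fingerprint dystrophy area >= 25%) and two minor criteria (long
    horizontal lines, long vertical lines)."""
    if dystrophy_pct >= 25:
        return "almost always fails"
    minors = int(long_horizontal) + int(long_vertical)
    if minors == 2:
        return "high risk"
    if minors == 1:
        return "low risk"
    return "almost always passes"
```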

  12. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  13. Experimental Study of Dowel Bar Alternatives Based on Similarity Model Test

    Directory of Open Access Journals (Sweden)

    Chichun Hu

    2017-01-01

    Full Text Available In this study, a small-scale accelerated loading test based on similarity theory and the Accelerated Pavement Analyzer was developed to evaluate dowel bars with different materials and cross-sections. A jointed concrete specimen consisting of one dowel was designed as the scaled model for the test, and each specimen was subjected to 864 thousand loading cycles. Deflections between jointed slabs were measured with dial indicators, and strains of the dowel bars were monitored with strain gauges. The load transfer efficiency, differential deflection, and dowel-concrete bearing stress for each case were calculated from these measurements. The test results indicated that the effect of the dowel modulus on load transfer efficiency can be characterized based on the similarity model test developed in the study. Moreover, the round steel dowel was found to perform similarly to the larger FRP dowel, and the elliptical dowel can be preferentially considered in practice.

  14. Bianchi VI{sub 0} and III models: self-similar approach

    Energy Technology Data Exchange (ETDEWEB)

    Belinchon, Jose Antonio, E-mail: abelcal@ciccp.e [Departamento de Fisica, ETS Arquitectura, UPM, Av. Juan de Herrera 4, Madrid 28040 (Spain)

    2009-09-07

    We study several cosmological models with Bianchi VI{sub 0} and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive for the equation of state. We also study a perfect fluid model with time-varying constants, G and LAMBDA. As in other studied models we find that the behaviours of G and LAMBDA are related: if G behaves as a growing function of time then LAMBDA is a positive decreasing function of time, but if G is decreasing then LAMBDA{sub 0} is negative. We end by studying a massive cosmic string model, placing special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.

  15. Numerical model of a non-steady atmospheric planetary boundary layer, based on similarity theory

    DEFF Research Database (Denmark)

    Zilitinkevich, S.S.; Fedorovich, E.E.; Shabalova, M.V.

    1992-01-01

    A numerical model of a non-stationary atmospheric planetary boundary layer (PBL) over a horizontally homogeneous flat surface is derived on the basis of similarity theory. The two most typical turbulence regimes are reproduced: one corresponding to a convectively growing PBL and another correspon...

  16. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  17. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  8. Schoolchildren have similar predicted prevalences of inadequate intakes as toddlers in village populations in Egypt, Kenya, and Mexico.

    Science.gov (United States)

    Murphy, S P; Calloway, D H; Beaton, G H

    1995-09-01

    To examine the probability of inadequacy of schoolchild nutrient intakes; to compare the predicted adequacy of schoolchild diets to previously published data for toddler diets and to examine the relative adequacy of the diets of sibling pairs. Food consumption data were measured 2 days per month across approximately one year. Mean nutrient intakes were calculated and compared to international standards using a probability approach. Villages in Egypt, Kenya, and Mexico. 285 school-age children (7-9 years of age) and 255 toddlers (18-30 months of age) including 94 sibling pairs. Within a country, there was a striking similarity between the nutrients predicted to be inadequate in schoolchild and toddler diets across all nutrients except vitamin B12, which was more likely to be inadequate among schoolchildren. Nutrients of concern for both ages in one or more countries included iron, zinc, calcium, riboflavin, and vitamins B12, D, E, and A. For 94 toddler-schoolchild pairs, siblings at high risk of nutrient inadequacy could be identified from an indicator child with a sensitivity of 43-100% and a specificity of 59-100%. Nutrient adequacy tends to be a household characteristic for these two ages of children. Nutritional interventions that target the household have a high probability of benefitting multiple age groups of children.
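The sensitivity and specificity ranges quoted for the sibling-pair screening follow from a standard 2x2 confusion table. A minimal sketch; the counts below are made-up examples, not the study's data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN): fraction of truly at-risk siblings flagged.
    Specificity = TN/(TN+FP): fraction of not-at-risk siblings not flagged."""
    return tp / (tp + fn), tn / (tn + fp)
```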

  9. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  10. Predicting and Modeling RNA Architecture

    Science.gov (United States)

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  11. Multiple Steps Prediction with Nonlinear ARX Models

    OpenAIRE

    Zhang, Qinghua; Ljung, Lennart

    2007-01-01

    NLARX (NonLinear AutoRegressive with eXogenous inputs) models are frequently used in black-box nonlinear system identification. Though it is easy to make one-step-ahead predictions with such models, multiple-step prediction is far from trivial. The main difficulty is that in general there is no easy way to compute the mathematical expectation of an output conditioned on past measurements. An optimal solution would require intensive numerical computations related to nonlinear filtering. The pur...

  12. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are better or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.

  13. Model complexity control for hydrologic prediction

    Science.gov (United States)

    Schoups, G.; van de Giesen, N. C.; Savenije, H. H. G.

    2008-12-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed. We compare three model complexity control methods for hydrologic prediction, namely, cross validation (CV), Akaike's information criterion (AIC), and structural risk minimization (SRM). Results show that simulation of water flow using non-physically-based models (polynomials in this case) leads to increasingly better calibration fits as the model complexity (polynomial order) increases. However, prediction uncertainty worsens for complex non-physically-based models because of overfitting of noisy data. Incorporation of physically based constraints into the model (e.g., storage-discharge relationship) effectively bounds prediction uncertainty, even as the number of parameters increases. The conclusion is that overparameterization and equifinality do not lead to a continued increase in prediction uncertainty, as long as models are constrained by such physical principles. Complexity control of hydrologic models reduces parameter equifinality and identifies the simplest model that adequately explains the data, thereby providing a means of hydrologic generalization and classification. SRM is a promising technique for this purpose, as it (1) provides analytic upper bounds on prediction uncertainty, hence avoiding the computational burden of CV, and (2) extends the applicability of classic methods such as AIC to finite data. The main hurdle in applying SRM is the need for an a priori estimation of the complexity of the hydrologic model, as measured by its Vapnik-Chervonenkis (VC) dimension. Further research is needed in this area.
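Of the three complexity-control criteria compared, AIC is the simplest to sketch for least-squares fits: adding parameters always shrinks the residual sum of squares (RSS) on the calibration data, but the 2k penalty can still reject the bigger model. The candidate numbers below are illustrative, not from the study.

```python
import math

def aic(rss, n, k):
    """Akaike's information criterion for a least-squares fit:
    AIC = n*ln(RSS/n) + 2k, where k counts fitted parameters."""
    return n * math.log(rss / n) + 2 * k

def select_model(candidates, n):
    """candidates: list of (name, rss, k) tuples fitted to n data points.
    Returns the name of the AIC-best model."""
    return min(candidates, key=lambda c: aic(c[1], n, c[2]))[0]

# A 5th-order polynomial fits slightly better (lower RSS) than a 1st-order
# one, but the parameter penalty makes the simpler model win.
```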

  14. Climate predictability and prediction skill on seasonal time scales over South America from CHFP models

    Science.gov (United States)

    Osman, Marisol; Vera, C. S.

    2017-10-01

    This work presents an assessment of the predictability and skill of climate anomalies over South America. The study was made considering a multi-model ensemble of seasonal forecasts for surface air temperature, precipitation and regional circulation, from coupled global circulation models included in the Climate Historical Forecast Project. Predictability was evaluated through the estimation of the signal-to-total variance ratio, while prediction skill was assessed by computing anomaly correlation coefficients. Over the continent, both indicators present higher values in the tropics than in the extratropics for both surface air temperature and precipitation. Moreover, predictability and prediction skill for temperature are slightly higher in DJF than in JJA, while for precipitation they exhibit similar levels in both seasons. The largest values of predictability and skill for both variables and seasons are found over northwestern South America, while modest but still significant values are found for extratropical precipitation over southeastern South America and the extratropical Andes. The predictability levels of both variables in ENSO years are slightly higher, although with the same spatial distribution, than those obtained considering all years. Nevertheless, predictability in the tropics for both variables and seasons diminishes in both warm and cold ENSO years with respect to that in all years. The latter can be attributed to changes in the signal rather than in the noise. Predictability and prediction skill for low-level winds and upper-level zonal winds over South America were also assessed. Maximum levels of predictability for low-level winds were found where maximum mean values are observed, i.e. in the regions associated with the equatorial trade winds, the midlatitude westerlies and the South American Low-Level Jet. Predictability maxima for upper-level zonal winds are located where the subtropical jet peaks.
Seasonal changes in wind predictability are observed that seem to be related to
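The two diagnostics used in this record, the signal-to-total variance ratio (predictability) and the anomaly correlation coefficient (skill), can be sketched as follows. The ensemble layout (a list of member anomalies per year) is an assumed simplification.

```python
def signal_to_total_ratio(ensemble):
    """Variance of the ensemble-mean 'signal' across years, divided by the
    total variance over all members. ensemble: one list of member anomalies
    per year (equal member counts assumed)."""
    means = [sum(year) / len(year) for year in ensemble]
    grand = sum(means) / len(means)
    signal = sum((m - grand) ** 2 for m in means) / len(means)
    members = [x for year in ensemble for x in year]
    total = sum((x - grand) ** 2 for x in members) / len(members)
    return signal / total

def anomaly_correlation(forecast, observed):
    """Pearson correlation between forecast and observed anomalies."""
    n = len(forecast)
    mf, mo = sum(forecast) / n, sum(observed) / n
    cov = sum((f - mf) * (o - mo) for f, o in zip(forecast, observed))
    vf = sum((f - mf) ** 2 for f in forecast)
    vo = sum((o - mo) ** 2 for o in observed)
    return cov / (vf ** 0.5 * vo ** 0.5)
```

When all members in a year agree, the ratio is 1 (fully predictable signal); when the ensemble mean is flat and members scatter, it approaches 0 (all noise).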

  15. Predictive power of nuclear-mass models

    Directory of Open Access Journals (Sweden)

    Yu. A. Litvinov

    2013-12-01

    Full Text Available Ten different theoretical models are tested for their predictive power in the description of nuclear masses. Two sets of experimental masses are used for the test: the older set of 2003 and the newer one of 2011. The predictive power is studied in two regions of nuclei: the global region (Z, N ≥ 8) and the heavy-nuclei region (Z ≥ 82, N ≥ 126). No clear correlation is found between the predictive power of a model and the accuracy of its description of the masses.
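Predictive power on a mass set is conventionally scored by the rms deviation between model and experiment. A minimal sketch (in practice the values would be binding energies in keV):

```python
def rms_deviation(predicted, experimental):
    """Root-mean-square deviation between model predictions and
    experimental masses over a region of nuclei."""
    n = len(predicted)
    return (sum((p - e) ** 2 for p, e in zip(predicted, experimental)) / n) ** 0.5
```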

  16. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor allocates a much lower share of wealth to stocks compared to a standard investor.

  17. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been very significant, and traditional statistical prediction methods suffer from low precision and poor interpretability, so they can neither theoretically guarantee the generalization ability of the prediction model nor explain the models effectively. Therefore, in combination with theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industry that can produce a large number of cargoes, and further predicts the static logistics generation of Zhuanghe and its hinterlands. By integrating various factors that can affect regional logistics requirements, this study established a logistics requirements potential model based on spatial economic principles, expanding logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  18. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The models are based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters. Four (4) different models which consider different parameter combinations are developed by the authors. Results are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9% respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones.
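Of the weighting techniques listed, rank sum is the simplest to sketch: each parameter gets a score of n - r + 1 from its importance rank r (1 = most important), normalised so the weights sum to one. The ranks below are illustrative, not the paper's.

```python
def rank_sum_weights(ranks):
    """Rank-sum weights for n parameters: weight_i proportional to
    (n - r_i + 1), where r_i is the importance rank (1 = most important)."""
    n = len(ranks)
    scores = [n - r + 1 for r in ranks]
    total = sum(scores)
    return [s / total for s in scores]

# For three parameters ranked 1, 2, 3 the weights are 3/6, 2/6, 1/6.
```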

  19. A general model for metabolic scaling in self-similar asymmetric networks.

    Directory of Open Access Journals (Sweden)

    Alexander Byers Brummer

    2017-03-01

    Full Text Available How a particular attribute of an organism changes or scales with its body size is known as an allometry. Biological allometries, such as metabolic scaling, have been hypothesized to result from selection to maximize how vascular networks fill space yet minimize internal transport distances and resistances. The West, Brown, Enquist (WBE) model argues that these two principles (space-filling and energy minimization) are (i) general principles underlying the evolution of the diversity of biological networks across plants and animals and (ii) can be used to predict how the resulting geometry of biological networks then governs their allometric scaling. Perhaps the most central biological allometry is how metabolic rate scales with body size. A core assumption of the WBE model is that networks are symmetric with respect to their geometric properties. That is, any two given branches within the same generation in the network are assumed to have identical lengths and radii. However, biological networks are rarely if ever symmetric. An open question is: does incorporating asymmetric branching change or influence the predictions of the WBE model? We derive a general network model that relaxes the symmetry assumption and define two classes of asymmetrically bifurcating networks. We show that asymmetric branching can be incorporated into the WBE model. This asymmetric version of the WBE model results in several theoretical predictions for the structure, physiology, and metabolism of organisms, specifically in the case of the cardiovascular system. We show how network asymmetry can now be incorporated into many allometric scaling relationships via total network volume. Most importantly, we show that the 3/4 metabolic scaling exponent of Kleiber's Law can still be attained within many asymmetric networks.

  20. Model-free aftershock forecasts constructed from similar sequences in the past

    Science.gov (United States)

    van der Elst, N.; Page, M. T.

    2017-12-01

    The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises" - sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence, and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of past sequences' outcomes, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method, for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast.
The similarity
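A minimal sketch of the similarity weighting described above, assuming the Poisson probability is evaluated with the target sequence's event count as the rate estimate (the abstract does not spell out the exact likelihood construction, so this is illustrative, not the authors' formula); the forecast is then a weighted quantile over past-sequence outcomes:

```python
import math

def poisson_pmf(k, lam):
    # probability of observing k events under a Poisson rate lam
    return math.exp(-lam) * lam ** k / math.factorial(k)

def similarity_weights(target_count, past_counts):
    # Weight each past sequence by the Poisson probability of its event
    # count, taking the target sequence's count as the intensity estimate.
    lam = max(target_count, 1e-9)
    w = [poisson_pmf(k, lam) for k in past_counts]
    total = sum(w)
    return [x / total for x in w]

def weighted_forecast(past_outcomes, weights, q=0.5):
    # Weighted quantile of past-sequence outcomes -> forecast value.
    pairs = sorted(zip(past_outcomes, weights))
    cum = 0.0
    for outcome, w in pairs:
        cum += w
        if cum >= q:
            return outcome
    return pairs[-1][0]
```

A past sequence whose early event count matches the target's receives the largest weight, so its outcome dominates the forecast distribution.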

  1. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, depending on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  2. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  3. Towards predictive resistance models for agrochemicals by combining chemical and protein similarity via proteochemometric modelling

    OpenAIRE

    van Westen, Gerard J. P.; Bender, Andreas; Overington, John P.

    2014-01-01

    Resistance to pesticides is an increasing problem in agriculture. Despite practices such as phased use and cycling of ‘orthogonally resistant’ agents, resistance remains a major risk to national and global food security. To combat this problem, there is a need for both new approaches for pesticide design, as well as for novel chemical entities themselves. As summarized in this opinion article, a technique termed ‘proteochemometric modelling’ (PCM), from the field of chemoinformatics, could ai...

  4. Risk Prediction Models for Oral Clefts Allowing for Phenotypic Heterogeneity

    Directory of Open Access Journals (Sweden)

    Yalu eWen

    2015-08-01

    Full Text Available Oral clefts are common birth defects that have a major impact on the affected individual, their family and society. World-wide, the incidence of oral clefts is 1/700 live births, making them the most common craniofacial birth defects. The successful prediction of oral clefts may help identify sub-populations at high risk, and promote new diagnostic and therapeutic strategies. Nevertheless, developing a clinically useful oral clefts risk prediction model remains a great challenge. Compelling evidence suggests the etiologies of oral clefts are highly heterogeneous, and the development of a risk prediction model with consideration of phenotypic heterogeneity may potentially improve the accuracy of a risk prediction model. In this study, we applied a previously developed statistical method to investigate the risk prediction on sub-phenotypes of oral clefts. Our results suggested subtypes of cleft lip and palate have similar genetic etiologies (AUC=0.572) with subtypes of cleft lip only (AUC=0.589), while the subtypes of cleft palate only (CPO) have heterogeneous underlying mechanisms (AUCs for soft CPO and hard CPO are 0.617 and 0.623, respectively). This highlighted the potential that the hard and soft forms of CPO have their own mechanisms despite sharing some of the genetic risk factors. Compared with conventional methods for risk prediction modeling, our method considers phenotypic heterogeneity of a disease, which potentially improves the accuracy for predicting each sub-phenotype of oral clefts.

  5. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    Directory of Open Access Journals (Sweden)

    Jin Dai

    2014-01-01

    Full Text Available The similarity between objects is a core research area of data mining. In order to reduce the interference of the uncertainty of natural language, a similarity measurement between normal cloud models is applied to text classification. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed. It can efficiently accomplish the conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative concepts extracted from texts of the same category are jumped up into a whole category concept. According to the cloud similarity between the test text and each category concept, the test text is assigned to the most similar category. A comparison among different text classifiers over different feature selection sets fully shows that not only does CCJU-TC have a strong ability to adapt to different text features, but its classification performance is also better than that of traditional classifiers.

  6. Posterior predictive checking of multiple imputation models.

    Science.gov (United States)

    Nguyen, Cattram D; Lee, Katherine J; Carlin, John B

    2015-07-01

    Multiple imputation is gaining popularity as a strategy for handling missing data, but there is a scarcity of tools for checking imputation models, a critical step in model fitting. Posterior predictive checking (PPC) has been recommended as an imputation diagnostic. PPC involves simulating "replicated" data from the posterior predictive distribution of the model under scrutiny. Model fit is assessed by examining whether the analysis from the observed data appears typical of results obtained from the replicates produced by the model. A proposed diagnostic measure is the posterior predictive "p-value", an extreme value of which (i.e., a value close to 0 or 1) suggests a misfit between the model and the data. The aim of this study was to evaluate the performance of the posterior predictive p-value as an imputation diagnostic. Using simulation methods, we deliberately misspecified imputation models to determine whether posterior predictive p-values were effective in identifying these problems. When estimating the regression parameter of interest, we found that more extreme p-values were associated with poorer imputation model performance, although the results highlighted that traditional thresholds for classical p-values do not apply in this context. A shortcoming of the PPC method was its reduced ability to detect misspecified models with increasing amounts of missing data. Despite the limitations of posterior predictive p-values, they appear to have a valuable place in the imputer's toolkit. In addition to automated checking using p-values, we recommend imputers perform graphical checks and examine other summaries of the test quantity distribution. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
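As a hedged illustration of the PPC recipe described above (a generic sketch, not the authors' simulation design), a posterior predictive p-value can be computed by simulating replicated datasets from the model under scrutiny and counting how often a replicated test statistic is at least as extreme as the observed one:

```python
import random
import statistics

def posterior_predictive_pvalue(observed, draw_replicate, stat, n_rep=200):
    # PPC: p-value = fraction of replicated datasets whose test statistic
    # is >= the observed statistic; values near 0 or 1 flag model misfit.
    t_obs = stat(observed)
    hits = sum(stat(draw_replicate()) >= t_obs for _ in range(n_rep))
    return hits / n_rep

random.seed(0)
data = [random.gauss(0, 1) for _ in range(100)]

# Well-specified model: replicates drawn from the same N(0, 1)
p_ok = posterior_predictive_pvalue(
    data, lambda: [random.gauss(0, 1) for _ in range(100)], statistics.mean)

# Badly misspecified model: replicates centred at 5, so every replicated
# mean exceeds the observed mean and the p-value is pinned at 1
p_bad = posterior_predictive_pvalue(
    data, lambda: [random.gauss(5, 1) for _ in range(100)], statistics.mean)
```

The extreme p-value for the misspecified model is the diagnostic signal; as the study notes, the usual classical thresholds do not carry over, so graphical checks of the replicated statistic's distribution are a sensible complement.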

  7. The Similar Structure Method for Solving the Model of Fractal Dual-Porosity Reservoir

    Directory of Open Access Journals (Sweden)

    Li Xu

    2013-01-01

    Full Text Available This paper proposes a similar structure method (SSM) to solve the boundary value problem of the extended modified Bessel equation. The method can efficiently solve a second-order linear homogeneous differential equation’s boundary value problem and obtain its solutions’ similar structure. A mathematical model is set up for dual-porosity media, in which the influence of fractal dimension, spherical flow, wellbore storage, and skin factor is taken into consideration. Research on the model found that it is a special type of the extended modified Bessel equation in Laplace space. Then, the formation pressure and wellbore pressure under three types of outer boundaries (infinite, constant pressure, and closed) are obtained via SSM in Laplace space. Combining SSM with the Stehfest algorithm, we propose the similar structure method algorithm (SSMA), which can be used to clearly calculate wellbore pressure and pressure derivatives of reservoir seepage models. Type curves of fractal dual-porosity spherical flow are plotted by SSMA. The presented algorithm promotes the development of well test analysis software.

  8. Hierarchical Model for the Similarity Measurement of a Complex Holed-Region Entity Scene

    Directory of Open Access Journals (Sweden)

    Zhanlong Chen

    2017-11-01

    Full Text Available Complex multi-holed-region entity scenes (i.e., sets of random regions with holes) are common in spatial database systems, spatial query languages, and the Geographic Information System (GIS). A multi-holed-region (a region with an arbitrary number of holes) is an abstraction of the real world that primarily represents geographic objects that have more than one interior boundary, such as areas that contain several lakes or lakes that contain islands. When the similarity of two complex holed-region entity scenes is measured, the number of regions in the scenes and the number of holes in the regions are usually different between the two scenes, which complicates the matching relationships of holed-regions and holes. The aim of this research is to develop several holed-region similarity metrics and propose a hierarchical model to comprehensively measure the similarity between two complex holed-region entity scenes. The procedure first divides a complex entity scene into three layers: a complex scene, a micro-spatial-scene, and a simple entity (hole). The relationships between the adjacent layers are considered to be sets of relationships, and each level of similarity measurements is nested with the adjacent one. Next, entity matching is performed from top to bottom, while the similarity results are calculated from local to global. In addition, we utilize position graphs to describe the distribution of the holed-regions and subsequently describe the directions between the holes using a feature matrix. A case study that uses the Great Lakes in North America in 1986 and 2015 as experimental data illustrates the entire similarity measurement process between two complex holed-region entity scenes. The experimental results show that the hierarchical model accounts for the relationships of the different layers in the entire complex holed-region entity scene. 
The model can effectively calculate the similarity of complex holed-region entity scenes, even if the

  9. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...... in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance....

  10. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
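The min-max selection rule and the AAKR estimator named above are simple enough to sketch; the following hedged Python illustration (function and parameter names are ours) keeps every training vector that carries the minimum or maximum of some variable, and forms the AAKR estimate as a Gaussian-kernel weighted average of the prototypes:

```python
import math

def min_max_select(X):
    # Min-max vector selection: keep each training vector that holds the
    # minimum or maximum of at least one variable, so the prototype set
    # spans the observed operating range.
    idx = set()
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        idx.add(col.index(min(col)))
        idx.add(col.index(max(col)))
    return sorted(idx)

def aakr_estimate(query, prototypes, h=1.0):
    # Auto-associative kernel regression: a local model built "on the fly"
    # by weighting each prototype with a Gaussian kernel of its squared
    # distance to the query, then averaging.
    ws = [math.exp(-sum((a - b) ** 2 for a, b in zip(query, p)) / (2 * h * h))
          for p in prototypes]
    total = sum(ws)
    return [sum(w * p[i] for w, p in zip(ws, prototypes)) / total
            for i in range(len(query))]
```

Sampling the training set first (here via min-max) is what relieves the per-query cost of comparing against every historical vector.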

  11. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated...... principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....

  12. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action to anticipate it. In order to detect the potential for bankruptcy, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), we show that the fuzzy k-NN method achieves the best performance, with accuracy 77.5%.

  13. State impulsive control strategies for a two-languages competitive model with bilingualism and interlinguistic similarity

    Science.gov (United States)

    Nie, Lin-Fei; Teng, Zhi-Dong; Nieto, Juan J.; Jung, Il Hyo

    2015-07-01

    For reasons of preserving endangered languages, we propose, in this paper, a novel two-languages competitive model with bilingualism and interlinguistic similarity, where state-dependent impulsive control strategies are introduced. The novel control model includes two control threshold values, which are different from the previous state-dependent impulsive differential equations. By using a qualitative analysis method, we show that the control model exhibits two stable positive order-1 periodic solutions under some general conditions. Moreover, numerical simulations clearly illustrate the main theoretical results and the feasibility of state-dependent impulsive control strategies. Meanwhile, numerical simulations also show that the state-dependent impulsive control strategy can be applied to other general two-languages competitive models and obtains the desired result. The results indicate that the fractions of two competitive languages can be kept within a reasonable level under almost any circumstances. A theoretical basis for finding new control measures to protect endangered languages is offered.

  14. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent System Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
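The "simple averaging" baseline the authors find competitive can be sketched as follows (a hedged illustration assuming 96 15-min intervals per day; the operators' actual models are not given in the abstract):

```python
def averaging_forecast(history, horizon=96):
    # Predict each 15-min slot of the next day as the mean of that slot
    # over the past days; `history` is a list of days, each a list of
    # `horizon` interval readings.
    n = len(history)
    return [sum(day[t] for day in history) / n for t in range(horizon)]
```

The finding that only a few days of history are needed corresponds to `history` being short: the per-slot mean stabilizes quickly for regular load profiles.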

  15. Hierarchical modeling of systems with similar components: A framework for adaptive monitoring and control

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo; Kolter, J. Zico

    2016-01-01

    System management includes the selection of maintenance actions depending on the available observations: when a system is made up of components known to be similar, data collected on one is also relevant for the management of others. This is typically the case of wind farms, which are made up of similar turbines. Optimal management of wind farms is an important task due to the high cost of turbine operation and maintenance: in this context, we recently proposed a method for planning and learning at system-level, called PLUS, built upon the Partially Observable Markov Decision Process (POMDP) framework, which treats transition and emission probabilities as random variables, and is therefore suitable for including model uncertainty. PLUS models the components as independent or identical. In this paper, we extend that formulation, allowing for a weaker similarity among components. The proposed approach, called Multiple Uncertain POMDP (MU-POMDP), models the components as POMDPs, and assumes the corresponding parameters as dependent random variables. Through this framework, we can calibrate specific degradation and emission models for each component while, at the same time, processing observations at system-level. We compare the performance of the proposed MU-POMDP with PLUS, and discuss its potential and computational complexity. - Highlights: • A computational framework is proposed for adaptive monitoring and control. • It adopts a scheme based on Markov Chain Monte Carlo for inference and learning. • Hierarchical Bayesian modeling is used to allow a system-level flow of information. • Results show potential of significant savings in management of wind farms.

  16. Are animal models predictive for humans?

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2009-01-01

    Full Text Available Abstract It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools, they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics.

  17. Collaborative Filtering Recommendation Based on Trust Model with Fused Similar Factor

    Directory of Open Access Journals (Sweden)

    Ye Li

    2017-01-01

    Full Text Available Recommender systems are beneficial to e-commerce sites, providing customers with product information and recommendations; recommendation systems are currently widely used in many fields. In an era of information explosion, the key challenge for a recommender system is to obtain valid information from the tremendous amount of information available and produce high quality recommendations. However, when facing a large amount of information, the traditional collaborative filtering algorithm usually suffers from a high degree of sparseness, which ultimately leads to low-accuracy recommendations. To tackle this issue, we propose a novel algorithm named Collaborative Filtering Recommendation Based on Trust Model with Fused Similar Factor, which is based on the trust model and is combined with user similarity. The novel algorithm takes into account the degree of interest overlap between two users and outperforms the recommendation based on the trust model alone in terms of precision, recall, diversity, and coverage. Additionally, the proposed model can effectively improve the efficiency of the collaborative filtering algorithm and achieve high performance.
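The abstract does not give the fusion formula, so the following is only a plausible sketch: blend a trust score with a rating-based similarity, scaled by the users' interest overlap (the function names and the weighting rule are our assumptions, not the paper's):

```python
def interest_overlap(items_u, items_v):
    # Jaccard overlap of the two users' rated-item sets, a simple proxy
    # for the "degree of interest overlap" mentioned in the abstract.
    u, v = set(items_u), set(items_v)
    return len(u & v) / len(u | v) if u | v else 0.0

def fused_weight(trust, similarity, overlap, alpha=0.5):
    # Hypothetical fusion rule: convex blend of trust and similarity,
    # damped by how much the two users' interests overlap.
    return overlap * (alpha * trust + (1 - alpha) * similarity)
```

Damping by overlap keeps a highly trusted but interest-disjoint neighbor from dominating a sparse user's predictions, which is one way such a fusion can mitigate sparseness.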

  18. Model predictive controller design of hydrocracker reactors

    OpenAIRE

    GÖKÇE, Dila

    2014-01-01

    This study summarizes the design of a Model Predictive Controller (MPC) in Tüpraş, İzmit Refinery Hydrocracker Unit Reactors. Hydrocracking process, in which heavy vacuum gasoil is converted into lighter and valuable products at high temperature and pressure is described briefly. Controller design description, identification and modeling studies are examined and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulate...

  19. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    Science.gov (United States)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address issues in the area of design customization, this paper presents the specification and application of constrained surface deformation, and reports an experimental performance comparison of three prevailing similarity assessment algorithms on the constrained surface deformation domain. Constrained surface deformation is becoming a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design, via measuring the difference level between the deformed new design and the initial sample model and indicating whether the difference level is within the limitation. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape histogram based method, the skeleton based method, and the U system moment based method. We analyze their basic functions and implementation methodologies in detail, and run a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as an industrial example for the experiments. The results show that the shape histogram based method achieved the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, which emphasizes the role of constraints during the assessment. Limited initial experimental results demonstrate that our algorithm outperforms the other three algorithms. A clear direction for future development is also drawn at the end of the paper.

  20. Embryo quality predictive models based on cumulus cells gene expression

    Directory of Open Access Journals (Sweden)

    Devjak R

    2016-06-01

    Full Text Available Since the introduction of in vitro fertilization (IVF) into clinical practice of infertility treatment, indicators for high quality embryos have been investigated. Cumulus cells (CC) have a specific gene expression profile according to the developmental potential of the oocyte they are surrounding, and therefore, specific gene expression could be used as a biomarker. The aim of our study was to combine more than one biomarker to observe improvement in the prediction of embryo development. In this study, 58 CC samples from 17 IVF patients were analyzed. This study was approved by the Republic of Slovenia National Medical Ethics Committee. Gene expression analysis [quantitative real time polymerase chain reaction (qPCR)] for five genes, analyzed according to embryo quality level, was performed. Two prediction models were tested for embryo quality prediction: a binary logistic and a decision tree model. As the main outcome, gene expression levels for five genes were taken and the area under the curve (AUC) for the two prediction models was calculated. Among the tested genes, AMHR2 and LIF showed a significant expression difference between high quality and low quality embryos. These two genes were used for the construction of two prediction models: the binary logistic model yielded an AUC of 0.72 ± 0.08 and the decision tree model yielded an AUC of 0.73 ± 0.03. The two different prediction models yielded similar predictive power to differentiate high and low quality embryos. In terms of eventual clinical decision making, the decision tree model resulted in easy-to-interpret rules that are highly applicable in clinical practice.
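The AUCs quoted above can be computed directly from model scores with the rank-sum (Mann-Whitney) formulation; a minimal sketch:

```python
def auc(scores, labels):
    # AUC = probability that a randomly chosen positive case is scored
    # above a randomly chosen negative case (ties count as half a win).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC near 0.72, as reported for the binary logistic model, means a high-quality embryo outranks a low-quality one about 72% of the time under the model's scores.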

  1. Self-similar measures in multi-sector endogenous growth models

    International Nuclear Information System (INIS)

    La Torre, Davide; Marsiglio, Simone; Mendivil, Franklin; Privileggi, Fabio

    2015-01-01

    We analyze two types of stochastic discrete time multi-sector endogenous growth models, namely a basic Uzawa–Lucas (1965, 1988) model and an extended three-sector version as in La Torre and Marsiglio (2010). As in the case of sustained growth the optimal dynamics of the state variables are not stationary, we focus on the dynamics of the capital ratio variables, and we show that, through appropriate log-transformations, they can be converted into affine iterated function systems converging to an invariant distribution supported on some (possibly fractal) compact set. This proves that also the steady state of endogenous growth models—i.e., the stochastic balanced growth path equilibrium—might have a fractal nature. We also provide some sufficient conditions under which the associated self-similar measures turn out to be either singular or absolutely continuous (for the three-sector model we only consider the singularity).

  2. A compressible wall-adapting similarity mixed model for large-eddy simulation of the impinging round jet

    Science.gov (United States)

    Lodato, Guido; Vervisch, Luc; Domingo, Pascale

    2009-03-01

    Wall-jet interaction is studied with large-eddy simulation (LES) in which a mixed-similarity subgrid scale (SGS) closure is combined with the wall-adapting local eddy-viscosity (WALE) model for the eddy-viscosity term. The macrotemperature and macropressure are introduced to deduce a weakly compressible form of the mixed-similarity model, and the relevant formulation for the energy equation is deduced accordingly. LES prediction capabilities are assessed by comparing flow statistical properties against experiment of an unconfined impinging round jet at Reynolds numbers of 23 000 and 70 000. To quantify the benefit of the proposed WALE-similarity mixed model, the lower Reynolds number simulations are also performed using the standard WALE and Lagrangian dynamic Smagorinsky approaches. The unsteady compressible Navier-Stokes equations are integrated over 2.9 M, 3.5 M, and 5.5 M node Cartesian grids with an explicit fourth-order finite volume solver. Nonreflecting boundary conditions are enforced using a methodology accounting for the three-dimensional character of the turbulent flow at boundaries. A correct wall scaling is achieved from the combination of similarity and WALE approaches; for this wall-jet interaction, the SGS closure terms can be computed in the near-wall region without the necessity of resorting to additional specific treatments. The possible impact of turbulent energy backscatter in such flow configurations is also addressed. It is found that, for the present configuration, the correct reproduction of reverse energy transfer plays a key role in the estimation of near-wall statistics, especially when the viscous sublayer is not properly resolved.

  3. The predictive performance and stability of six species distribution models.

    Science.gov (United States)

    Duan, Ren-Yan; Kong, Xiao-Quan; Huang, Min-Yi; Fan, Wei-Yi; Wang, Zhi-Gao

    2014-01-01

    Predicting species' potential geographical range with species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution areas using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared predictive performance by testing the consistency between observations and simulated distributions, and assessed stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p < 0.05). These four SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of the SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
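The stability criteria used above (mean, standard deviation and a 99% confidence interval of trial scores) can be sketched as follows; the trial AUC values are synthetic stand-ins for the 100 repeated model runs.

```python
import random
import statistics

def stability_summary(trial_scores):
    """Mean, standard deviation and a rough normal-theory 99% CI of the
    per-trial scores -- mirroring the stability criteria in the abstract."""
    m = statistics.mean(trial_scores)
    s = statistics.stdev(trial_scores)
    half = 2.576 * s / len(trial_scores) ** 0.5   # z_{0.995} * standard error
    return m, s, (m - half, m + half)

# Hypothetical AUC values over 100 trials for a "strong" and a "weak" SDM.
rng = random.Random(0)
strong = [0.9 + rng.gauss(0, 0.01) for _ in range(100)]
weak = [0.7 + rng.gauss(0, 0.05) for _ in range(100)]
print(stability_summary(strong))
print(stability_summary(weak))
```

A model with a higher mean score and a smaller standard deviation (hence a narrower interval) would fall into the abstract's high performance and stability group.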

  4. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between the National Aeronautics and Space Administration and the Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
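One simple member of the ensemble family discussed above is a skill-weighted average, in the spirit of Reliability Ensemble Averaging; the wake predictions and historical errors below are invented purely for illustration.

```python
# Skill-weighted multi-model ensemble: each wake model's prediction is
# weighted by the inverse of its historical RMS error, so historically
# better models pull the ensemble mean toward their forecast.
def ensemble_mean(predictions, rms_errors):
    weights = [1.0 / e for e in rms_errors]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total

# Three hypothetical vortex-descent predictions (m) and past RMS errors (m).
preds = [105.0, 98.0, 101.0]
errs = [2.0, 4.0, 1.0]
print(ensemble_mean(preds, errs))  # pulled toward the historically best model
```

Bayesian Model Averaging replaces these ad hoc weights with posterior model probabilities, but the combination step has the same weighted-sum form.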

  5. Thermodynamic modeling of activity coefficient and prediction of solubility: Part 1. Predictive models.

    Science.gov (United States)

    Mirmehrabi, Mahmoud; Rohani, Sohrab; Perry, Luisa

    2006-04-01

    A new activity coefficient model was developed from the excess Gibbs free energy in the form G^ex = c·A^a·x_1^b...x_n^b. The constants of the proposed model were considered to be functions of the solute and solvent dielectric constants, Hildebrand solubility parameters, and specific volumes of the solute and solvent molecules. The proposed model obeys the Gibbs-Duhem condition for activity coefficient models. To generalize the model and make it a purely predictive model without any adjustable parameters, its constants were found using the experimental activity coefficients and physical properties of 20 vapor-liquid systems. The predictive capability of the proposed model was tested by calculating the activity coefficients of 41 binary vapor-liquid equilibrium systems; the results showed good agreement with the experimental data in comparison with two other predictive models, the UNIFAC and Hildebrand models. The only data used for the prediction of activity coefficients were dielectric constants, Hildebrand solubility parameters, and specific volumes of the solute and solvent molecules. Furthermore, the proposed model was used to predict the activity coefficients of an organic compound, stearic acid, whose physical properties were available, in methanol and 2-butanone. The predicted activity coefficients, along with the thermal properties of stearic acid, were used to calculate the solubility of stearic acid in these two solvents, resulting in better agreement with the experimental data than the UNIFAC and Hildebrand predictive models.

  6. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare, using the Model Confidence Set procedure, the predictive capacity of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have great homogeneity in making predictions, whether for a stock market of a developed country or for a stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.

  7. A revised prediction model for natural conception.

    Science.gov (United States)

    Bensdorp, Alexandra J; van der Steeg, Jan Willem; Steures, Pieternel; Habbema, J Dik F; Hompes, Peter G A; Bossuyt, Patrick M M; van der Veen, Fulco; Mol, Ben W J; Eijkemans, Marinus J C

    2017-06-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis was to assess whether additional predictors can refine the Hunault model and extend its applicability. Consecutive subfertile couples with unexplained and mild male subfertility presenting in fertility clinics were asked to participate in a prospective cohort study. We constructed a multivariable prediction model with the predictors from the Hunault model and new potential predictors. The primary outcome, natural conception leading to an ongoing pregnancy, was observed in 1053 women of the 5184 included couples (20%). All predictors of the Hunault model were selected into the revised model, plus an additional seven: woman's body mass index, cycle length, basal FSH level, tubal status, history of previous pregnancies in the current relationship (ongoing pregnancies after natural conception, fertility treatment or miscarriages), semen volume, and semen morphology. Predictions from the revised model seem to concur better with observed pregnancy rates than those of the Hunault model: a c-statistic of 0.71 (95% CI 0.69 to 0.73) compared with 0.59 (95% CI 0.57 to 0.61). Copyright © 2017. Published by Elsevier Ltd.
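The c-statistic used to compare the two models above is the concordance probability, which can be computed directly from predicted probabilities and observed outcomes; the toy predictions below are hypothetical.

```python
def c_statistic(probs, outcomes):
    """Concordance: fraction of (event, non-event) pairs in which the event
    case received the higher predicted probability (ties count as 1/2)."""
    pos = [p for p, y in zip(probs, outcomes) if y == 1]
    neg = [p for p, y in zip(probs, outcomes) if y == 0]
    conc = ties = 0
    for p in pos:
        for n in neg:
            if p > n:
                conc += 1
            elif p == n:
                ties += 1
    return (conc + 0.5 * ties) / (len(pos) * len(neg))

# Toy data: every ongoing pregnancy (1) outranks every non-event (0).
probs = [0.9, 0.7, 0.6, 0.4, 0.2]
outcomes = [1, 1, 0, 0, 0]
print(c_statistic(probs, outcomes))  # 1.0 for this perfectly ranked toy data
```

A value of 0.5 corresponds to chance-level discrimination, so the move from 0.59 to 0.71 reported above is a substantial gain.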

  8. Initial virtual flight test for a dynamically similar aircraft model with control augmentation system

    Directory of Open Access Journals (Sweden)

    Linliang Guo

    2017-04-01

    Full Text Available To satisfy the validation requirements of flight control laws for advanced aircraft, wind-tunnel-based virtual flight testing has been implemented in a low-speed wind tunnel. A 3-degree-of-freedom gimbal, ventrally installed in the model, was used in conjunction with an actively controlled, dynamically similar model of the aircraft, which was equipped with an inertial measurement unit, an attitude and heading reference system, an embedded computer and servo-actuators. The model, which could be rotated freely around its center of gravity by the aerodynamic moments, together with the flow field, operator and real-time control system made up the closed-loop testing circuit. The model is statically unstable in the longitudinal direction, but it can fly stably in the wind tunnel with the control augmentation of the flight control laws. The experimental results indicate that the model responds well to the operator's instructions. The response of the model in the tests shows reasonable agreement with the simulation results; the difference in the angle-of-attack response is less than 0.5°. The effect of the stability augmentation and attitude control laws was validated in the test; meanwhile, the feasibility of the virtual flight test technique as a preliminary evaluation tool for advanced flight vehicle configuration research was also verified.

  9. A Deep Similarity Metric Learning Model for Matching Text Chunks to Spatial Entities

    Science.gov (United States)

    Ma, K.; Wu, L.; Tao, L.; Li, W.; Xie, Z.

    2017-12-01

    The matching of spatial entities with related text is a long-standing research topic that has received considerable attention over the years. This task aims at enriching the content of spatial entities and attaching spatial location information to text chunks. In the data fusion field, matching spatial entities with the corresponding describing text chunks is of great significance. However, most traditional matching methods rely fully on manually designed, task-specific linguistic features. This work proposes a Deep Similarity Metric Learning Model (DSMLM) based on a Siamese Neural Network to learn a similarity metric directly from the textual attributes of the spatial entity and the text chunk. The low-dimensional feature representations of the spatial entity and the text chunk can be learned separately. By employing the cosine distance to measure the matching degree between the vectors, the model makes matching pair vectors as close as possible, while making mismatching pairs as far apart as possible through supervised learning. In addition, extensive experiments and analysis on geological survey data sets show that our DSMLM model can effectively capture the matching characteristics between the text chunk and the spatial entity, and achieve state-of-the-art performance.
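The cosine-distance matching step of such a model can be sketched as follows; here the learned low-dimensional embeddings are replaced by simple bag-of-words vectors, an assumption made purely for illustration (the paper's model learns its representations with a Siamese network).

```python
import math
from collections import Counter

def bow(text):
    """Stand-in embedding: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical text chunk and candidate spatial entities.
chunk = bow("granite outcrop north of the river fault")
entities = {
    "Fault-A": bow("river fault zone granite"),
    "Basin-B": bow("sedimentary basin shale deposit"),
}
best = max(entities, key=lambda name: cosine(chunk, entities[name]))
print(best)
```

In the full model, supervised training pushes the cosine similarity of true (chunk, entity) pairs toward 1 and of mismatched pairs toward 0 before this nearest-candidate lookup is applied.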

  10. Modeling of locally self-similar processes using multifractional Brownian motion of Riemann-Liouville type

    Science.gov (United States)

    Muniandy, S. V.; Lim, S. C.

    2001-04-01

    Fractional Brownian motion (FBM) is widely used in the modeling of phenomena with power spectral density of power-law type. However, FBM has its limitation since it can only describe phenomena with monofractal structure or a uniform degree of irregularity characterized by the constant Hölder exponent. For more realistic modeling, it is necessary to take into consideration the local variation of irregularity, with the Hölder exponent allowed to vary with time (or space). One way to achieve such a generalization is to extend the standard FBM to multifractional Brownian motion (MBM) indexed by a Hölder exponent that is a function of time. This paper proposes an alternative generalization to MBM based on the FBM defined by the Riemann-Liouville type of fractional integral. The local properties of the Riemann-Liouville MBM (RLMBM) are studied and they are found to be similar to those of the standard MBM. A numerical scheme to simulate the locally self-similar sample paths of the RLMBM for various types of time-varying Hölder exponents is given. The local scaling exponents are estimated based on the local growth of the variance and the wavelet scalogram methods. Finally, an example of the possible applications of RLMBM in the modeling of multifractal time series is illustrated.
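A minimal numerical scheme for RLMBM sample paths, assuming the standard discretisation of the Riemann-Liouville fractional integral and an illustrative linearly varying Hölder function (the paper's own scheme and exponent choices may differ):

```python
import math
import random

# Discretised Riemann-Liouville MBM:
#   X(t_n) = (1 / Gamma(H(t_n) + 1/2)) * sum_{k < n} (t_n - t_k)^(H(t_n) - 1/2) dB_k
# where dB_k are Gaussian increments and H(t) is the time-varying Holder exponent.
def rlmbm(n=500, T=1.0, holder=lambda t: 0.3 + 0.4 * t, seed=7):
    rng = random.Random(seed)
    dt = T / n
    increments = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
    path = [0.0]
    for i in range(1, n + 1):
        t = i * dt
        h = holder(t)                      # local Holder exponent at time t
        kernel = sum((t - k * dt) ** (h - 0.5) * increments[k] for k in range(i))
        path.append(kernel / math.gamma(h + 0.5))
    return path

path = rlmbm()
print(len(path), path[0])
```

Setting `holder` to a constant recovers the ordinary Riemann-Liouville FBM, which makes the multifractional extension easy to compare against the monofractal case.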

  11. Vertex labeling and routing in self-similar outerplanar unclustered graphs modeling complex networks

    International Nuclear Information System (INIS)

    Comellas, Francesc; Miralles, Alicia

    2009-01-01

    This paper introduces a labeling and optimal routing algorithm for a family of modular, self-similar, small-world graphs with clustering zero. Many properties of this family are comparable to those of networks associated with technological and biological systems with low clustering, such as the power grid, some electronic circuits and protein networks. For these systems, the existence of models with an efficient routing protocol is of interest to design practical communication algorithms in relation to dynamical processes (including synchronization) and also to understand the underlying mechanisms that have shaped their particular structure.

  12. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and also macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics in examining the robustness of the predictive power of these factors.

  13. Modelling language evolution: Examples and predictions

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  14. Self-similarities of periodic structures for a discrete model of a two-gene system

    Energy Technology Data Exchange (ETDEWEB)

    Souza, S.L.T. de, E-mail: thomaz@ufsj.edu.br [Departamento de Física e Matemática, Universidade Federal de São João del-Rei, Ouro Branco, MG (Brazil); Lima, A.A. [Escola de Farmácia, Universidade Federal de Ouro Preto, Ouro Preto, MG (Brazil); Caldas, I.L. [Instituto de Física, Universidade de São Paulo, São Paulo, SP (Brazil); Medrano-T, R.O. [Departamento de Ciências Exatas e da Terra, Universidade Federal de São Paulo, Diadema, SP (Brazil); Guimarães-Filho, Z.O. [Aix-Marseille Univ., CNRS PIIM UMR6633, International Institute for Fusion Science, Marseille (France)

    2012-03-12

    We report self-similar properties of periodic structures remarkably organized in the two-parameter space for a two-gene system, described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system by using Lyapunov exponents and isoperiodic diagrams identifying periodic windows, denominated Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both periodic windows. We also identify Fibonacci-type series and the Golden ratio for Arnold tongues, and period multiple-of-three windows for shrimps. -- Highlights: ► The existence of noticeable periodic windows has been reported recently for several nonlinear systems. ► The periodic window distributions appear highly organized in two-parameter space. ► We characterize self-similar properties of Arnold tongues and shrimps for a two-gene model. ► We determine the period of the Arnold tongues recognizing a Fibonacci-type sequence. ► We explore self-similar features of the shrimps identifying multiple period-three structures.

  15. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world’s population and the change in the climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and controlling of a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  16. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    One of the major challenges with the increase in wind power generation is the uncertain nature of wind speed. So far the uncertainty about wind speed has been presented through probability distributions. Also the existing models that consider the uncertainty of the wind speed primarily view ... The predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using a wind speed distribution whose parameters are known or estimated, the parameters are considered as random, with variations according to probability distributions. The Bayesian predictive model for a Rayleigh, which only has a single model scale parameter, has been proposed, along with closed-form posterior ...
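A Monte Carlo sketch of a Bayesian predictive model for Rayleigh wind speed, assuming a conjugate inverse-gamma prior on the squared scale parameter; the hyperparameters and the synthetic data are illustrative, not taken from the paper (which derives closed-form posteriors).

```python
import math
import random

# Rayleigh likelihood f(x | s) = (x / s^2) exp(-x^2 / (2 s^2)) is conjugate
# to an inverse-gamma prior on theta = s^2: posterior is
# InvGamma(a0 + n, b0 + sum(x_i^2) / 2).  Predictive draws integrate out theta.
def rayleigh_predictive(data, a0=2.0, b0=1.0, n_draws=20000, seed=3):
    rng = random.Random(seed)
    a_post = a0 + len(data)
    b_post = b0 + sum(x * x for x in data) / 2.0
    draws = []
    for _ in range(n_draws):
        theta = b_post / rng.gammavariate(a_post, 1.0)   # inverse-gamma draw
        sigma = math.sqrt(theta)
        # Rayleigh sample via inverse transform: sigma * sqrt(-2 ln U)
        draws.append(sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random())))
    return draws

# Synthetic wind-speed observations from a Rayleigh with scale 5 m/s.
rng = random.Random(0)
data = [5.0 * math.sqrt(-2.0 * math.log(1.0 - rng.random())) for _ in range(200)]
draws = rayleigh_predictive(data)
mean_speed = sum(draws) / len(draws)
print(round(mean_speed, 2))  # near the Rayleigh mean 5 * sqrt(pi/2)
```

The spread of `draws` reflects both the sampling variability of the wind and the posterior uncertainty in the scale parameter, which is the aggregation idea described in the abstract.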

  17. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    ... system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models.

  18. A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space

    Directory of Open Access Journals (Sweden)

    Jinjun Li

    2011-01-01

    Full Text Available A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD is proposed to match interest points and estimate disparity map for stereo images. Local multi-model monogenic image features include local orientation and instantaneous phase of the gray monogenic signal, local color phase of the color monogenic signal, and local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which is the extension of analytic signal to gray level image using Dirac operator and Laplace equation, consists of local amplitude, local orientation, and instantaneous phase of 2D image signal. The color monogenic signal is the extension of monogenic signal to color image based on Clifford algebras. The local color phase can be estimated by computing geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experiment results on the synthetic and natural stereo images show the performance of the proposed approach.

  19. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  20. Predictive modeling in homogeneous catalysis: a tutorial

    NARCIS (Netherlands)

    Maldonado, A.G.; Rothenberg, G.

    2010-01-01

    Predictive modeling has become a practical research tool in homogeneous catalysis. It can help to pinpoint ‘good regions’ in the catalyst space, narrowing the search for the optimal catalyst for a given reaction. Just like any other new idea, in silico catalyst optimization is accepted by some

  1. Model predictive control of smart microgrids

    DEFF Research Database (Denmark)

    Hu, Jiefeng; Zhu, Jianguo; Guerrero, Josep M.

    2014-01-01

    ... required to realise high performance of distributed generations, and will realise innovative control techniques utilising model predictive control (MPC) to assist in coordinating the plethora of generation and load combinations, thus enabling the effective exploitation of the clean renewable energy sources ...

  2. Feedback model predictive control by randomized algorithms

    NARCIS (Netherlands)

    Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep

    2001-01-01

    In this paper we present a further development of an algorithm for stochastic disturbance rejection in model predictive control with input constraints based on randomized algorithms. The algorithm presented in our work can solve the problem of stochastic disturbance rejection approximately but with

  3. A Robustly Stabilizing Model Predictive Control Algorithm

    Science.gov (United States)

    Açıkmeşe, A. Behçet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  4. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchichal model predictive control (MPC) of distributed systems. A three level hierachical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...

  5. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations ...
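The FIR model underlying such a predictive controller expresses each predicted output as a convolution of past inputs with impulse-response coefficients; the coefficients below are an invented truncated response, not from the paper.

```python
# Finite impulse response prediction with zero initial conditions:
#   y[k] = sum_i h[i] * u[k - i]
# An MPC built on this form optimises the future inputs u subject to
# input and input-rate constraints.
def fir_predict(h, u):
    y = []
    for k in range(len(u)):
        y.append(sum(h[i] * u[k - i] for i in range(len(h)) if k - i >= 0))
    return y

h = [0.5, 0.3, 0.15, 0.05]   # hypothetical truncated impulse response
u = [1.0] * 8                # unit step input
y = fir_predict(h, u)
print(y[-1])                 # steady state approaches the coefficient sum
```

Because the FIR form is linear in the coefficients, plant-model mismatch shows up directly as an error in `h`, which is what the simulations in the abstract probe.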

  6. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  7. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Full Text Available Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees was entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow up] provided for examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). Cariogram is the model which identified the majority of examinees as medium risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases – the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.

  8. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  9. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Genetic models of homosexuality: generating testable predictions

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344

  11. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th-order autoregressive (AR) model and statistical regression analysis. It was determined that an AR-derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned in a way that enhances astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing industry safety guidelines for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
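    The AR parameter described above is easy to sketch. The following is a minimal illustration, not the study's actual pipeline: it fits a 5th-order AR model to a signal by least squares and computes the mean magnitude of the AR poles, the quantity reported to correlate with Rmax. The synthetic signal and all parameter choices are illustrative assumptions.

```python
import numpy as np

def ar_fit(x, order=5):
    """Least-squares fit of x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t]."""
    # Rows of X hold [x[t-1], ..., x[t-order]] for t = order .. len(x)-1.
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def mean_pole_magnitude(coeffs):
    """Mean magnitude of the roots of z^p - a1*z^(p-1) - ... - ap."""
    poles = np.roots(np.concatenate(([1.0], -coeffs)))
    return float(np.mean(np.abs(poles)))

# Illustrative SEMG-like signal: a stable AR(2) process driven by noise.
rng = np.random.default_rng(42)
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

m = mean_pole_magnitude(ar_fit(x, order=5))  # < 1 for a stable process
```

    In the study, this scalar would be tracked repetition by repetition and regressed against Rmax; here it simply summarizes one window of signal.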

  12. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to

  13. The synaptonemal complex of basal metazoan hydra: more similarities to vertebrate than invertebrate meiosis model organisms.

    Science.gov (United States)

    Fraune, Johanna; Wiesner, Miriam; Benavente, Ricardo

    2014-03-20

    The synaptonemal complex (SC) is an evolutionarily well-conserved structure that mediates chromosome synapsis during prophase of the first meiotic division. Although its structure is conserved, the characterized protein components in the current metazoan meiosis model systems (Drosophila melanogaster, Caenorhabditis elegans, and Mus musculus) show no sequence homology, calling into question a single evolutionary origin of the SC. However, our recent studies revealed the monophyletic origin of the mammalian SC protein components, many of which are ancient in Metazoa and were already present in the cnidarian Hydra. Remarkably, a comparison between different model systems disclosed a great similarity between the SC components of Hydra and mammals, while the proteins of the ecdysozoan systems (D. melanogaster and C. elegans) differ significantly. In this review, we introduce the basal-branching metazoan species Hydra as a potential novel invertebrate model system for meiosis research, particularly for the investigation of SC evolution, function and assembly. Available methods for SC research in Hydra are also summarized. Copyright © 2014. Published by Elsevier Ltd.

  14. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the papers of the 1990s, progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control actions in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus that can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.
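    The linguistic-rule idea can be illustrated with a toy Sugeno-style inference step. The membership functions, rule outputs and input variables below are invented for illustration; the paper's actual rule base is not given in the abstract.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def bankruptcy_risk(liquidity, profitability):
    """Two hypothetical linguistic rules combined by weighted average."""
    low_liq = tri(liquidity, -0.5, 0.0, 0.5)
    low_prof = tri(profitability, -0.5, 0.0, 0.5)
    # Rule 1: IF liquidity is low AND profitability is low THEN risk = 0.9
    w1 = min(low_liq, low_prof)
    # Rule 2: otherwise THEN risk = 0.2
    w2 = 1.0 - w1
    return (w1 * 0.9 + w2 * 0.2) / (w1 + w2)
```

    In a neuro-fuzzy model, the membership parameters and rule outputs would be tuned from data by a neural-network-style learning procedure rather than fixed by hand.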

  15. Similar Biophysical Abnormalities in Glomeruli and Podocytes from Two Distinct Models.

    Science.gov (United States)

    Embry, Addie E; Liu, Zhenan; Henderson, Joel M; Byfield, F Jefferson; Liu, Liping; Yoon, Joonho; Wu, Zhenzhen; Cruz, Katrina; Moradi, Sara; Gillombardo, C Barton; Hussain, Rihanna Z; Doelger, Richard; Stuve, Olaf; Chang, Audrey N; Janmey, Paul A; Bruggeman, Leslie A; Miller, R Tyler

    2018-03-23

    Background FSGS is a pattern of podocyte injury that leads to loss of glomerular function. Podocytes support other podocytes and glomerular capillary structure, oppose hemodynamic forces, form the slit diaphragm, and have mechanical properties that permit these functions. However, the biophysical characteristics of glomeruli and podocytes in disease remain unclear. Methods Using microindentation, atomic force microscopy, immunofluorescence microscopy, quantitative RT-PCR, and a three-dimensional collagen gel contraction assay, we studied the biophysical and structural properties of glomeruli and podocytes in chronic (Tg26 mice [HIV protein expression]) and acute (protamine administration [cytoskeletal rearrangement]) models of podocyte injury. Results Compared with wild-type glomeruli, Tg26 glomeruli became progressively more deformable with disease progression, despite increased collagen content. Tg26 podocytes had disordered cytoskeletons, markedly abnormal focal adhesions, and weaker adhesion; they failed to respond to mechanical signals and exerted minimal traction force in three-dimensional collagen gels. Protamine treatment had similar but milder effects on glomeruli and podocytes. Conclusions Reduced structural integrity of Tg26 podocytes causes increased deformability of glomerular capillaries and limits the ability of capillaries to counter hemodynamic force, possibly leading to further podocyte injury. Loss of normal podocyte mechanical integrity could injure neighboring podocytes due to the absence of normal biophysical signals required for podocyte maintenance. The severe defects in podocyte mechanical behavior in the Tg26 model may explain why Tg26 glomeruli soften progressively, despite increased collagen deposition, and may be the basis for the rapid course of glomerular diseases associated with severe podocyte injury. In milder injury (protamine), similar processes occur but over a longer time. 
Copyright © 2018 by the American Society of Nephrology.

  16. Intra prediction based on Markov process modeling of images.

    Science.gov (United States)

    Kamisli, Fatih

    2013-10-01

    In recent video coding standards, intraprediction of a block of pixels is performed by copying neighbor pixels of the block along an angular direction inside the block. Each block pixel is predicted from only one or few directionally aligned neighbor pixels of the block. Although this is a computationally efficient approach, it ignores potentially useful correlation of other neighbor pixels of the block. To use this correlation, a general linear prediction approach is proposed, where each block pixel is predicted using a weighted sum of all neighbor pixels of the block. The disadvantage of this approach is the increased complexity because of the large number of weights. In this paper, we propose an alternative approach to intraprediction, where we model image pixels with a Markov process. The Markov process model accounts for the ignored correlation in standard intraprediction methods, but uses few neighbor pixels and enables a computationally efficient recursive prediction algorithm. Compared with the general linear prediction approach that has a large number of independent weights, the Markov process modeling approach uses a much smaller number of independent parameters and thus offers significantly reduced memory or computation requirements, while achieving similar coding gains with offline computed parameters.
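    The recursive structure the abstract alludes to can be illustrated with a separable first-order Markov image model, in which each block pixel is predicted from its top, left, and top-left neighbours. The correlation coefficients below are assumed values for illustration, not parameters from the paper or from any codec standard.

```python
import numpy as np

def markov_intra_predict(shape, top, left, corner, rho_h=0.95, rho_v=0.95):
    """Recursively predict an h-by-w block from its causal neighbour pixels
    under a separable first-order Markov model."""
    h, w = shape
    p = np.empty((h + 1, w + 1))
    p[0, 0] = corner      # reconstructed top-left corner pixel
    p[0, 1:] = top        # reconstructed row above the block
    p[1:, 0] = left       # reconstructed column left of the block
    for i in range(1, h + 1):
        for j in range(1, w + 1):
            # Each pixel uses only three neighbours, yet information from all
            # block neighbours propagates through the recursion.
            p[i, j] = (rho_v * p[i - 1, j] + rho_h * p[i, j - 1]
                       - rho_h * rho_v * p[i - 1, j - 1])
    return p[1:, 1:]
```

    With the coefficients set to 1 the predictor reduces to the classic plane predictor (top + left - corner), and a flat region is reproduced exactly.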

  17. Models of Affective Decision Making: How Do Feelings Predict Choice?

    Science.gov (United States)

    Charpentier, Caroline J; De Neve, Jan-Emmanuel; Li, Xinyi; Roiser, Jonathan P; Sharot, Tali

    2016-06-01

    Intuitively, how you feel about potential outcomes will determine your decisions. Indeed, an implicit assumption in one of the most influential theories in psychology, prospect theory, is that feelings govern choice. Surprisingly, however, very little is known about the rules by which feelings are transformed into decisions. Here, we specified a computational model that used feelings to predict choices. We found that this model predicted choice better than existing value-based models, showing a unique contribution of feelings to decisions, over and above value. Similar to the value function in prospect theory, our feeling function showed diminished sensitivity to outcomes as value increased. However, loss aversion in choice was explained by an asymmetry in how feelings about losses and gains were weighted when making a decision, not by an asymmetry in the feelings themselves. The results provide new insights into how feelings are utilized to reach a decision. © The Author(s) 2016.
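    The paper's distinction can be sketched numerically: feelings show diminished sensitivity and are symmetric for gains and losses, while loss aversion enters as an asymmetric weight at the decision stage. The functional forms and parameter values below are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def feeling(x, alpha=0.7):
    """Hypothetical feeling function: diminished sensitivity (alpha < 1),
    with the same curvature for gains and losses (symmetric feelings)."""
    return np.sign(x) * np.abs(x) ** alpha

def choice_prob(gain, loss, w_loss=1.8, temp=1.0):
    """Probability of accepting a mixed gamble when feelings about losses
    are over-weighted (w_loss > 1) only when making the decision."""
    net = feeling(gain) + w_loss * feeling(-loss)
    return 1.0 / (1.0 + np.exp(-net / temp))
```

    With w_loss = 1 a symmetric 50/50 gamble is accepted half the time; raising w_loss produces loss-averse choices even though the feelings themselves remain symmetric.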

  18. Predictive Models for Carcinogenicity and Mutagenicity ...

    Science.gov (United States)

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the data base and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t

  19. Predictive models in cancer management: A guide for clinicians.

    Science.gov (United States)

    Kazem, Mohammed Ali

    2017-04-01

    Predictive tools in cancer management are used to predict different outcomes including survival probability or risk of recurrence. The uptake of these tools by clinicians involved in cancer management has not been as common as other clinical tools, which may be due to the complexity of some of these tools or a lack of understanding of how they can aid decision-making in particular clinical situations. The aim of this article is to improve clinicians' knowledge and understanding of predictive tools used in cancer management, including how they are built, how they can be applied to medical practice, and what their limitations may be. A literature review was conducted to investigate the role of predictive tools in cancer management. All predictive models share similar characteristics, but depending on the type of the tool its ability to predict an outcome will differ. Each type has its own pros and cons, and its generalisability will depend on the cohort used to build the tool. These factors will affect the clinician's decision whether to apply the model to their cohort or not. Before a model is used in clinical practice, it is important to appreciate how the model is constructed, what its use may add over and above traditional decision-making tools, and what problems or limitations may be associated with it. Understanding all the above is an important step for any clinician who wants to decide whether or not to use predictive tools in their practice. Copyright © 2016 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.

  20. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  1. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
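    To give a flavour of the receding-horizon idea the book develops, here is a toy sketch (not code from the book): a linear-quadratic instance whose inner solver is a backward Riccati recursion. A true NMPC controller would instead solve a nonlinear program over the horizon at every step; the plant, weights, and horizon below are assumed for illustration.

```python
import numpy as np

# Discretized double integrator (dt = 0.1), an assumed toy plant.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

def first_step_gain(N=10):
    """Backward Riccati recursion over an N-step horizon; returns the gain
    for the first move (the only move a receding-horizon controller applies)."""
    P = Q.copy()
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def receding_horizon(x0, steps=60, N=10):
    # For a time-invariant linear plant the re-planned first move is the same
    # every step, so the gain is computed once; NMPC re-solves at each step.
    K = first_step_gain(N)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        u = -K @ x             # apply only the first planned input
        x = A @ x + B @ u      # plant evolves; the horizon shifts forward
    return x

x_final = receding_horizon([1.0, 0.0])
```

    The closed loop drives the state toward the origin, illustrating the stability property that the book derives in the general nonlinear setting.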

  3. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...... values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as tool for modeling the FDM dimensional behavior in a wide range of deposition angles....

  4. Comparison of microarray-predicted closest genomes to sequencing for poliovirus vaccine strain similarity and influenza A phylogeny.

    Science.gov (United States)

    Maurer-Stroh, Sebastian; Lee, Charlie W H; Patel, Champa; Lucero, Marilla; Nohynek, Hanna; Sung, Wing-Kin; Murad, Chrysanti; Ma, Jianmin; Hibberd, Martin L; Wong, Christopher W; Simões, Eric A F

    2016-03-01

    We evaluate sequence data from the PathChip high-density hybridization array for epidemiological interpretation of detected pathogens. For influenza A, we derive similar relative outbreak clustering in phylogenetic trees from PathChip-derived compared to classical Sanger-derived sequences. For a positive polio detection, recent infection could be excluded based on vaccine strain similarity. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  6. Predictive modelling of evidence informed teaching

    OpenAIRE

    Zhang, Dell; Brown, C.

    2017-01-01

    In this paper, we analyse the questionnaire survey data collected from 79 English primary schools about the situation of evidence informed teaching, where the evidences could come from research journals or conferences. Specifically, we build a predictive model to see what external factors could help to close the gap between teachers’ belief and behaviour in evidence informed teaching, which is the first of its kind to our knowledge. The major challenge, from the data mining perspective, is th...

  7. A Predictive Model for Cognitive Radio

    Science.gov (United States)

    2006-09-14

    [Abstract garbled in the source record. The recoverable fragments indicate that Vadde et al. applied response surface methodology to model and predict system response, and reference works by K. K. Vadde, V. R. Syrotiuk, and D. C. Montgomery on factor interactions and service delivery in mobile ad hoc networks.]

  8. Lévy flights and self-similar exploratory behaviour of termite workers: beyond model fitting.

    Directory of Open Access Journals (Sweden)

    Octavio Miramontes

    Full Text Available Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties--including Lévy flights--in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.
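    Going beyond curve fitting still requires estimating the tail exponent itself; a standard continuous maximum-likelihood estimator for a power-law tail is shown below. This is a generic sketch, not the authors' analysis code, and the cutoff xmin and synthetic displacement data are illustrative assumptions.

```python
import numpy as np

def powerlaw_mle(displacements, xmin=1.0):
    """Continuous MLE for alpha in p(x) ~ x**(-alpha), x >= xmin:
    alpha_hat = 1 + n / sum(log(x / xmin))."""
    x = np.asarray(displacements, dtype=float)
    x = x[x >= xmin]
    return 1.0 + len(x) / float(np.sum(np.log(x / xmin)))

# Synthetic displacements with known tail exponent alpha = 2:
# numpy's pareto(a) is Lomax-distributed, so 1 + pareto(a) is classical
# Pareto with density a * x**-(a+1) on x >= 1.
rng = np.random.default_rng(1)
samples = 1.0 + rng.pareto(1.0, size=20000)
alpha_hat = powerlaw_mle(samples, xmin=1.0)
```

    In practice the estimate should be complemented by model selection against alternatives (e.g. lognormal or truncated power laws), which is the step the abstract argues goes "beyond model fitting" via anomalous-diffusion and structure-function analyses.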

  9. More Similar than Different? Exploring Cultural Models of Depression among Latino Immigrants in Florida

    Directory of Open Access Journals (Sweden)

    Dinorah (Dina) Martinez Tyson

    2011-01-01

    Full Text Available The Surgeon General's report, “Culture, Race, and Ethnicity: A Supplement to Mental Health,” points to the need for subgroup-specific mental health research that explores the cultural variation and heterogeneity of the Latino population. Guided by cognitive anthropological theories of culture, we utilized ethnographic interviewing techniques to explore cultural models of depression among foreign-born Mexican (n=30), Cuban (n=30), Colombian (n=30), and island-born Puerto Rican (n=30) participants, who represent the largest Latino groups in Florida. Results indicate that Colombian, Cuban, Mexican, and Puerto Rican immigrants showed strong intragroup consensus in their models of depression causality, symptoms, and treatment. We found more agreement than disagreement among all four groups regarding core descriptions of depression, which was largely unexpected but can potentially be explained by their common immigrant experiences. Findings expand our understanding of Latino subgroup similarities and differences in their conceptualization of depression and can be used to inform the adaptation of culturally relevant interventions in order to better serve Latino immigrant communities.

  10. Tectonic predictions with mantle convection models

    Science.gov (United States)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. 
Indeed, the initial conditions and the rheological parameters can be good enough

  11. Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors

    Science.gov (United States)

    Carrera, J.; Pool, M.

    2014-12-01

    Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult-to-assess source of uncertainty in long-term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased and often too optimistic, estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie in the stochastic approach itself, but in the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with the application to the Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields is similar to the geologically based models only in areas with a high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation is devoted to arguing that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on

  12. A new, accurate predictive model for incident hypertension.

    Science.gov (United States)

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-11-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures. The primary study population consisted of 1605 normotensive individuals aged 20-79 years with 5-year follow-up from the population-based Study of Health in Pomerania (SHIP). The initial set was randomly split into a training and a testing set. We used a probabilistic graphical model applying a Bayesian network to create a predictive model for incident hypertension and compared the predictive performance with the established Framingham risk score for hypertension. Finally, the model was validated in 2887 participants from INTER99, a Danish community-based intervention study. In the training set of SHIP data, the Bayesian network used a small subset of relevant baseline features including age, mean arterial pressure, rs16998073, serum glucose and urinary albumin concentrations. Furthermore, we detected relevant interactions between age and serum glucose as well as between rs16998073 and urinary albumin concentrations [area under the receiver operating characteristic curve (AUC) 0.76]. The model was confirmed in the SHIP validation set (AUC 0.78) and externally replicated in INTER99 (AUC 0.77). Compared to the established Framingham risk score for hypertension, the predictive performance of the new model was similar in the SHIP validation set and moderately better in INTER99. Data mining procedures identified a predictive model for incident hypertension, which included innovative and easy-to-measure variables. The findings promise great applicability in screening settings and clinical practice.
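    Comparisons like the one above rest on the AUC, which can be computed from scratch via the Mann-Whitney identity. This is a generic sketch of the metric, not the study's code; the labels and scores below are toy values.

```python
import numpy as np

def auc(y_true, scores):
    """AUC = P(score_pos > score_neg) + 0.5 * P(tie) over all pos/neg pairs
    (the Mann-Whitney form of the area under the ROC curve)."""
    y = np.asarray(y_true)
    s = np.asarray(scores, dtype=float)
    pos, neg = s[y == 1], s[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)
```

    The pairwise form is O(n_pos * n_neg); rank-based implementations give the same value and scale better to cohorts the size of SHIP or INTER99.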

  13. Comparison of hospital charge prediction models for colorectal cancer patients: neural network vs. decision tree models.

    Science.gov (United States)

    Lee, Seung-Mi; Kang, Jin-Oh; Suh, Yong-Moo

    2004-10-01

    Analysis and prediction of the care charges related to colorectal cancer in Korea are important for the allocation of medical resources and the establishment of medical policies, because both the incidence of and the hospital charges for colorectal cancer are rapidly increasing. Previous studies based on statistical analysis to predict hospital charges for patients did not show satisfactory results. Recently, data mining has emerged as a new technique to extract knowledge from huge and diverse medical data. Thus, we built models using data mining techniques to predict hospital charges for these patients. A total of 1,022 admission records, covering 154 variables, of 492 patients who had been treated from 1999 to 2002 at the Kyung Hee University Hospital were used to build the prediction models. We built an artificial neural network (ANN) model and a classification and regression tree (CART) model, and compared their prediction accuracy. Linear correlation coefficients were high in both models and the mean absolute errors were similar, but the ANN model showed better linear correlation than the CART model (0.813 vs. 0.713 for the hospital charge paid by insurance and 0.746 vs. 0.720 for the hospital charge paid by patients). We suggest that the ANN model has better performance in predicting charges of colorectal cancer patients.
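    The two evaluation criteria used in that comparison are easy to state precisely. The metric code below is generic, and the charge values are made-up toy numbers, not data from the study.

```python
import numpy as np

def pearson_r(y, p):
    """Linear correlation coefficient between observed and predicted values."""
    return float(np.corrcoef(y, p)[0, 1])

def mae(y, p):
    """Mean absolute error between observed and predicted values."""
    return float(np.mean(np.abs(np.asarray(y, float) - np.asarray(p, float))))

# Toy charges and two hypothetical models' predictions.
charges = np.array([120.0, 300.0, 80.0, 210.0])
pred_ann = np.array([130.0, 280.0, 90.0, 200.0])
pred_cart = np.array([100.0, 350.0, 40.0, 260.0])
```

    Reporting both metrics matters: two models can have similar MAE while differing in how faithfully their predictions track the ordering and spread of the true charges, which is what the correlation coefficient captures.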

  14. An international model to predict recurrent cardiovascular disease.

    Science.gov (United States)

    Wilson, Peter W F; D'Agostino, Ralph; Bhatt, Deepak L; Eagle, Kim; Pencina, Michael J; Smith, Sidney C; Alberts, Mark J; Dallongeville, Jean; Goto, Shinya; Hirsch, Alan T; Liau, Chiau-Suong; Ohman, E Magnus; Röther, Joachim; Reid, Christopher; Mas, Jean-Louis; Steg, Ph Gabriel

    2012-07-01

    Prediction models for cardiovascular events and cardiovascular death in patients with established cardiovascular disease are not generally available. Participants from the prospective REduction of Atherothrombosis for Continued Health (REACH) Registry provided a global outpatient population with known cardiovascular disease at entry. Cardiovascular prediction models were estimated from the 2-year follow-up data of 49,689 participants from around the world. A developmental prediction model was estimated from 33,419 randomly selected participants (2394 cardiovascular events with 1029 cardiovascular deaths) from the pool of 49,689. The number of vascular beds with clinical disease, diabetes, smoking, low body mass index, history of atrial fibrillation, cardiac failure, and history of cardiovascular event(s) <1 year before baseline examination increased risk of a subsequent cardiovascular event. Statin (hazard ratio 0.75; 95% confidence interval, 0.69-0.82) and acetylsalicylic acid therapy (hazard ratio 0.90; 95% confidence interval, 0.83-0.99) also were significantly associated with reduced risk of cardiovascular events. The prediction model was validated in the remaining 16,270 REACH subjects (1172 cardiovascular events, 494 cardiovascular deaths). Risk of cardiovascular death was similarly estimated with the same set of risk factors. Simple algorithms were developed for prediction of overall cardiovascular events and for cardiovascular death. This study establishes and validates a risk model to predict secondary cardiovascular events and cardiovascular death in outpatients with established atherothrombotic disease. Traditional risk factors, burden of disease, lack of treatment, and geographic location all are related to an increased risk of subsequent cardiovascular morbidity and cardiovascular mortality. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  16. Advective transport in heterogeneous aquifers: Are proxy models predictive?

    Science.gov (United States)

    Fiori, A.; Zarlenga, A.; Gotovac, H.; Jankovic, I.; Volpi, E.; Cvetkovic, V.; Dagan, G.

    2015-12-01

    We examine the prediction capability of two approximate models (Multi-Rate Mass Transfer (MRMT) and Continuous Time Random Walk (CTRW)) of non-Fickian transport by comparison with accurate 2-D and 3-D numerical simulations. Both nonlocal-in-time approaches circumvent the need to solve the flow and transport equations by using proxy models of advection, providing the breakthrough curves (BTC) at control planes at any x, depending on a vector of five unknown parameters. Although underlain by different mechanisms, the two models have an identical structure in the Laplace transform domain and have the Markovian property of independent transitions. We show that the numerical BTCs also enjoy the Markovian property. Following the procedure recommended in the literature, from a practitioner's perspective, we first calibrate the parameter values by a best fit to the numerical BTC at a control plane at x1, close to the injection plane, and subsequently use them for prediction at farther control planes for a few values of σ_Y² ≤ 8. Owing to their similar structure and Markovian property, the two methods perform equally well in matching the numerical BTC. The identified parameters are generally not unique, making their identification somewhat arbitrary. The inverse Gaussian model and the recently developed Multi-Indicator Model (MIM), which does not require any fitting as it relates the BTC to the permeability structure, are also discussed. The application of the proxy models for prediction requires carrying out transport field tests of large plumes for a long duration.

  17. A scientometric prediction of the discovery of the first potentially habitable planet with a mass similar to Earth.

    Directory of Open Access Journals (Sweden)

    Samuel Arbesman

    Full Text Available BACKGROUND: The search for a habitable extrasolar planet has long interested scientists, but only recently have the tools become available to search for such planets. In the past decades, the number of known extrasolar planets has ballooned into the hundreds, and with it, the expectation that the discovery of the first Earth-like extrasolar planet is not far off. METHODOLOGY/PRINCIPAL FINDINGS: Here, we develop a novel metric of habitability for discovered planets and use this to arrive at a prediction for when the first habitable planet will be discovered. Using a bootstrap analysis of currently discovered exoplanets, we predict the discovery of the first Earth-like planet to be announced in the first half of 2011, with the likeliest date being early May 2011. CONCLUSIONS/SIGNIFICANCE: Our predictions, using only the properties of previously discovered exoplanets, accord well with external estimates for the discovery of the first potentially habitable extrasolar planet and highlight the usefulness of predictive scientometric techniques to understand the pace of scientific discovery in many fields.

  18. A scientometric prediction of the discovery of the first potentially habitable planet with a mass similar to Earth.

    Science.gov (United States)

    Arbesman, Samuel; Laughlin, Gregory

    2010-10-04

    The search for a habitable extrasolar planet has long interested scientists, but only recently have the tools become available to search for such planets. In the past decades, the number of known extrasolar planets has ballooned into the hundreds, and with it, the expectation that the discovery of the first Earth-like extrasolar planet is not far off. Here, we develop a novel metric of habitability for discovered planets and use this to arrive at a prediction for when the first habitable planet will be discovered. Using a bootstrap analysis of currently discovered exoplanets, we predict the discovery of the first Earth-like planet to be announced in the first half of 2011, with the likeliest date being early May 2011. Our predictions, using only the properties of previously discovered exoplanets, accord well with external estimates for the discovery of the first potentially habitable extrasolar planet and highlight the usefulness of predictive scientometric techniques to understand the pace of scientific discovery in many fields.
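
    The bootstrap extrapolation described in both records can be sketched as: resample the discovery history, fit a trend of the best-yet habitability metric against time, and read off when the trend reaches the Earth-like threshold. The data points, the 0-1 metric scale, and the linear-trend assumption below are all invented for illustration; the authors' actual metric and resampling scheme differ in detail:

```python
import random

def fit_line(pts):
    """Least-squares slope and intercept of metric vs. year."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    b = sxy / sxx
    return b, my - b * mx

def crossing_year(pts, threshold=1.0):
    """Year at which the fitted habitability trend reaches the threshold."""
    b, a = fit_line(pts)
    return (threshold - a) / b

# Invented (year, best habitability metric so far) pairs on a 0..1 scale
data = [(2000, 0.2), (2003, 0.35), (2006, 0.5), (2008, 0.62), (2010, 0.75)]

random.seed(42)
estimates = []
for _ in range(1000):
    sample = [random.choice(data) for _ in data]  # resample with replacement
    if len({x for x, _ in sample}) > 1:           # need >= 2 distinct years
        estimates.append(crossing_year(sample))
estimates.sort()
# Median bootstrap estimate of the discovery year
print(int(estimates[len(estimates) // 2]))
```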

  19. eF-seek: prediction of the functional sites of proteins by searching for similar electrostatic potential and molecular surface shape

    OpenAIRE

    Kinoshita, Kengo; Murakami, Yoichi; Nakamura, Haruki

    2007-01-01

    We have developed a method to predict ligand-binding sites in a new protein structure by searching for similar binding sites in the Protein Data Bank (PDB). The similarities are measured according to the shapes of the molecular surfaces and their electrostatic potentials. A new web server, eF-seek, provides an interface to our search method. It simply requires a coordinate file in the PDB format, and generates a prediction result as a virtual complex structure, with the putative ligands in a ...

  20. An application of superpositions of two-state Markovian sources to the modelling of self-similar behaviour

    DEFF Research Database (Denmark)

    Andersen, Allan T.; Nielsen, Bo Friis

    1997-01-01

    We present a modelling framework and a fitting method for modelling second-order self-similar behaviour with the Markovian arrival process (MAP). The fitting method is based on fitting to the autocorrelation function of counts of a second-order self-similar process. It is shown that this fitting method seems to work well over the entire range of the Hurst (1951) parameter.
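
    For reference, the target the fitting procedure matches is the autocorrelation function of counts of an exactly second-order self-similar process with Hurst parameter H, r(k) = ((k+1)^(2H) - 2k^(2H) + (k-1)^(2H))/2. A small sketch of this standard formula (the choice of lags to print is arbitrary):

```python
def ss_acf(k, h):
    """Autocorrelation at lag k >= 1 of an exactly second-order
    self-similar process with Hurst parameter h:
        r(k) = ((k+1)^(2H) - 2*k^(2H) + (k-1)^(2H)) / 2
    A MAP superposition is fitted so that its autocorrelation of
    counts matches this target over a range of lags."""
    return ((k + 1) ** (2 * h) - 2 * k ** (2 * h) + (k - 1) ** (2 * h)) / 2

# Long-range dependence: for H > 0.5 the correlations stay positive and
# decay slowly; for H = 0.5 they vanish (uncorrelated counts).
print([round(ss_acf(k, 0.8), 4) for k in (1, 10, 100)])
```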

  1. Prediction of Chemical Function: Model Development and ...

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, and therefore impacts exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning based models for classifying chemicals in terms of their likely functional roles in products based on structure was developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
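
    A toy stand-in for the random-forest step (bootstrap resampling plus random feature subsets over decision stumps, combined by majority vote) can be sketched as follows. The two "descriptors", the labels, and all data are invented; a real pipeline would use full decision trees and far richer descriptor sets:

```python
import random

def train_stump(data, feat_idxs):
    """Best single-feature threshold split over the given feature subset."""
    best = None
    for j in feat_idxs:
        for x, _ in data:                  # candidate thresholds = data values
            t = x[j]
            for sign in (1, -1):
                correct = sum(
                    1 for x2, y in data
                    if (1 if sign * (x2[j] - t) >= 0 else 0) == y)
                if best is None or correct > best[0]:
                    best = (correct, j, t, sign)
    _, j, t, sign = best
    return lambda x: 1 if sign * (x[j] - t) >= 0 else 0

def train_forest(data, n_trees=25, seed=0):
    """Bagging + random feature subsets: a toy stand-in for random forests."""
    rng = random.Random(seed)
    n_feat = len(data[0][0])
    trees = []
    for _ in range(n_trees):
        boot = [rng.choice(data) for _ in data]            # bootstrap sample
        feats = rng.sample(range(n_feat), max(1, n_feat // 2))
        trees.append(train_stump(boot, feats))
    return lambda x: 1 if sum(t(x) for t in trees) * 2 >= len(trees) else 0

# Invented 2-descriptor vectors (think rescaled logP, molecular weight),
# labeled 1 = "plasticizer-like", 0 = "solvent-like" -- purely illustrative.
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.7, 0.8], 1),
        ([0.2, 0.1], 0), ([0.1, 0.3], 0), ([0.3, 0.2], 0)]
forest = train_forest(data)
print([forest([0.95, 0.9]), forest([0.05, 0.05])])
```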

  2. Gamma-Ray Pulsars: Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  3. Artificial Neural Network Model for Predicting Compressive Strength of Concrete

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the output results have absolute errors less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.

  4. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

    Applications from High Energy Physics scientific community are constantly growing and implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with intent to discover the risk factor of each product and compare it with its real history. We attempted to determine if the models reasonably maps reality for the applications under evaluation, and finally we concluded suggesting directions for further studies.

  5. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which are used to predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR), since they allow an estimation of regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peacekeeping operations.

  6. PREDICTION MODELS OF GRAIN YIELD AND CHARACTERIZATION

    Directory of Open Access Journals (Sweden)

    Narciso Ysac Avila Serrano

    2009-06-01

    Full Text Available With the objective to characterize the grain yield of five cowpea cultivars and to find linear regression models to predict it, a study was developed in La Paz, Baja California Sur, Mexico. A complete randomized block design was used. Simple and multivariate analyses of variance were carried out using the canonical variables to characterize the cultivars. The variables clusters per plant, pods per plant, pods per cluster, seed weight per plant, seed hectoliter weight, 100-seed weight, seed length, seed width, seed thickness, pod length, pod width, pod weight, seeds per pod, and seed weight per pod showed significant differences (P≤0.05) among cultivars. The Paceño and IT90K-277-2 cultivars showed the highest seed weight per plant. The linear regression models showed correlation coefficients ≥0.92. In these models, seed weight per plant, pods per cluster, pods per plant, clusters per plant, and pod length showed significant correlations (P≤0.05). In conclusion, the results showed that grain yield differs among cultivars and that, for its estimation, the prediction models showed highly dependable determination coefficients.

  7. The predictive performance and stability of six species distribution models.

    Directory of Open Access Journals (Sweden)

    Ren-Yan Duan

    Full Text Available Predicting species' potential geographical ranges with species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution areas using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared predictive performance by testing the consistency between observations and simulated distributions, and assessed stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p<0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, the other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of the SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
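
    The stability criteria used above (standard deviation, coefficient of variation, and a 99% confidence interval of AUC or Kappa over repeated trials) can be sketched with invented trial scores for two hypothetical models; the normal approximation for the interval is an assumption of this sketch:

```python
from statistics import mean, stdev

def stability(values, z99=2.576):
    """Mean, standard deviation, coefficient of variation, and a
    normal-approximation 99% confidence interval for the mean of
    repeated-trial scores (e.g. AUC or Kappa over many runs)."""
    m, s = mean(values), stdev(values)
    half = z99 * s / len(values) ** 0.5
    return {"mean": m, "sd": s, "cv": s / m, "ci99": (m - half, m + half)}

# Invented AUC values from repeated trials of two hypothetical SDMs
stable   = [0.90, 0.91, 0.89, 0.90, 0.92, 0.90, 0.91, 0.89]
unstable = [0.70, 0.85, 0.60, 0.78, 0.66, 0.88, 0.58, 0.75]
print(stability(stable)["cv"] < stability(unstable)["cv"])  # → True
```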

  8. Structure-semantics interplay in complex networks and its effects on the predictability of similarity in texts

    Science.gov (United States)

    Amancio, Diego R.; Oliveira, Osvaldo N., Jr.; Costa, Luciano da F.

    2012-09-01

    The classification of texts has become a major endeavor with so much electronic material available, for it is an essential task in several applications, including search engines and information retrieval. There are different ways to define similarity for grouping similar texts into clusters, as the concept of similarity may depend on the purpose of the task. For instance, in topic extraction similar texts mean those within the same semantic field, whereas in author recognition stylistic features should be considered. In this study, we introduce ways to classify texts employing concepts of complex networks, which may be able to capture syntactic, semantic and even pragmatic features. The interplay between various metrics of the complex networks is analyzed with three applications, namely identification of machine translation (MT) systems, evaluation of quality of machine translated texts and authorship recognition. We shall show that topological features of the networks representing texts can enhance the ability to identify MT systems in particular cases. For evaluating the quality of MT texts, on the other hand, high correlation was obtained with methods capable of capturing the semantics. This was expected because the golden standards used are themselves based on word co-occurrence. Notwithstanding, the Katz similarity, which involves semantic and structure in the comparison of texts, achieved the highest correlation with the NIST measurement, indicating that in some cases the combination of both approaches can improve the ability to quantify quality in MT. In authorship recognition, again the topological features were relevant in some contexts, though for the books and authors analyzed good results were obtained with semantic features as well. Because hybrid approaches encompassing semantic and topological features have not been extensively used, we believe that the methodology proposed here may be useful to enhance text classification considerably, as it

  9. A transport model for prediction of wildfire behavior

    Energy Technology Data Exchange (ETDEWEB)

    Linn, R.R.

    1997-07-01

    Wildfires are a threat to human life and property, yet they are an unavoidable part of nature. In the past people have tried to predict wildfire behavior through the use of point functional models but have been unsuccessful at adequately predicting the gross behavior of the broad spectrum of fires that occur in nature. The majority of previous models do not have self-determining propagation rates. The author uses a transport approach to represent this complicated problem and produce a model that utilizes a self-determining propagation rate. The transport approach allows one to represent a large number of environments including transition regions such as those with nonhomogeneous vegetation and terrain. Some of the most difficult features to treat are the imperfectly known boundary conditions and the fine scale structure that is unresolvable, such as the specific location of the fuel or the precise incoming winds. The author accounts for the microscopic details of a fire with macroscopic resolution by dividing quantities into mean and fluctuating parts similar to what is done in traditional turbulence modelling. The author develops a complicated model that includes the transport of multiple gas species, such as oxygen and volatile hydrocarbons, and tracks the depletion of various fuels and other stationary solids and liquids. From this model the author also forms a simplified local burning model with which he performs a number of simulations for the purpose of demonstrating the properties of a self-determining transport-based wildfire model.

  10. Predictive Models for Normal Fetal Cardiac Structures.

    Science.gov (United States)

    Krishnan, Anita; Pike, Jodi I; McCarter, Robert; Fulgium, Amanda L; Wilson, Emmanuel; Donofrio, Mary T; Sable, Craig A

    2016-12-01

    Clinicians rely on age- and size-specific measures of cardiac structures to diagnose cardiac disease. No universally accepted normative data exist for fetal cardiac structures, and most fetal cardiac centers do not use the same standards. The aim of this study was to derive predictive models for Z scores for 13 commonly evaluated fetal cardiac structures using a large heterogeneous population of fetuses without structural cardiac defects. The study used archived normal fetal echocardiograms in representative fetuses aged 12 to 39 weeks. Thirteen cardiac dimensions were remeasured by a blinded echocardiographer from digitally stored clips. Studies with inadequate imaging views were excluded. Regression models were developed to relate each dimension to estimated gestational age (EGA) by dates, biparietal diameter, femur length, and estimated fetal weight by the Hadlock formula. Dimension outcomes were transformed (e.g., using the logarithm or square root) as necessary to meet the normality assumption. Higher order terms, quadratic or cubic, were added as needed to improve model fit. Information criteria and adjusted R² values were used to guide final model selection. Each Z-score equation is based on measurements derived from 296 to 414 unique fetuses. EGA yielded the best predictive model for the majority of dimensions; adjusted R² values ranged from 0.72 to 0.893. However, each of the other highly correlated (r > 0.94) biometric parameters was an acceptable surrogate for EGA. In most cases, the best fitting model included squared and cubic terms to introduce curvilinearity. For each dimension, models based on EGA provided the best fit for determining normal measurements of fetal cardiac structures. Nevertheless, other biometric parameters, including femur length, biparietal diameter, and estimated fetal weight provided results that were nearly as good. Comprehensive Z-score results are available on the basis of highly predictive models derived from gestational
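
    The modeling recipe described (regress a dimension on EGA with higher-order terms, then convert residual spread into Z scores) can be sketched on synthetic data. The growth curve, noise level, and constant residual spread below are invented; the published models additionally transform outcomes and let the spread vary with EGA:

```python
import numpy as np

def zscore_model(ega_weeks, dim_mm, degree=3):
    """Fit dimension vs. gestational age with a cubic polynomial and
    return a function mapping (EGA, measured dimension) to a Z score,
    using the SD of the fit residuals as the normative spread."""
    coeffs = np.polyfit(ega_weeks, dim_mm, degree)
    predicted = np.polyval(coeffs, ega_weeks)
    resid_sd = np.std(dim_mm - predicted)
    return lambda ega, dim: (dim - np.polyval(coeffs, ega)) / resid_sd

# Invented measurements: a dimension growing curvilinearly with EGA
rng = np.random.default_rng(0)
ega = np.linspace(12, 39, 300)
dim = 0.02 * ega**2 + 0.3 * ega + rng.normal(0, 0.4, ega.size)
z = zscore_model(ega, dim)

# A measurement lying on the fitted curve scores (near) zero
print(abs(z(24.0, float(np.polyval(np.polyfit(ega, dim, 3), 24.0)))) < 1e-6)
```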

  11. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies downward of 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  12. An Anisotropic Hardening Model for Springback Prediction

    International Nuclear Information System (INIS)

    Zeng, Danielle; Xia, Z. Cedric

    2005-01-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when the material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  13. Estimated carotid-femoral pulse wave velocity has similar predictive value as measured carotid-femoral pulse wave velocity

    DEFF Research Database (Denmark)

    Greve, Sara V; Blicher, Marie K; Kruger, Ruan

    2016-01-01

    BACKGROUND: Carotid-femoral pulse wave velocity (cfPWV) adds significantly to traditional cardiovascular risk prediction, but is not widely available. Therefore, it would be helpful if cfPWV could be replaced by an estimated carotid-femoral pulse wave velocity (ePWV) using age, mean blood pressure, and previously published equations. The aim of this study was to investigate whether ePWV could predict cardiovascular events independently of traditional cardiovascular risk factors and/or cfPWV. METHOD: cfPWV was measured and ePWV was calculated in 2366 patients from four age groups... In healthy patients, ePWV and cfPWV (per SD) added independently to SCORE in prediction of the combined endpoint [hazard ratio (95% CI) = 1.38 (1.09-1.76) and hazard ratio (95% CI) = 1.18 (1.01-1.38)] and to FRS [hazard ratio (95% CI) = 1.33 (1.06-1.66) and hazard ratio (95% CI) = 1.16 (0.99-1.37)]. If healthy patients...

  14. Estimated carotid-femoral pulse wave velocity has similar predictive value as measured carotid-femoral pulse wave velocity

    DEFF Research Database (Denmark)

    Olsen, Michael; Greve, Sara; Blicher, Marie

    2016-01-01

    OBJECTIVE: Carotid-femoral pulse wave velocity (cfPWV) adds significantly to traditional cardiovascular (CV) risk prediction, but is not widely available. Therefore, it would be helpful if cfPWV could be replaced by an estimated carotid-femoral pulse wave velocity (ePWV) using age, mean blood pressure, and previously published equations. The aim of this study was to investigate whether ePWV could predict CV events independently of traditional cardiovascular risk factors and/or cfPWV. DESIGN AND METHOD: cfPWV was measured and ePWV was calculated in 2366 apparently healthy subjects from four age groups... ePWV and cfPWV (per standard deviation) added independently to SCORE in prediction of CEP (HR [95% CI] = 1.38 [1.09-1.76] and HR [95% CI] = 1.18 [1.01-1.38]) and to FRS (HR [95% CI] = 1.33 [1.06-1.66] and HR [95% CI] = 1.16 [0.99-1.37]). If healthy subjects with ePWV and/or cfPWV ≥ 10 m/s were...

  15. Predicting third molar surgery operative time: a validated model.

    Science.gov (United States)

    Susarla, Srinivas M; Dodson, Thomas B

    2013-01-01

    The purpose of the present study was to develop and validate a statistical model to predict third molar (M3) operative time. This was a prospective cohort study consisting of a sample of subjects presenting for M3 removal. The demographic, anatomic, and operative variables were recorded for each subject. Using an index sample of randomly selected subjects, a multiple linear regression model was generated to predict the operating time. A nonoverlapping group of randomly selected subjects (validation sample) was used to assess model accuracy. P≤.05 was considered significant. The sample was composed of 150 subjects (n) who had 450 (k) M3s removed. The index sample (n=100 subjects, k=313 M3s extracted) had a mean age of 25.4±10.0 years. The mean extraction time was 6.4±7.0 minutes. The multiple linear regression model included M3 location, Winter's classification, tooth morphology, number of teeth extracted, procedure type, and surgical experience (R2=0.58). No statistically significant differences were seen between the index sample and the validation sample (n=50, k=137) for any of the study variables. Compared with the index model, the β-coefficients of the validation model were similar in direction and magnitude for most variables. Compared with the observed extraction time for all teeth in the sample, the predicted extraction time was not significantly different (P=.16). Fair agreement was seen between the β-coefficients for our multiple models in the index and validation populations, with no significant difference in the predicted and observed operating times. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
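
    The index/validation design above (fit a multiple linear regression on one random subsample, then check its error on a nonoverlapping one) can be sketched on synthetic data. The predictor names, true coefficients, noise level, and sample split below are all invented for illustration:

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares via lstsq, with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Invented predictors: [depth of impaction, number of teeth, experience]
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(150, 3))
minutes = 4 + 6 * X[:, 0] + 3 * X[:, 1] - 2 * X[:, 2] + rng.normal(0, 0.5, 150)

index, validation = slice(0, 100), slice(100, 150)  # index vs. validation sample
beta = fit_linear(X[index], minutes[index])
resid = minutes[validation] - predict(beta, X[validation])
print(float(np.mean(np.abs(resid))) < 1.5)  # small error on unseen cases
```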

  16. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry have already accumulated more than 15 years of history. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditional for desktop applications. The web platforms integrate data storage, cheminformatics, and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of a GUI or programmatic access, and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases poses new challenges for the management and analysis of large data sets. Web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing, and secure access.

  17. [Endometrial cancer: Predictive models and clinical impact].

    Science.gov (United States)

    Bendifallah, Sofiane; Ballester, Marcos; Daraï, Emile

    2017-12-01

In France, in 2015, endometrial cancer (EC) was the most frequent gynecological cancer in terms of incidence and the fourth leading cause of cancer in women. About 8151 new cases and nearly 2179 deaths were reported. Treatments (surgery, external radiotherapy, brachytherapy and chemotherapy) are currently delivered on the basis of an estimate of the recurrence risk, of lymph node metastasis, or of survival probability. This risk is determined from prognostic factors (clinical, histological, imaging, biological) taken alone or grouped together into classification systems, which are currently insufficient to account for the evolutionary and prognostic heterogeneity of endometrial cancer. For endometrial cancer, the concept of mathematical modeling and its application to prediction have developed in recent years. These biomathematical tools have opened a new era of care oriented towards the promotion of targeted therapies and personalized treatments. Many predictive models have been published to estimate the risk of recurrence and lymph node metastasis, but only a tiny fraction of them are sufficiently relevant and of clinical utility. The optimization tracks are multiple and varied, suggesting that these mathematical models could find a place in clinical practice in the near future. The development of high-throughput genomics is likely to offer a more detailed molecular characterization of the disease and its heterogeneity. Copyright © 2017 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  18. Predictions of models for environmental radiological assessment

    International Nuclear Information System (INIS)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando

    2011-01-01

In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose, and the risk for human beings. Although it is recognized that site-specific local data are important to improve the quality of dose assessment results, in practice obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used, and general parameters. The various models available utilize different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for 137Cs and 60Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common comparison basis. The results of the intercomparison exercise are presented briefly. (author)

  19. Molecular Basis of LFER Modelling of Electronic Substituent Effect Using Fragment Quantum Self-Similarity Measures

    Czech Academy of Sciences Publication Activity Database

    Girónes, X.; Carbó-Dorca, R.; Ponec, Robert

    2003-01-01

    Roč. 43, č. 6 (2003), s. 2033-2039 ISSN 0095-2338 R&D Projects: GA MŠk OC D9.20 Institutional research plan: CEZ:AV0Z4072921 Keywords : hammett sigma constants * molecular similarity * fragment self-similarity measures Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.078, year: 2003

  20. Do similarities or differences between CEO leadership and organizational culture have a more positive effect on firm performance? A test of competing predictions.

    Science.gov (United States)

    Hartnell, Chad A; Kinicki, Angelo J; Lambert, Lisa Schurer; Fugate, Mel; Doyle Corner, Patricia

    2016-06-01

    This study examines the nature of the interaction between CEO leadership and organizational culture using 2 common metathemes (task and relationship) in leadership and culture research. Two perspectives, similarity and dissimilarity, offer competing predictions about the fit, or interaction, between leadership and culture and its predicted effect on firm performance. Predictions for the similarity perspective draw upon attribution theory and social identity theory of leadership, whereas predictions for the dissimilarity perspective are developed based upon insights from leadership contingency theories and the notion of substitutability. Hierarchical regression results from 114 CEOs and 324 top management team (TMT) members failed to support the similarity hypotheses but revealed broad support for the dissimilarity predictions. Findings suggest that culture can serve as a substitute for leadership when leadership behaviors are redundant with cultural values (i.e., they both share a task- or relationship-oriented focus). Findings also support leadership contingency theories indicating that CEO leadership is effective when it provides psychological and motivational resources lacking in the organization's culture. We discuss theoretical and practical implications and delineate directions for future research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. In Silico Modeling of Gastrointestinal Drug Absorption: Predictive Performance of Three Physiologically Based Absorption Models.

    Science.gov (United States)

    Sjögren, Erik; Thörn, Helena; Tannergren, Christer

    2016-06-06

    Gastrointestinal (GI) drug absorption is a complex process determined by formulation, physicochemical and biopharmaceutical factors, and GI physiology. Physiologically based in silico absorption models have emerged as a widely used and promising supplement to traditional in vitro assays and preclinical in vivo studies. However, there remains a lack of comparative studies between different models. The aim of this study was to explore the strengths and limitations of the in silico absorption models Simcyp 13.1, GastroPlus 8.0, and GI-Sim 4.1, with respect to their performance in predicting human intestinal drug absorption. This was achieved by adopting an a priori modeling approach and using well-defined input data for 12 drugs associated with incomplete GI absorption and related challenges in predicting the extent of absorption. This approach better mimics the real situation during formulation development where predictive in silico models would be beneficial. Plasma concentration-time profiles for 44 oral drug administrations were calculated by convolution of model-predicted absorption-time profiles and reported pharmacokinetic parameters. Model performance was evaluated by comparing the predicted plasma concentration-time profiles, Cmax, tmax, and exposure (AUC) with observations from clinical studies. The overall prediction accuracies for AUC, given as the absolute average fold error (AAFE) values, were 2.2, 1.6, and 1.3 for Simcyp, GastroPlus, and GI-Sim, respectively. The corresponding AAFE values for Cmax were 2.2, 1.6, and 1.3, respectively, and those for tmax were 1.7, 1.5, and 1.4, respectively. Simcyp was associated with underprediction of AUC and Cmax; the accuracy decreased with decreasing predicted fabs. A tendency for underprediction was also observed for GastroPlus, but there was no correlation with predicted fabs. There were no obvious trends for over- or underprediction for GI-Sim. 
The models performed similarly in capturing dependencies on dose and
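The absolute average fold error (AAFE) used above to score AUC, Cmax, and tmax predictions has a simple closed form: 10 raised to the mean absolute log10 fold error, so a perfect prediction gives 1 and a uniform 2-fold error gives 2. A minimal sketch with made-up values:

```python
import math

def aafe(predicted, observed):
    """Absolute average fold error: 10 ** mean(|log10(pred/obs)|)."""
    logs = [abs(math.log10(p / o)) for p, o in zip(predicted, observed)]
    return 10 ** (sum(logs) / len(logs))

# Illustrative values only: a uniform 2-fold over-prediction yields AAFE = 2,
# regardless of direction (over- or underprediction count the same).
print(round(aafe([2.0, 2.0, 2.0], [1.0, 1.0, 1.0]), 3))  # → 2.0
print(round(aafe([1.0, 1.0, 1.0], [1.0, 1.0, 1.0]), 3))  # → 1.0
```

Because the log makes the metric symmetric in fold terms, an AAFE of 1.3 (GI-Sim's AUC score) means predictions were within about 30% of observations on average.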

  2. Assessment of factors influencing finite element vertebral model predictions.

    Science.gov (United States)

    Jones, Alison C; Wilcox, Ruth K

    2007-12-01

    This study aimed to establish model construction and configuration procedures for future vertebral finite element analysis by studying convergence, sensitivity, and accuracy behaviors of semiautomatically generated models and comparing the results with manually generated models. During a previous study, six porcine vertebral bodies were imaged using a microcomputed tomography scanner and tested in axial compression to establish their stiffness and failure strength. Finite element models were built using a manual meshing method. In this study, the experimental agreement of those models was compared with that of semiautomatically generated models of the same six vertebrae. Both manually and semiautomatically generated models were assigned gray-scale-based, element-specific material properties. The convergence of the semiautomatically generated models was analyzed for the complete models along with material property and architecture control cases. A sensitivity study was also undertaken to test the reaction of the models to changes in material property values, architecture, and boundary conditions. In control cases, the element-specific material properties reduce the convergence of the models in comparison to homogeneous models. However, the full vertebral models showed strong convergence characteristics. The sensitivity study revealed a significant reaction to changes in architecture, boundary conditions, and load position, while the sensitivity to changes in material property values was proportional. The semiautomatically generated models produced stiffness and strength predictions of similar accuracy to the manually generated models with much shorter image segmentation and meshing times. Semiautomatic methods can provide a more rapid alternative to manual mesh generation techniques and produce vertebral models of similar accuracy. 
The representation of the boundary conditions, load position, and surrounding environment is crucial to the accurate prediction of the

  3. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudorange and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs, respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An over-determination of IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations
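The frequency dependence described above (a group delay proportional to TEC divided by frequency squared) is what lets dual-frequency receivers estimate slant TEC from the difference of the two pseudoranges. A sketch using the standard first-order ionospheric term (the 40.3 constant) with illustrative pseudorange values:

```python
# GPS L1/L2 carrier frequencies in Hz
F1, F2 = 1575.42e6, 1227.60e6
K = 40.3  # first-order ionospheric constant, m*Hz^2 per (electron/m^2)

def stec_from_pseudoranges(p1, p2):
    """Slant TEC (electrons/m^2) from the geometry-free combination P2 - P1.
    The ionospheric group delay at frequency f is K * TEC / f**2, so
    differencing the two frequencies cancels geometry and isolates TEC."""
    return (p2 - p1) / (K * (1.0 / F2**2 - 1.0 / F1**2))

# Illustrative numbers: a 10 m inter-frequency pseudorange difference
# corresponds to roughly 95 TECU (1 TECU = 1e16 electrons/m^2).
stec = stec_from_pseudoranges(20_000_000.0, 20_000_010.0)
print(round(stec / 1e16, 1), "TECU")  # → 95.2 TECU
```

In practice inter-frequency hardware biases must be calibrated out before such values are usable, which is part of why map accuracies are quoted at a few TECU.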

  4. Mathematical models for indoor radon prediction

    International Nuclear Information System (INIS)

    Malanca, A.; Pessina, V.; Dallara, G.

    1995-01-01

It is known that the indoor radon (Rn) concentration can be predicted by means of mathematical models. The simplest model relies on two variables only: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model, several environmental parameters are combined into a complex equation; besides, a correlation between the ventilation rate and the Rn entry rate from the soil is admitted. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentrations were made inside two rooms placed on the second floor of a building block. One of the rooms had a single-glazed window whereas the other room had a double-pane window. During three different experimental protocols, the mean Rn concentration was always higher in the room with the double-glazed window. That behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model
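The two-variable "simplest model" reduces to a steady-state mass balance: concentration equals the Rn entry rate divided by volume times air exchange rate. A sketch with assumed illustrative numbers (not the paper's measurements), showing why the better-sealed room reads higher:

```python
def indoor_radon_steady_state(entry_rate_bq_per_h, volume_m3, ach):
    """Steady-state indoor Rn concentration (Bq/m^3) from the two-variable
    model: source strength per unit volume divided by the air exchange rate.
    (Radioactive decay, ~0.00755/h, is negligible next to typical ventilation.)"""
    return entry_rate_bq_per_h / (volume_m3 * ach)

# Illustrative values: halving the air exchange rate (a tighter, double-glazed
# window versus a leakier single-glazed one) doubles the predicted concentration.
single_glazed = indoor_radon_steady_state(2000.0, 50.0, 1.0)  # 40 Bq/m^3
double_glazed = indoor_radon_steady_state(2000.0, 50.0, 0.5)  # 80 Bq/m^3
print(single_glazed, double_glazed)
```

The LBL model adds a coupling between ventilation and soil entry rate, which is why it, rather than this mass balance, is needed for the ground-floor room with the grounding well.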

  5. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time......). Five technical and economic aspects are taken into account to schedule tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality...... recovery on the track quality after tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is applied for a time period of two to four years in the proposed maintenance model. The total cost can be reduced by up to 50...

  6. An Operational Model for the Prediction of Jet Blast

    Science.gov (United States)

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was : developed based upon three modules including a jet exhaust model, jet centerline decay : model and aircraft motion model. The final analysis was compared with d...

  7. Ethnic differences in the effects of media on body image: the effects of priming with ethnically different or similar models.

    Science.gov (United States)

    Bruns, Gina L; Carter, Michele M

    2015-04-01

Media exposure has been positively correlated with body dissatisfaction. While body image concerns are common, being African American has been found to be a protective factor in the development of body dissatisfaction. Participants viewed ten advertisements showing one of the following: 1) ethnically-similar thin models; 2) ethnically-different thin models; 3) ethnically-similar plus-sized models; or 4) ethnically-diverse plus-sized models. Following exposure, body image was measured. African American women had less body dissatisfaction than Caucasian women. Ethnically-similar thin-model conditions did not elicit greater body dissatisfaction scores than ethnically-different thin or plus-sized models, nor did the ethnicity of the model impact ratings of body dissatisfaction for women of either race. There were no differences among the African American women exposed to plus-sized versus thin models. Among Caucasian women, exposure to plus-sized models resulted in greater body dissatisfaction than exposure to thin models. Results support existing literature that African American women experience less body dissatisfaction than Caucasian women, even following exposure to an ethnically-similar thin model. Additionally, women exposed to plus-sized model conditions experienced greater body dissatisfaction than those shown thin models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Testing the Effects of Team Processes on Team Member Schema Similarity and Team Performance: Examination of the Team Member Schema Similarity Model

    National Research Council Canada - National Science Library

    Rentsch, Joan

    1998-01-01

    .... Team membership influences and team interaction processes were examined as antecedents to team member teamwork schema similarity, which was conceptualized as team member teamwork schema agreement and accuracy...

  9. A Model for Comparative Analysis of the Similarity between Android and iOS Operating Systems

    Directory of Open Access Journals (Sweden)

    Lixandroiu R.

    2014-12-01

Full Text Available Due to the recent expansion of mobile devices, in this article we undertake an analysis of two of the most widely used mobile OSs. The analysis is based on calculating Jaccard's similarity coefficient. To complete the analysis, we developed a hierarchy of factors for evaluating OSs. The analysis has shown that the two OSs are similar in terms of functionality, but there are a number of factors that, when weighted, make a difference.
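Jaccard's similarity coefficient, on which the comparison rests, is the ratio of shared features to total distinct features. A minimal sketch with hypothetical feature sets (not the article's weighted factor hierarchy):

```python
def jaccard(a, b):
    """Jaccard similarity coefficient: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical feature sets for two mobile OSs (illustrative only):
android = {"multitasking", "notifications", "widgets", "app_store", "voice_assistant"}
ios = {"multitasking", "notifications", "app_store", "voice_assistant", "facetime"}
print(jaccard(android, ios))  # 4 shared of 6 distinct features → 0.666...
```

A weighted variant, as the article implies, would replace the raw set cardinalities with sums of per-feature importance weights, which is where the two OSs start to diverge.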

  10. Similar circuits but different connectivity patterns between the cerebellum, basal ganglia, and supplementary motor area in early Parkinson's disease patients and controls during predictive motor timing.

    Science.gov (United States)

    Husárová, Ivica; Mikl, Michal; Lungu, Ovidiu V; Mareček, Radek; Vaníček, Jiří; Bareš, Martin

    2013-10-01

The cerebellum, basal ganglia (BG), and other cortical regions, such as the supplementary motor area (SMA), have emerged as important structures dealing with various aspects of timing, yet the modulation of functional connectivity between them during motor timing tasks remains unexplored. We used dynamic causal modeling to investigate the differences in effective connectivity (EC) between these regions and its modulation by behavioral outcome during a motor timing prediction task in a group of 16 patients with early Parkinson's disease (PD) and 17 healthy controls. Behavioral events (hits and errors) constituted the driving input connected to the cerebellum, and the modulation in connectivity was assessed relative to the hit condition (successful interception of the target). The driving input elicited a response in the target area, while the modulatory input changed the specific connection strength. The neuroimaging data revealed a similar structure of intrinsic connectivity in both groups, with unidirectional connections from the cerebellum to both sides of the BG, from the BG to the SMA, and then from the SMA to the cerebellum. However, the type of intrinsic connection differed between the two groups. In the PD group, the connection between the SMA and cerebellum was inhibitory, in comparison to the HC group, where the connection was excitatory. Furthermore, the modulation of connectivity by performance in the task differed between the two groups, with decreased connectivity between the cerebellum and left BG and SMA and a more pronounced symmetry of these connections in controls. At the same time, there was increased EC between the cerebellum and both sides of the BG, with more pronounced asymmetry (stronger connection with the left BG) in patients. In addition, in the PD group the modulatory input strengthened the inhibitory connectivity between the SMA and the cerebellum, while in the HC group the excitatory connection was slightly strengthened.
Our findings indicate that although early PD

  11. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space...... model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...
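The criterion at the heart of a prediction-error method is the sum of squared one-step Kalman innovations evaluated over the data. A scalar sketch under assumed noise variances (a simplified stand-in for the paper's continuous-discrete formulation):

```python
import numpy as np

def pem_criterion(theta, y):
    """One-step prediction-error criterion for the scalar state-space model
        x[k+1] = a*x[k] + w[k],   y[k] = x[k] + v[k]
    with a = theta and noise variances fixed for illustration. The criterion
    is the sum of squared Kalman-filter innovations, which a prediction-error
    method minimizes over theta."""
    a, q, r = theta, 0.1, 0.1
    x, p, cost = 0.0, 1.0, 0.0
    for yk in y:
        e = yk - x            # innovation: measurement minus one-step prediction
        cost += e * e
        s = p + r             # innovation variance
        k = p / s             # Kalman gain
        x = a * (x + k * e)   # measurement update, then time update of the state
        p = a * a * (p - k * p) + q
    return cost

# Simulate data from a = 0.8; the criterion prefers the true parameter.
rng = np.random.default_rng(1)
x, ys = 0.0, []
for _ in range(500):
    x = 0.8 * x + rng.normal(0, 0.1 ** 0.5)
    ys.append(x + rng.normal(0, 0.1 ** 0.5))
print(pem_criterion(0.8, ys) < pem_criterion(0.0, ys))
```

Tailoring the criterion to MPC, as the abstract argues, amounts to weighting multi-step prediction errors the way the controller's objective does, rather than only the one-step errors shown here.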

  12. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
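The measurement-based aggregation step (floor-corrected municipality means weighted by population size) reduces to a population-weighted average. A sketch with invented municipality values, not the Swiss data:

```python
# Hypothetical municipality data (illustrative values only):
# (mean measured radon in Bq/m^3 after floor correction, population)
municipalities = [(60.0, 120_000), (95.0, 40_000), (140.0, 8_000)]

# Population-weighted mean exposure, as in the measurement-based approach:
# small high-radon municipalities contribute little to the national mean.
total_pop = sum(pop for _, pop in municipalities)
weighted_mean = sum(rn * pop for rn, pop in municipalities) / total_pop
print(round(weighted_mean, 1))  # → 72.1
```

The model-based approach replaces each municipality mean with per-household predictions from covariates (tectonic unit, building age, floor, etc.) before averaging, which is why it can also be evaluated at individual sites.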

  13. PROPRIEDADES TERMOFÍSICAS DE SOLUÇÕES MODELO SIMILARES A CREME DE LEITE THERMOPHYSICAL PROPERTIES OF MODEL SOLUTIONS SIMILAR TO CREAM

    Directory of Open Access Journals (Sweden)

    Silvia Cristina Sobottka Rolim de MOURA

    2001-08-01

Full Text Available The demand for UHT cream has increased significantly. Several companies have diversified and increased their production, since increasingly demanding consumers seek creams with a wide range of fat contents. The objective of the present work was to determine the density, apparent viscosity, and thermal diffusivity of model solutions similar to cream over the temperature range of 30 to 70°C, studying the influence of fat content and temperature on the physical properties of the products. The statistical design applied was a 3x5 factorial plan, with fat content fixed at 15%, 25%, and 35% and temperature at 30°C, 40°C, 50°C, 60°C, and 70°C, respectively (STATISTICA 6.0). The carbohydrate and protein contents were kept constant, both at 3%. Density was determined by the fluid displacement method in a pycnometer; thermal diffusivity was based on the Dickerson method; and apparent viscosity was determined in a Rheotest 2.1 rheometer. The results for each property were analyzed by the response surface method. The data obtained were statistically significant, indicating that the model reliably represented the variation of these properties with fat content (%) and temperature (°C).

  14. Predictive modeling: potential application in prevention services.

    Science.gov (United States)

    Wilson, Moira L; Tumen, Sarah; Ota, Rissa; Simmers, Anthony G

    2015-05-01

    In 2012, the New Zealand Government announced a proposal to introduce predictive risk models (PRMs) to help professionals identify and assess children at risk of abuse or neglect as part of a preventive early intervention strategy, subject to further feasibility study and trialing. The purpose of this study is to examine technical feasibility and predictive validity of the proposal, focusing on a PRM that would draw on population-wide linked administrative data to identify newborn children who are at high priority for intensive preventive services. Data analysis was conducted in 2013 based on data collected in 2000-2012. A PRM was developed using data for children born in 2010 and externally validated for children born in 2007, examining outcomes to age 5 years. Performance of the PRM in predicting administratively recorded substantiations of maltreatment was good compared to the performance of other tools reviewed in the literature, both overall, and for indigenous Māori children. Some, but not all, of the children who go on to have recorded substantiations of maltreatment could be identified early using PRMs. PRMs should be considered as a potential complement to, rather than a replacement for, professional judgment. Trials are needed to establish whether risks can be mitigated and PRMs can make a positive contribution to frontline practice, engagement in preventive services, and outcomes for children. Deciding whether to proceed to trial requires balancing a range of considerations, including ethical and privacy risks and the risk of compounding surveillance bias. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  15. Comparison of the Nosocomial Pneumonia Mortality Prediction (NPMP) model with standard mortality prediction tools.

    Science.gov (United States)

    Srinivasan, M; Shetty, N; Gadekari, S; Thunga, G; Rao, K; Kunhikatta, V

    2017-07-01

Severity or mortality prediction of nosocomial pneumonia could aid in the effective triage of patients and assist physicians. To compare various severity assessment scoring systems for predicting intensive care unit (ICU) mortality in nosocomial pneumonia patients, a prospective cohort study was conducted in a tertiary care university-affiliated hospital in Manipal, India. One hundred patients with nosocomial pneumonia, admitted to the ICUs, who developed pneumonia after >48 h of admission, were included. The Nosocomial Pneumonia Mortality Prediction (NPMP) model, developed in our hospital, was compared with the Acute Physiology and Chronic Health Evaluation II (APACHE II), Mortality Probability Model II (MPM72-II), Simplified Acute Physiology Score II (SAPS II), Multiple Organ Dysfunction Score (MODS), Sequential Organ Failure Assessment (SOFA), Clinical Pulmonary Infection Score (CPIS), and Ventilator-Associated Pneumonia Predisposition, Insult, Response, Organ dysfunction (VAP-PIRO) scores. Data and clinical variables were collected on the day of pneumonia diagnosis. The outcome for the study was ICU mortality. The sensitivity and specificity of the various scoring systems were analysed by plotting receiver operating characteristic (ROC) curves and computing the area under the curve for each of the mortality prediction tools. NPMP, APACHE II, SAPS II, MPM72-II, SOFA, and VAP-PIRO were found to have similar and acceptable discrimination power as assessed by the area under the ROC curve. The AUC values for the above scores ranged from 0.735 to 0.762. CPIS and MODS showed the least discrimination. NPMP is a specific tool to predict mortality in nosocomial pneumonia and is comparable to other standard scores. Copyright © 2017 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
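The ROC/AUC comparison described above can be reproduced with the rank-statistic form of the AUC, which avoids explicitly tracing the curve. The scores and outcomes below are toy values for illustration, not study data:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive (ICU death) outscores
    a randomly chosen negative (survivor), counting ties as 0.5."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy severity scores (hypothetical): 1 = ICU death, 0 = survival.
labels  = [1, 1, 1, 0, 0, 0, 0]
score_a = [30, 25, 18, 20, 12, 10, 8]   # a discriminating score → AUC 11/12
score_b = [15, 14, 15, 15, 14, 15, 14]  # nearly uninformative    → AUC 7/12
print(roc_auc(score_a, labels), roc_auc(score_b, labels))
```

Comparing tools this way, an AUC near 0.5 (like `score_b`) signals little discrimination, which is the pattern the study reports for CPIS and MODS relative to the other scores.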

  16. Two adaptive radiative transfer schemes for numerical weather prediction models

    Directory of Open Access Journals (Sweden)

    V. Venema

    2007-11-01

    Full Text Available Radiative transfer calculations in atmospheric models are computationally expensive, even if based on simplifications such as the δ-two-stream approximation. In most weather prediction models these parameterisation schemes are therefore called infrequently, accepting additional model error due to the persistence assumption between calls. This paper presents two so-called adaptive parameterisation schemes for radiative transfer in a limited area model: A perturbation scheme that exploits temporal correlations and a local-search scheme that mainly takes advantage of spatial correlations. Utilising these correlations and with similar computational resources, the schemes are able to predict the surface net radiative fluxes more accurately than a scheme based on the persistence assumption. An important property of these adaptive schemes is that their accuracy does not decrease much in case of strong reductions in the number of calls to the δ-two-stream scheme. It is hypothesised that the core idea can also be employed in parameterisation schemes for other processes and in other dynamical models.

  17. Heuristic Modeling for TRMM Lifetime Predictions

    Science.gov (United States)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful to missions such as the Tropical Rainfall Measurement Mission, scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
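The look-up-table-plus-engine-model idea can be sketched in a few lines: walk a monthly solar flux forecast, read the maneuver rate from the table, and charge fuel until the tank runs dry. The bin edges, table entries, and fuel figure below are invented placeholders, not TRMM values:

```python
import bisect

# Hypothetical look-up table (illustrative, not TRMM data): maneuvers per
# month indexed by solar flux (F10.7) bins, for one ballistic coefficient.
# In the spreadsheet method each entry comes from a single 1-month run of
# traditional mission-analysis software.
FLUX_BINS = [70, 120, 170, 220]        # lower edges of F10.7 bins
MANEUVERS_PER_MONTH = [1, 3, 6, 10]

FUEL_PER_MANEUVER_KG = 0.8             # simple engine model (assumed figure)

def months_of_fuel(fuel_kg, monthly_flux):
    """Charge fuel per maneuver month by month, following the flux forecast,
    and return how many whole months elapse before the fuel is exhausted."""
    for month, flux in enumerate(monthly_flux):
        i = max(bisect.bisect_right(FLUX_BINS, flux) - 1, 0)
        fuel_kg -= MANEUVERS_PER_MONTH[i] * FUEL_PER_MANEUVER_KG
        if fuel_kg < 0:
            return month
    return len(monthly_flux)

# Rising solar flux drains fuel faster in the later months.
print(months_of_fuel(20.0, [80, 100, 150, 200, 250, 250]))  # → 5
```

Revising a solar flux prediction then only means re-running this cheap walk, not the underlying mission-analysis software, which is the source of the quoted time savings.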

  18. A Computational Model for Predicting Gas Breakdown

    Science.gov (United States)

    Gill, Zachary

    2017-10-01

Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regard to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.

  19. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  20. Approximating a similarity matrix by a latent class model: A reappraisal of additive fuzzy clustering

    NARCIS (Netherlands)

    Braak, ter C.J.F.; Kourmpetis, Y.I.A.; Kiers, H.A.L.; Bink, M.C.A.M.

    2009-01-01

    Let Q be a given n×n square symmetric matrix of nonnegative elements between 0 and 1, e.g. similarities. Fuzzy clustering results in fuzzy assignment of individuals to K clusters. In additive fuzzy clustering, the n×K fuzzy memberships matrix P is found by least-squares approximation of the off-diagonal

  1. Approximating a similarity matrix by a latent class model : A reappraisal of additive fuzzy clustering

    NARCIS (Netherlands)

    ter Braak, Cajo J. F.; Kourmpetis, Yiannis; Kiers, Henk A. L.; Bink, Marco C. A. M.

    2009-01-01

    Let Q be a given n x n square symmetric matrix of nonnegative elements between 0 and 1, e.g. similarities. Fuzzy clustering results in fuzzy assignment of individuals to K clusters. In additive fuzzy clustering, the n x K fuzzy memberships matrix P is found by least-squares approximation of the
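
    The least-squares problem in these two records can be sketched directly: approximate the off-diagonal entries of Q by P Pᵀ with nonnegative memberships P. The projected-gradient loop below is a minimal stand-in for the paper's actual algorithm, and the data are synthetic.

```python
import numpy as np

def additive_fuzzy_fit(Q, K, steps=3000, lr=0.01, seed=0):
    """Least-squares approximation of the off-diagonal of Q by P @ P.T with
    P >= 0: a minimal projected-gradient sketch, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    P = rng.uniform(0.1, 0.5, size=(n, K))
    mask = ~np.eye(n, dtype=bool)            # ignore the diagonal entries
    for _ in range(steps):
        R = (P @ P.T - Q) * mask             # masked residual (symmetric)
        P = np.clip(P - lr * 4.0 * R @ P, 0.0, 1.0)   # gradient step + projection
    return P

def offdiag_sse(Q, P):
    """Off-diagonal sum of squared errors, the objective being minimised."""
    mask = ~np.eye(Q.shape[0], dtype=bool)
    return float(np.sum(((P @ P.T - Q) * mask) ** 2))

# Similarities generated from a known membership matrix, then recovered
rng = np.random.default_rng(1)
P_true = rng.uniform(0.0, 0.7, size=(8, 2))
Q = P_true @ P_true.T
P_hat = additive_fuzzy_fit(Q, K=2)
```

    Masking the diagonal matters because Q's diagonal (self-similarity) need not equal the row norms of P.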

  2. Molecular Quantum Similarity Measures from Fermi hole Densities: Modeling Hammett Sigma Constants

    Czech Academy of Sciences Publication Activity Database

    Girónes, X.; Ponec, Robert

    2006-01-01

    Roč. 46, č. 3 (2006), s. 1388-1393 ISSN 1549-9596 Grant - others:SMCT(ES) SAF2000/0223/C03/01 Institutional research plan: CEZ:AV0Z40720504 Keywords : molecular quantum similarity measures * Fermi hole densities * substituent effect Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.423, year: 2006

  3. Regional differences in prediction models of lung function in Germany

    Directory of Open Access Journals (Sweden)

    Schäper Christoph

    2010-04-01

    Background Little is known about the influencing potential of specific characteristics on lung function in different populations. The aim of this analysis was to determine whether lung function determinants differ between subpopulations within Germany and whether prediction equations developed for one subpopulation are also adequate for another subpopulation. Methods Within three studies (KORA C, SHIP-I, ECRHS-I) in different areas of Germany 4059 adults performed lung function tests. The available data consisted of forced expiratory volume in one second, forced vital capacity and peak expiratory flow rate. For each study multivariate regression models were developed to predict lung function and Bland-Altman plots were established to evaluate the agreement between predicted and measured values. Results The final regression equations for FEV1 and FVC showed adjusted r-square values between 0.65 and 0.75, and for PEF they were between 0.46 and 0.61. In all studies gender, age, height and pack-years were significant determinants, each with a similar effect size. Regarding other predictors there were some, although not statistically significant, differences between the studies. Bland-Altman plots indicated that the regression models for each individual study adequately predict medium (i.e. normal, but not extremely high or low) lung function values in the whole study population. Conclusions Simple models with gender, age and height explain a substantial part of lung function variance, whereas further determinants add less than 5% to the total explained r-squared, at least for FEV1 and FVC. Thus, for different adult subpopulations of Germany one simple model for each lung function measure is still sufficient.
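
    A minimal version of the study's regression setup can be reproduced on synthetic data: predict FEV1 from gender, age, height and pack-years by ordinary least squares and report adjusted R². All coefficients and distributions below are invented, not the published equations.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
gender = rng.integers(0, 2, n).astype(float)   # 0 = female, 1 = male
age = rng.uniform(25, 75, n)                   # years
height = rng.uniform(150, 195, n)              # cm
packyears = rng.uniform(0, 40, n)

# Synthetic FEV1 [L] with made-up effect sizes plus noise
fev1 = (0.4 * gender - 0.025 * age + 0.04 * height
        - 0.01 * packyears - 2.0 + rng.normal(0, 0.3, n))

X = np.column_stack([np.ones(n), gender, age, height, packyears])
beta, *_ = np.linalg.lstsq(X, fev1, rcond=None)
pred = X @ beta

# Adjusted R^2, the fit statistic reported in the abstract
ss_res = np.sum((fev1 - pred) ** 2)
ss_tot = np.sum((fev1 - fev1.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - X.shape[1])
```

    Comparing such equations between subpopulations amounts to fitting `beta` separately per cohort and checking agreement of predictions, e.g. with Bland-Altman plots.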

  4. Which method predicts recidivism best?: A comparison of statistical, machine learning, and data mining predictive models

    OpenAIRE

    Tollenaar, N.; van der Heijden, P.G.M.

    2012-01-01

    Using criminal population conviction histories of recent offenders, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining and machine learning provide an improvement in predictive performance over classical statistical methods, namely logistic regression and linear discriminant analysis. These models are compared ...

  5. A new approach to predict the biological activity of molecules based on similarity of their interaction fields and the logP and logD values: application to auxins.

    Science.gov (United States)

    Bertosa, Branimir; Kojić-Prodić, Biserka; Wade, Rebecca C; Ramek, Michael; Piperaki, Stavroula; Tsantili-Kakoulidou, Anna; Tomić, Sanja

    2003-01-01

    The activity of a biological compound is dependent both on specific binding to a target receptor and on its ADME (Absorption, Distribution, Metabolism, Excretion) properties. A challenge in predicting biological activity is to consider both contributions simultaneously in deriving quantitative models. We present a novel approach to derive QSAR models combining similarity analysis of molecular interaction fields (MIFs) with prediction of logP and/or logD. This new classification method is applied to a set of about 100 compounds related to the auxin plant hormone. The classification based on similarity of their interaction fields is more successful for the indole than the phenoxy compounds. The classification of the phenoxy compounds is, however, improved by taking into account the influence of the logP and/or the logD values on biological activity. With the new combined method, the majority (8 out of 10) of the previously misclassified derivatives of phenoxy acetic acid are classified in accord with their bioassays. The recently determined crystal structure of the auxin-binding protein 1 (ABP1) enabled validation of our approach. The results of docking a few auxin-related compounds with different biological activity to ABP1 correlate well with the classification based on similarity of MIFs only. Biological activity is, however, better predicted by a combined similarity of MIFs + logP/logD approach.

  6. Predicting future glacial lakes in Austria using different modelling approaches

    Science.gov (United States)

    Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus

    2017-04-01

    Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems, including water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge of the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high-resolution glacier surface data allows predicting the topography below current glaciers by subtracting ice thickness from the glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations for future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps we apply different ice thickness models using high-resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories from 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed focusing on geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers
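
    The core computation, subtracting modelled ice thickness from the glacier surface and then finding overdeepenings in the resulting bed, amounts to depression filling of a DEM. Below is a minimal priority-flood sketch on a toy grid; the real workflow runs on high-resolution rasters and several ice thickness models.

```python
import heapq
import numpy as np

def fill_depressions(bed):
    """Priority-flood depression filling: returns the filled surface.
    Cells where filled > bed are overdeepenings, i.e. potential lake sites."""
    ny, nx = bed.shape
    filled = np.full_like(bed, np.inf, dtype=float)
    visited = np.zeros(bed.shape, dtype=bool)
    heap = []
    # seed the priority queue with the grid boundary (water can drain there)
    for i in range(ny):
        for j in range(nx):
            if i in (0, ny - 1) or j in (0, nx - 1):
                filled[i, j] = bed[i, j]
                visited[i, j] = True
                heapq.heappush(heap, (bed[i, j], i, j))
    while heap:
        h, i, j = heapq.heappop(heap)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not visited[ni, nj]:
                visited[ni, nj] = True
                filled[ni, nj] = max(bed[ni, nj], h)   # water level cannot drop
                heapq.heappush(heap, (filled[ni, nj], ni, nj))
    return filled

# Modelled glacier bed = current surface minus modelled ice thickness (toy values)
surface = np.full((5, 5), 100.0)
thickness = np.zeros((5, 5))
thickness[2, 2] = 30.0            # thickest ice sits over a bedrock hollow
bed = surface - thickness
lake_depth = fill_depressions(bed) - bed   # > 0 marks potential future lakes
```

    Running the same fill on beds from different ice thickness models and glacier extents yields the ensemble of candidate lake locations the abstract describes.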

  7. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...

  8. Video compressed sensing using iterative self-similarity modeling and residual reconstruction

    Science.gov (United States)

    Kim, Yookyung; Oh, Han; Bilgin, Ali

    2013-04-01

    Compressed sensing (CS) has great potential for use in video data acquisition and storage because it makes it unnecessary to collect an enormous amount of data and to perform the computationally demanding compression process. We propose an effective CS algorithm for video that consists of two iterative stages. In the first stage, frames containing the dominant structure are estimated. These frames are obtained by thresholding the coefficients of similar blocks. In the second stage, refined residual frames are reconstructed from the original measurements and the measurements corresponding to the frames estimated in the first stage. These two stages are iterated until convergence. The proposed algorithm exhibits superior subjective image quality and significantly improves the peak-signal-to-noise ratio and the structural similarity index measure compared to other state-of-the-art CS algorithms.

  9. Complete description of all self-similar models driven by Lévy stable noise

    Science.gov (United States)

    Weron, Aleksander; Burnecki, Krzysztof; Mercik, Szymon; Weron, Karina

    2005-01-01

    A canonical decomposition of H-self-similar Lévy symmetric α-stable processes is presented. The resulting components, completely described by both deterministic kernels and the corresponding stochastic integral with respect to the Lévy symmetric α-stable motion, are shown to be related to the dissipative and conservative parts of the dynamics. This result provides stochastic analysis tools for studying anomalous diffusion phenomena in the Langevin equation framework. For example, a simple computer test for the origins of self-similarity is implemented for four real empirical time series recorded from different physical systems: an ionic current flow through a single channel in a biological membrane, the energy of solar flares, a seismic electric signal recorded during seismic Earth activity, and foreign exchange rate daily returns.

  10. Models for predicting objective function weights in prostate cancer IMRT

    International Nuclear Information System (INIS)

    Boutilier, Justin J.; Lee, Taewoo; Craig, Tim; Sharpe, Michael B.; Chan, Timothy C. Y.

    2015-01-01

    Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. 
Conclusions: The authors demonstrated that the KNN and MLR
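
    The distance-weighted KNN predictor used in the study can be sketched without any ML library: neighbours in feature space (e.g. OV and OVSR values) vote with inverse-distance weights. The data below are invented; only the general technique follows the abstract.

```python
import numpy as np

def knn_predict(X_train, Y_train, x, k=5, eps=1e-9):
    """Distance-weighted K-nearest-neighbour prediction of a weight vector
    from geometry features; a generic sketch, not the paper's tuned model."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]            # the k closest training patients
    w = 1.0 / (d[idx] + eps)           # inverse-distance weights
    w /= w.sum()
    return w @ Y_train[idx]            # weighted average of neighbours' targets

# Toy data: features = (OV at 0.4 cm, OVSR at 0.1 cm); targets = (bladder, rectum)
# objective-function weights, generated by an invented smooth relationship.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(50, 2))
Y = np.column_stack([0.3 + 0.4 * X[:, 0], 0.7 - 0.4 * X[:, 1]])
pred = knn_predict(X, Y, np.array([0.5, 0.5]))
```

    In the study, such predicted weights feed the treatment-plan optimisation; here they are simply the output vector.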

  11. Building predictive models of soil particle-size distribution

    Directory of Open Access Journals (Sweden)

    Alessandro Samuel-Rosa

    2013-04-01

    Full Text Available Is it possible to build predictive models (PMs of soil particle-size distribution (psd in a region with complex geology and a young and unstable land-surface? The main objective of this study was to answer this question. A set of 339 soil samples from a small slope catchment in Southern Brazil was used to build PMs of psd in the surface soil layer. Multiple linear regression models were constructed using terrain attributes (elevation, slope, catchment area, convergence index, and topographic wetness index. The PMs explained more than half of the data variance. This performance is similar to (or even better than that of the conventional soil mapping approach. For some size fractions, the PM performance can reach 70 %. Largest uncertainties were observed in geologically more complex areas. Therefore, significant improvements in the predictions can only be achieved if accurate geological data is made available. Meanwhile, PMs built on terrain attributes are efficient in predicting the particle-size distribution (psd of soils in regions of complex geology.

  12. Patient-centred management in idiopathic pulmonary fibrosis: similar themes in three communication models.

    Science.gov (United States)

    Wuyts, Wim A; Peccatori, Fedro A; Russell, Anne-Marie

    2014-06-01

    The progressive and highly variable course of idiopathic pulmonary fibrosis (IPF) can present patients and their families with various challenges at different points of the disease. Structured communication between the healthcare professional and the patient is vital to ensure the best possible support and treatment for the patient. While research in this area has been limited, an increasing number of studies are emerging that support the role of communication in patients with debilitating and fatal lung diseases. Communication models used in other conditions that share many challenges with IPF, such as cancer, provide important insights for developing specifically designed patient support and communications models in IPF. Three communication models will be described: 1) the patient-centred care model (for oncology); 2) the three pillars of care model (for IPF); and 3) the Brompton model of care (for interstitial lung disease). Themes common to all three models include comprehensive patient education, encouraged patient participation and an accessible healthcare system, all supported by a collaborative provider-patient relationship. The development of effective communication skills is an on-going process and it is recommended to examine communication models used in other chronic diseases. ©ERS 2014.

  13. Decadal prediction skill in a multi-model ensemble

    Energy Technology Data Exchange (ETDEWEB)

    Oldenborgh, Geert Jan van; Wouters, Bert; Hazeleger, Wilco [Royal Netherlands Meteorological Institute (KNMI), P.O. Box 201, De Bilt (Netherlands); Doblas-Reyes, Francisco J. [Institució Catalana de Recerca i Estudis Avançats (ICREA) and Institut Català de Ciències del Clima (IC3), Barcelona (Spain)

    2012-04-15

    Decadal climate predictions may have skill due to predictable components in boundary conditions (mainly greenhouse gas concentrations but also tropospheric and stratospheric aerosol distributions) and initial conditions (mainly the ocean state). We investigate the skill of temperature and precipitation hindcasts from a multi-model ensemble of four climate forecast systems based on coupled ocean-atmosphere models. Regional variations in skill with and without trend are compared with similarly analysed uninitialised experiments to separate the trend due to monotonically increasing forcings from fluctuations around the trend due to the ocean initial state and aerosol forcings. In temperature most of the skill in both multi-model ensembles comes from the externally forced trends. The rise of the global mean temperature is represented well in the initialised hindcasts, but variations around the trend show little skill beyond the first year due to the absence of volcanic aerosols in the hindcasts and the unpredictability of ENSO. The models have non-trivial skill in hindcasts of North Atlantic sea surface temperature beyond the trend. This skill is highest in the northern North Atlantic in initialised experiments and in the subtropical North Atlantic in uninitialised simulations. A similar result is found in the Pacific Ocean, although the signal is less clear. The uninitialised simulations have good skill beyond the trend in the western North Pacific. The initialised experiments show some skill in the decadal ENSO region in the eastern Pacific, in agreement with previous studies. However, the results in this study are not statistically significant (p ≈ 0.1) by themselves. The initialised models also show some skill in forecasting 4-year mean Sahel rainfall at lead times of 1 and 5 years, in agreement with the observed teleconnection from the Atlantic Ocean. Again, the skill is not statistically significant (p ≈ 0.2). 
Furthermore, uninitialised simulations
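
    The trend/no-trend skill decomposition can be illustrated on synthetic series: correlate hindcast and observations as-is, then again after removing each series' own linear trend. The construction below (a shared warming trend plus independent internal variability) is illustrative, not the paper's data or exact metric.

```python
import numpy as np

def skill_with_and_without_trend(hindcast, obs, years):
    """Correlation skill before and after removing each series' linear trend,
    separating forced-trend skill from skill around the trend."""
    def detrend(x):
        slope, intercept = np.polyfit(years, x, 1)
        return x - (slope * years + intercept)
    corr = lambda a, b: np.corrcoef(a, b)[0, 1]
    return corr(hindcast, obs), corr(detrend(hindcast), detrend(obs))

years = np.arange(1960, 2010, dtype=float)
trend = 0.01 * (years - years[0])                  # shared warming trend [K]
rng = np.random.default_rng(0)
obs = trend + rng.normal(0, 0.1, years.size)       # trend + internal variability
hindcast = trend + rng.normal(0, 0.1, years.size)  # knows the trend, not the wobble

full, detrended = skill_with_and_without_trend(hindcast, obs, years)
```

    By construction the full correlation is dominated by the common trend, while the detrended correlation collapses toward zero, which is the signature the paper looks for when initialisation adds no skill beyond the forcing.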

  14. A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models

    Science.gov (United States)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha

    2018-01-01

    It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.

  15. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions and thus, they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. In these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  16. Possible Implications of a Vortex Gas Model and Self-Similarity for Tornadogenesis and Maintenance

    OpenAIRE

    Dokken, Douglas P.; Scholz, Kurt; Shvartsman, Mikhail M.; Bělík, Pavel; Potvin, Corey; Dahl, Brittany; McGovern, Amy

    2014-01-01

    We describe tornadogenesis and maintenance using the 3-dimensional vortex gas model presented in Chorin (1994) and developed further in Flandoli and Gubinelli (2002). We suggest that high-energy, supercritical vortices in the sense of Benjamin (1962), which have been studied by Fiedler and Rotunno (1986), have negative temperature in the sense of Onsager (1949) and play an important role in the model. We speculate that the formation of high-temperature vortices is related to the helicity inherite...

  17. Model Predictive Control for an Industrial SAG Mill

    DEFF Research Database (Denmark)

    Ohan, Valeriu; Steinke, Florian; Metzger, Michael

    2012-01-01

    We discuss Model Predictive Control (MPC) based on ARX models and a simple lower order disturbance model. The advantage of this MPC formulation is that it has few tuning parameters and is based on an ARX prediction model that can readily be identified using standard technologies from system identic...

  18. Uncertainties in spatially aggregated predictions from a logistic regression model

    NARCIS (Netherlands)

    Horssen, P.W. van; Pebesma, E.J.; Schot, P.P.

    2002-01-01

    This paper presents a method to assess the uncertainty of an ecological spatial prediction model which is based on logistic regression models, using data from the interpolation of explanatory predictor variables. The spatial predictions are presented as approximate 95% prediction intervals. The

  19. Dealing with missing predictor values when applying clinical prediction models.

    NARCIS (Netherlands)

    Janssen, K.J.; Vergouwe, Y.; Donders, A.R.T.; Harrell Jr, F.E.; Chen, Q.; Grobbee, D.E.; Moons, K.G.

    2009-01-01

    BACKGROUND: Prediction models combine patient characteristics and test results to predict the presence of a disease or the occurrence of an event in the future. In the event that test results (predictor) are unavailable, a strategy is needed to help users applying a prediction model to deal with

  20. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part, but could learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless and the numerical prediction model is often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than by using the complex numerical forecasting model that would occupy large computation resources, be time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  1. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences follow a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
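
    The NGM(1,1,k,c) model extends the classic GM(1,1) grey model, which is compact enough to sketch: accumulate the series, fit the grey differential equation by least squares, and extrapolate. The settlement readings below are invented; the NGM variant adds the terms that handle nonhomogeneous index trends, which this baseline omits.

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=3):
    """Classic GM(1,1) grey model: fits a first-order grey differential
    equation to a short positive series and extrapolates n_ahead steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background (mean) values
    B = np.column_stack([-z1, np.ones(z1.size)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # fit x0(k) = -a z1(k) + b
    k = np.arange(x0.size + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # whitenization solution
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # back to increments

# Invented settlement readings [mm], nearly exponential in trend
settlement = np.array([2.87, 3.28, 3.75, 4.29, 4.90])
forecast = gm11_fit_predict(settlement, n_ahead=3)
```

    For a purely geometric series the fitted equation is exact, which is the "exponential law coincidence" property the NGM variant generalises to nonhomogeneous index sequences.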

  2. An analytic solution of a model of language competition with bilingualism and interlinguistic similarity

    Science.gov (United States)

    Otero-Espinar, M. V.; Seoane, L. F.; Nieto, J. J.; Mira, J.

    2013-12-01

    An in-depth analytic study of a model of language dynamics is presented: a model which tackles the problem of the coexistence of two languages within a closed community of speakers, taking into account bilingualism and incorporating a parameter to measure the distance between languages. In previous numerical simulations, the model showed that coexistence might lead to survival of both languages, with monolingual speakers alongside a bilingual community, or to extinction of the weakest tongue, depending on different parameters. In this paper, that study is completed with thorough analytical calculations that settle the results in a robust way, and previous results are refined with some modifications. From the present analysis it is possible to almost completely determine the number and nature of the equilibrium points of the model, which depend on its parameters, as well as to build a phase space based on them. Also, we obtain conclusions on the way the languages evolve with time. Our rigorous considerations further suggest ways to improve the model and facilitate the comparison of its consequences with those from other approaches or with real data.
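
    The model analysed here extends the well-known Abrams-Strogatz two-language competition model with bilingualism and an interlinguistic-similarity parameter. As a baseline sketch (without those extensions), the original two-language dynamics can be integrated with a simple Euler loop; the exponent a = 1.31 is the commonly quoted Abrams-Strogatz fit, and the other numbers are arbitrary.

```python
def simulate(x0, s, a=1.31, c=1.0, dt=0.01, steps=20000):
    """Euler integration of the Abrams-Strogatz two-language model.
    x = fraction speaking language X; s = prestige of X (0 < s < 1)."""
    x = x0
    for _ in range(steps):
        # a language attracts speakers in proportion to its prestige and
        # a power of its current speaker fraction
        dx = c * ((1 - x) * s * x**a - x * (1 - s) * (1 - x)**a)
        x += dt * dx
    return x
```

    With this baseline, the stable equilibria are only extinction of one language (x = 0 or x = 1); the bilingual, similarity-aware extension studied in the record is precisely what opens the possibility of stable coexistence.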

  3. Modeling the angular motion dynamics of spacecraft with a magnetic attitude control system based on experimental studies and dynamic similarity

    Science.gov (United States)

    Kulkov, V. M.; Medvedskii, A. L.; Terentyev, V. V.; Firsyuk, S. O.; Shemyakov, A. O.

    2017-12-01

    The problem of spacecraft attitude control using electromagnetic systems interacting with the Earth's magnetic field is considered. A set of dimensionless parameters has been formed to investigate the spacecraft orientation regimes based on dynamically similar models. The results of experimental studies of small spacecraft with a magnetic attitude control system can be extrapolated to the in-orbit spacecraft motion control regimes by using the methods of the dimensional and similarity theory.

  4. Predictive capabilities of various constitutive models for arterial tissue.

    Science.gov (United States)

    Schroeder, Florian; Polzer, Stanislav; Slažanský, Martin; Man, Vojtěch; Skácel, Pavel

    2018-02-01

    The aim of this study is to validate some constitutive models by assessing their capabilities in describing and predicting uniaxial and biaxial behavior of porcine aortic tissue. 14 samples from porcine aortas were used to perform 2 uniaxial and 5 biaxial tensile tests. Transversal strains were furthermore stored for uniaxial data. The experimental data were fitted by four constitutive models: the Holzapfel-Gasser-Ogden model (HGO), a model based on the generalized structure tensor (GST), the Four-Fiber-Family model (FFF) and the Microfiber model. Fitting was performed to uniaxial and biaxial data sets separately and the descriptive capabilities of the models were compared. Their predictive capabilities were assessed in two ways. Firstly, each model was fitted to biaxial data and its accuracy (in terms of R² and NRMSE) in predicting both uniaxial responses was evaluated. Then this procedure was performed conversely: each model was fitted to both uniaxial tests and its accuracy in predicting the 5 biaxial responses was observed. The descriptive capabilities of all models were excellent. In predicting the uniaxial response from biaxial data, the Microfiber model was the most accurate, while the other models also showed reasonable accuracy. The Microfiber and FFF models were capable of reasonably predicting biaxial responses from uniaxial data, while the HGO and GST models failed completely in this task. The HGO and GST models are not capable of predicting biaxial arterial wall behavior, while the FFF model is the most robust of the investigated constitutive models. Knowledge of transversal strains in uniaxial tests improves the robustness of constitutive models. Copyright © 2017 Elsevier Ltd. All rights reserved.
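
    The "fit to uniaxial, predict biaxial" protocol can be illustrated with a deliberately simple one-parameter material, an incompressible neo-Hookean solid, standing in for the four models of the study. The modulus and noise level below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
mu_true = 40.0                                   # shear modulus [kPa], assumed
lam = np.linspace(1.0, 1.3, 20)                  # stretch ratios

# Synthetic uniaxial nominal stress for neo-Hooke: P = mu * (lam - lam**-2)
P_uni = mu_true * (lam - lam**-2) + rng.normal(0.0, 0.5, lam.size)

# One-parameter least-squares fit on the uniaxial data only
basis = lam - lam**-2
mu_fit = float(basis @ P_uni / (basis @ basis))

# Predict the equibiaxial response, P = mu * (lam - lam**-5), and score it
P_biax_true = mu_true * (lam - lam**-5)
P_biax_pred = mu_fit * (lam - lam**-5)
ss_res = float(np.sum((P_biax_true - P_biax_pred) ** 2))
r2 = 1.0 - ss_res / float(np.sum((P_biax_true - P_biax_true.mean()) ** 2))
```

    A one-parameter isotropic model transfers trivially between loading modes; the interesting finding of the study is that some anisotropic multi-parameter models (HGO, GST) do not.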

  5. Comparing National Water Model Inundation Predictions with Hydrodynamic Modeling

    Science.gov (United States)

    Egbert, R. J.; Shastry, A.; Aristizabal, F.; Luo, C.

    2017-12-01

    The National Water Model (NWM) simulates the hydrologic cycle and produces streamflow forecasts, runoff, and other variables for 2.7 million reaches along the National Hydrography Dataset for the continental United States. The NWM applies Muskingum-Cunge channel routing, which is based on the continuity equation. However, the momentum equation also needs to be considered to obtain better estimates of streamflow and stage in rivers, especially for applications such as flood inundation mapping. The Simulation Program for River NeTworks (SPRNT) is a fully dynamic model for large-scale river networks that solves the full nonlinear Saint-Venant equations for 1D flow and stage height in river channel networks with non-uniform bathymetry. For the current work, the steady-state version of the SPRNT model was leveraged. An evaluation of SPRNT's and NWM's abilities to predict inundation was conducted for the record flood of Hurricane Matthew in October 2016 along the Neuse River in North Carolina. This event was known to have been influenced by backwater effects from the hurricane's storm surge. Retrospective NWM discharge predictions were converted to stage using synthetic rating curves. The stages from both models were used to produce flood inundation maps using the Height Above Nearest Drainage (HAND) method, which uses local relative heights to provide a spatial representation of inundation depths. To validate the inundation produced by the models, Sentinel-1A synthetic aperture radar data in the VV and VH polarizations, along with auxiliary data, were used to produce a reference inundation map. A preliminary, binary comparison of the inundation maps to the reference, limited to the five HUC-12 areas of Goldsboro, NC, showed flood inundation accuracies of 74.68% for NWM and 78.37% for SPRNT.
The differences for all the relevant test statistics including accuracy, true positive rate, true negative rate, and positive predictive value were found
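The binary-map comparison described above can be sketched with a confusion matrix of wet/dry cells. This is an illustrative sketch, not the study's code; the arrays and function name are made up.

```python
import numpy as np

def inundation_metrics(model, reference):
    """Return accuracy, true positive rate, true negative rate, and
    positive predictive value for two binary rasters of equal shape.
    Convention: 1/True = wet (inundated), 0/False = dry."""
    model = np.asarray(model, dtype=bool)
    reference = np.asarray(reference, dtype=bool)
    tp = np.sum(model & reference)        # wet in both
    tn = np.sum(~model & ~reference)      # dry in both
    fp = np.sum(model & ~reference)       # modeled wet, observed dry
    fn = np.sum(~model & reference)       # modeled dry, observed wet
    n = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / n,
        "tpr": tp / (tp + fn),
        "tnr": tn / (tn + fp),
        "ppv": tp / (tp + fp),
    }

# Tiny synthetic example (2 x 3 raster)
model = np.array([[1, 1, 0], [0, 1, 0]])
reference = np.array([[1, 0, 0], [0, 1, 1]])
print(inundation_metrics(model, reference))
```

In practice the same four statistics would be computed over the full HAND-derived and SAR-derived rasters, masked to the study area.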

  6. Testing the model-observer similarity hypothesis with text-based worked examples

    NARCIS (Netherlands)

    Hoogerheide, V.; Loyens, S.M.M.; Jadi, Fedora; Vrins, Anna; van Gog, T.

    2017-01-01

    Example-based learning is a very effective and efficient instructional strategy for novices. It can be implemented using text-based worked examples that provide a written demonstration of how to perform a task, or (video) modelling examples in which an instructor (the ‘model’) provides a

  7. Predictive models for moving contact line flows

    Science.gov (United States)

    Rame, Enrique; Garoff, Stephen

    2003-01-01

    Modeling flows with moving contact lines poses the formidable challenge that the usual assumptions of a Newtonian fluid and a no-slip condition give rise to a well-known singularity. This singularity prevents one from satisfying the contact angle condition to compute the shape of the fluid-fluid interface, a crucial calculation without which design parameters such as the pressure drop needed to move an immiscible two-fluid system through a solid matrix cannot be evaluated. Some progress has been made for low-capillary-number spreading flows. Combining experimental measurements of fluid-fluid interfaces very near the moving contact line with an analytical expression for the interface shape, we can determine a parameter that forms a boundary condition for the macroscopic interface shape when Ca ≪ 1. This parameter, which plays the role of an "apparent" or macroscopic dynamic contact angle, is shown by the theory to depend on the system geometry through the macroscopic length scale. This theoretically established dependence on geometry allows the parameter to be "transferable" from the geometry of the measurement to any other geometry involving the same material system. Unfortunately, this prediction of the theory cannot be tested on Earth.

  8. In silico peptide-binding predictions of passerine MHC class I reveal similarities across distantly related species, suggesting convergence on the level of protein function

    DEFF Research Database (Denmark)

    Follin, Elna; Karlsson, Maria; Lundegaard, Claus

    2013-01-01

    compared to most mammals. To elucidate the reason for this large number of genes, we compared 14 MHC class I alleles (α1–α3 domains), from great reed warbler, house sparrow and tree sparrow, via phylogenetic analysis, homology modelling and in silico peptide-binding predictions to investigate...

  9. The similarity between optimal precursor and optimally growing initial error in prediction of Kuroshio large meander and its application to targeted observation

    NARCIS (Netherlands)

    Wang, Q; Mu, M.; Dijkstra, H.A.

    2013-01-01

    The links between optimal precursor (OPR) and optimally growing initial error (OGIE) in the predictability studies of Kuroshio large meander (LM) are investigated using the Conditional Nonlinear Optimal Perturbation approach within a 1.5-layer shallow-water model. The OPR is a kind of initial

  10. Developmental prediction model for early alcohol initiation in Dutch adolescents

    NARCIS (Netherlands)

    Geels, L.M.; Vink, J.M.; Beijsterveldt, C.E.M. van; Bartels, M.; Boomsma, D.I.

    2013-01-01

    Objective: Multiple factors predict early alcohol initiation in teenagers. Among these are genetic risk factors, childhood behavioral problems, life events, lifestyle, and family environment. We constructed a developmental prediction model for alcohol initiation below the Dutch legal drinking age

  11. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking slip models with respect to a reference model.
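The core ingredient of such a test can be sketched as follows: evaluate a pointwise loss field for each candidate model against the reference, then test whether the mean loss difference is distinguishable from zero. This is a simplified illustration, not the authors' SPCT implementation; the real test additionally accounts for spatial correlation in the loss-difference field, which the naive z-statistic below ignores.

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(1.0, 0.3, size=(20, 20))          # "true" slip field
model_a = reference + rng.normal(0, 0.05, size=(20, 20))  # close to reference
model_b = reference + rng.normal(0, 0.30, size=(20, 20))  # noisier model

def loss_field(model, ref):
    """Chosen loss function: pointwise absolute error."""
    return np.abs(model - ref)

# Loss-difference field: negative values favor model_a
d = loss_field(model_a, reference) - loss_field(model_b, reference)

# Naive z-statistic on the mean loss difference (ignores spatial correlation)
z = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))
print(f"mean loss difference = {d.mean():.3f}, z = {z:.2f}")
```

A strongly negative z here indicates model_a fits the reference better under the chosen loss; the full SPCT would inflate the variance estimate to account for correlated neighboring cells.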

  12. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we turned to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison was used to validate the predictive coding model against established models of bistable perception based on mutual inhibition and adaptation, noise, or a combination of adaptation and noise. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. 
Taken together, our current work

  13. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we turned to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison was used to validate the predictive coding model against established models of bistable perception based on mutual inhibition and adaptation, noise, or a combination of adaptation and noise. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. 
Taken together

  14. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population are strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985 to 2005, with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
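The simple benchmark mentioned above can be sketched as an ordinary least-squares regression of seasonal rainfall on a Niño3.4 index. The data below are synthetic and the variable names illustrative; only the form of the benchmark follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
nino34 = rng.normal(0.0, 1.0, size=21)              # one index per season, 1985-2005
rainfall = -0.6 * nino34 + rng.normal(0, 0.5, 21)   # assumed negative ENSO teleconnection

# Fit rainfall = a * nino34 + b by least squares
a, b = np.polyfit(nino34, rainfall, 1)
pred = a * nino34 + b

# Correlation skill of the linear benchmark
skill = np.corrcoef(rainfall, pred)[0, 1]
print(f"slope = {a:.2f}, correlation skill = {skill:.2f}")
```

A dynamical model would need to beat this correlation skill (ideally in cross-validation, which this sketch omits) to justify its added complexity.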

  15. Estradiol and progesterone exhibit similar patterns of hepatic gene expression regulation in the bovine model.

    Directory of Open Access Journals (Sweden)

    Carla A Piccinato

    Female sex steroid hormones, estradiol-17β (E2-17β) and progesterone (P4), regulate reproductive function and gene expression in a broad range of tissues. Given the central role of the liver in regulating homeostasis, including steroid hormone metabolism, we sought to understand how E2-17β and P4 interact to affect global gene expression in liver. Ovariectomized cows (n = 8) were randomly assigned to 4 treatment groups applied in a replicated Latin Square design: (1) no hormone supplementation, (2) E2-17β treatment (ear implant), (3) P4 treatment (intravaginal inserts), and (4) E2-17β combined with P4. After 14 d of treatment, liver biopsies were collected, allowing 28 d intervals between periods. Changes in gene expression in the liver biopsies were monitored using bovine-specific arrays. Treatment with E2-17β altered the expression of 479 genes, P4 472 genes, and the combined treatment significantly altered the expression of 468 genes. In total, 578 genes exhibited altered expression, including a remarkable number (346 genes) that responded similarly to E2-17β, P4, or combined treatment. Additional evidence for similar gene expression actions of E2-17β and/or P4 included: principal component analysis placed almost every treatment array at a substantial distance from controls; Venn diagrams indicated overall treatment effects for most regulated genes; and clustering analysis indicated that the two major clusters had all treatments up-regulating (172 genes) or down-regulating (173 genes) expression. Thus, unexpectedly, common biological pathways were regulated by E2-17β and/or P4 in liver. This indicates that the mechanism of action of these steroid hormones in the liver might be either indirect or might occur through non-genomic pathways. This unusual pattern of gene expression in response to steroid hormones is consistent with the idea that there are classical and non-classical tissue-specific responses to steroid hormone actions. Future studies are needed to elucidate

  16. A neural network model for predicting postures during non-repetitive manual materials handling tasks.

    Science.gov (United States)

    Perez, Miguel A; Nussbaum, Maury A

    2008-10-01

    Posture prediction can be useful in facilitating the design and evaluation processes for manual materials handling tasks. This study evaluates the ability of artificial neural network models to predict initial and final lifting postures in 2-D and 3-D scenarios. Descriptors for the participant and condition of interest were input to the models; outputs consisted of posture-defining joint angles. Models were trained with subsets of an existing posture database before predictions were generated. The trained models' predictions were then evaluated using the remaining data, which included conditions not presented during training. Prediction errors were consistent across these data subsets, suggesting that the models generalised well to novel conditions. The models generally predicted whole-body postures with per-joint errors in the 5°-20° range, though some errors were larger, particularly for 3-D conditions. These models provided reasonably accurate predictions, even outperforming some computational approaches previously proposed for similar purposes. Suggestions for future refinement of such models are presented. The models in this investigation provide a means to predict initial and final postures in commonly occurring manual materials handling tasks. In addition, the model structures provide information about potential lifting strategies that may be used by individuals with particular anthropometry or strength characteristics.
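The modeling idea above can be sketched as a small feed-forward network mapping task/anthropometry descriptors to joint angles. Everything here is synthetic and illustrative (layer sizes, inputs, and the target mapping are assumptions, not the authors' architecture); it only demonstrates the descriptor-to-angles regression setup.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))          # e.g. stature, load mass, reach distances
W_true = rng.normal(size=(4, 3))
Y = np.tanh(X @ W_true)                # 3 normalized "joint angles" to predict

# One hidden tanh layer, trained by plain batch gradient descent on squared error
W1 = rng.normal(scale=0.5, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 3)); b2 = np.zeros(3)
lr = 0.1
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)           # hidden activations
    P = H @ W2 + b2                    # predicted joint angles
    G = 2 * (P - Y) / len(X)           # dLoss/dP for mean squared error
    W2 -= lr * H.T @ G; b2 -= lr * G.sum(0)
    GH = (G @ W2.T) * (1 - H**2)       # backprop through tanh
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)

mse = np.mean((P - Y) ** 2)
print(f"final training MSE: {mse:.4f}")
```

A real application would hold out conditions for evaluation, as the study does, rather than reporting training error.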

  17. MODELLING OF DYNAMIC SPEED LIMITS USING THE MODEL PREDICTIVE CONTROL

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    The article considers the issues of traffic management using the intelligent "Car-Road" system (IVHS), which consists of interacting intelligent vehicles (IV) and intelligent roadside controllers. Vehicles are organized in convoys with small distances between them. All vehicles are assumed to be fully automated (throttle control, braking, steering). Approaches are proposed for determining speed limits for traffic on the motorway using model predictive control (MPC). The article proposes an approach to dynamic speed limits that minimizes the downtime of vehicles in traffic.

  18. Multi-Layer Identification of Highly-Potent ABCA1 Up-Regulators Targeting LXRβ Using Multiple QSAR Modeling, Structural Similarity Analysis, and Molecular Docking

    Directory of Open Access Journals (Sweden)

    Meimei Chen

    2016-11-01

    In this study, in silico approaches, including multiple QSAR modeling, structural similarity analysis, and molecular docking, were applied to develop QSAR classification models as a fast screening tool for identifying highly potent ABCA1 up-regulators targeting LXRβ based on a series of new flavonoids. Initially, four modeling approaches, including linear discriminant analysis, support vector machine, radial basis function neural network, and classification and regression trees, were applied to construct different QSAR classification models. The statistical results indicated that these four kinds of QSAR models were powerful tools for screening highly potent ABCA1 up-regulators. Then, a consensus QSAR model was developed by combining the predictions from these four models. To discover new ABCA1 up-regulators at maximum accuracy, the compounds in the ZINC database that fulfilled the requirement of structural similarity of 0.7 compared to known potent ABCA1 up-regulators were subjected to the consensus QSAR model, which led to the discovery of 50 compounds. Finally, they were docked into the LXRβ binding site to understand their role in up-regulating ABCA1 expression. The excellent binding modes and docking scores of 10 hit compounds suggested that they were highly potent ABCA1 up-regulators targeting LXRβ. Overall, this study provided an effective strategy to discover highly potent ABCA1 up-regulators.
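A consensus classifier of the kind described above can be sketched as a majority vote over the base models' class labels. The base predictions below are stand-ins, not real QSAR outputs, and the tie-breaking rule is an assumption.

```python
import numpy as np

def consensus_predict(predictions):
    """predictions: array of shape (n_models, n_compounds) with 0/1 labels.
    Returns the majority-vote label per compound (ties break to 1, 'active')."""
    predictions = np.asarray(predictions)
    votes = predictions.sum(axis=0)
    return (votes * 2 >= predictions.shape[0]).astype(int)

# Four hypothetical base models (e.g. LDA, SVM, RBF-NN, CART) on five compounds
preds = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1],
])
print(consensus_predict(preds))   # majority label per compound
```

Only compounds passing the consensus vote would then proceed to the docking stage.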

  19. The Rigorous Model for Similarity Transformation under Intra-frame and Inter-frame Covariance

    Directory of Open Access Journals (Sweden)

    ZENG Anmin

    2017-01-01

    Coordinates are obtained from observations by the least-squares method, so their precision is contaminated by observation errors, and covariance also exists between common points and non-common points. Coordinate errors exist not only in the initial frame but also in the target frame. The classical stepwise approach to coordinate frame transformation, however, usually accounts only for the coordinate errors of the initial frame and overlooks the stochastic correlation between common points and non-common points. A new rigorous unified model is proposed for coordinate frame transformation that takes into account both the errors of all coordinates in both frames and the inter-frame coordinate covariance between common points and non-common points. The corresponding estimator for the transformed coordinates is derived and involves appropriate corrections to the standard approach, in which the transformation parameters and the transformed coordinates for all points are computed in a single-step least-squares adjustment. The inter-frame coordinate covariance should be consistent with the coordinate uncertainties, but in practice these uncertainties are not consistent. To balance the covariance matrices of the two frames, an adaptive estimator for the unified model is derived in which the adaptive factor is constructed from the ratio computed by Helmert covariance component estimation; reasonable and consistent covariance matrices are arrived at through adjustment of the adaptive factor. Finally, an experiment with 2000 points from the Crustal Movement Observation Network of China (CMONOC) was carried out to verify the new model; the results show that the proposed model can significantly improve the precision of the coordinate transformation.
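For orientation, the classical error-free building block that the paper generalizes can be sketched: estimating a 2-D similarity transformation (scale, rotation, translation) from common points by ordinary least squares, with no frame covariances. This is an illustrative sketch (Procrustes-style solution on synthetic points), not the paper's rigorous unified model.

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Estimate s, R, t such that dst ≈ s * src @ R.T + t (rows are points)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    A, B = src - cs, dst - cd                 # center both point sets
    U, S, Vt = np.linalg.svd(A.T @ B)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                  # enforce a proper rotation
        Vt[-1] *= -1
        R = (U @ Vt).T
    s = S.sum() / (A ** 2).sum()              # least-squares scale
    t = cd - s * cs @ R.T
    return s, R, t

# Check on synthetic, noise-free common points
rng = np.random.default_rng(3)
src = rng.normal(size=(6, 2))
theta, scale, shift = 0.3, 1.2, np.array([5.0, -2.0])
Rt = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
dst = scale * src @ Rt.T + shift
s, R, t = fit_similarity_2d(src, dst)
print(round(s, 3), np.round(t, 3))
```

The paper's contribution is to replace this deterministic step with a single-step adjustment that weights both frames' coordinate covariances, including the common/non-common point correlations.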

  20. Predictability in models of the atmospheric circulation

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error

  1. Predictive market segmentation model: An application of logistic regression model and CHAID procedure

    Directory of Open Access Journals (Sweden)

    Soldić-Aleksić Jasna

    2009-01-01

    Market segmentation is one of the key concepts of modern marketing. Its main goal is to create groups (segments) of customers with similar characteristics, needs, wishes, and/or similar behavior regarding the purchase of a concrete product/service. Companies can create a specific marketing plan for each of these segments and thereby gain a short- or long-term competitive advantage on the market. Depending on the concrete marketing goal, different segmentation schemes and techniques may be applied. This paper presents a predictive market segmentation model based on the application of a logistic regression model and the CHAID procedure. The logistic regression model was used to select, from the initial pool of eleven variables, those that are statistically significant for explaining the dependent variable. The selected variables were afterwards included in the CHAID procedure, which generated the predictive market segmentation model. The model results are presented for a concrete empirical example in the following form: summary model results, CHAID tree, gain chart, index chart, and risk and classification tables.

  2. Re-parametrization of a swine model to predict growth performance of broilers

    OpenAIRE

    Dukhta, G.; van Milgen, Jacob; Kövér, G.; Halas, V.

    2017-01-01

    The aim of the study was to investigate whether a pig growth model can be modified and adapted for broilers. As monogastric animals, pigs and poultry share many similarities in digestion and metabolism; many structures (body protein and lipid stores) and the nutrient flows of the underlying metabolic pathways are similar across species. For that purpose, the InraPorc model was used as a basis to predict growth performance and body composition at slaughter in broilers. First...

  3. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  4. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  5. Korean risk assessment model for breast cancer risk prediction.

    Directory of Open Access Journals (Sweden)

    Boyoung Park

    PURPOSE: We evaluated the performance of the Gail model for a Korean population and developed a Korean breast cancer risk assessment tool (KoBCRAT), based upon the equations developed for the Gail model, for predicting breast cancer risk. METHODS: Using 3,789 sets of cases and controls, risk factors for breast cancer among Koreans were identified. Individual probabilities were projected using Gail's equations and Korean hazard data. We compared the 5-year and lifetime risks produced using the modified Gail model, which applied Korean incidence and mortality data and the parameter estimators from the original Gail model, with those produced using the KoBCRAT. We validated the KoBCRAT based on the expected/observed breast cancer incidence and area under the curve (AUC) using two Korean cohorts: the Korean Multicenter Cancer Cohort (KMCC) and the National Cancer Center (NCC) cohort. RESULTS: The major risk factors under the age of 50 were family history, age at menarche, age at first full-term pregnancy, menopausal status, breastfeeding duration, oral contraceptive usage, and exercise, while those at and over the age of 50 were family history, age at menarche, age at menopause, pregnancy experience, body mass index, oral contraceptive usage, and exercise. The modified Gail model produced lower 5-year risk for the cases than for the controls (p = 0.017), while the KoBCRAT produced higher 5-year and lifetime risks for the cases than for the controls (p < 0.001 for both). The observed incidence of breast cancer in the two cohorts was similar to the expected incidence from the KoBCRAT (KMCC, p = 0.880; NCC, p = 0.878). The AUC using the KoBCRAT was 0.61 for the KMCC and 0.89 for the NCC cohort. CONCLUSIONS: Our findings suggest that the KoBCRAT is a better tool for predicting the risk of breast cancer in Korean women, especially urban women.
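The two validation metrics named above (expected/observed incidence and AUC) can be sketched on made-up risk scores; the numbers below are illustrative, not the study's cohorts.

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC: probability that a random case outscores a random
    control (assumes no tied scores in this simple sketch)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = labels.sum()
    n0 = len(labels) - n1
    return (ranks[labels == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

risk = np.array([0.9, 0.8, 0.4, 0.35, 0.2, 0.1])   # predicted risks
case = np.array([1,   1,   0,   1,    0,   0 ])    # observed outcomes

# Expected/observed ratio: summed predicted risk vs. observed case count
eo_ratio = risk.sum() / case.sum()
print(f"E/O = {eo_ratio:.2f}, AUC = {auc(risk, case):.2f}")
```

An E/O ratio near 1 indicates good calibration, while the AUC measures discrimination, which is why validation studies report both.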

  6. Linking accretion flow and particle acceleration in jets - II. Self-similar jet models with full relativistic MHD gravitational mass

    NARCIS (Netherlands)

    Polko, P.; Meier, D.L.; Markoff, S.

    2014-01-01

    We present a new, semi-analytic formalism to model the acceleration and collimation of relativistic jets in a gravitational potential. The gravitational energy density includes the kinetic, thermal and electromagnetic mass contributions. The solutions are close to self-similar throughout the

  7. Moving Towards Dynamic Ocean Management: How Well Do Modeled Ocean Products Predict Species Distributions?

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2016-02-01

    Species distribution models are now widely used in conservation and management to predict suitable habitat for protected marine species. The primary sources of dynamic habitat data have been in situ and remotely sensed oceanic variables (both are considered "measured data"), but now ocean models can provide historical estimates and forecast predictions of relevant habitat variables such as temperature, salinity, and mixed layer depth. To assess the performance of modeled ocean data in species distribution models, we present a case study for cetaceans that compares models based on output from a data-assimilative implementation of the Regional Ocean Modeling System (ROMS) to those based on measured data. Specifically, we used seven years of cetacean line-transect survey data collected between 1991 and 2009 to develop predictive habitat-based models of cetacean density for 11 species in the California Current Ecosystem. Two different generalized additive models were compared: one built with a full suite of ROMS output and another built with a full suite of measured data. Model performance was assessed using the percentage of explained deviance, root mean squared error (RMSE), observed-to-predicted density ratios, and visual inspection of predicted and observed distributions. Predicted distribution patterns were similar for models using ROMS output and measured data, and showed good concordance between observed sightings and model predictions. Quantitative measures of predictive ability were also similar between model types, and RMSE values were almost identical. The overall demonstrated success of the ROMS-based models opens new opportunities for dynamic species management and biodiversity monitoring because ROMS output is available in near real time and can be forecast.

  8. SIMILARITIES BETWEEN THE KNOWLEDGE CREATION AND CONVERSION MODEL AND THE COMPETING VALUES FRAMEWORK: AN INTEGRATIVE APPROACH

    Directory of Open Access Journals (Sweden)

    PAULO COSTA

    2016-12-01

    ABSTRACT Management has undergone successive paradigm shifts since the seventeenth century, and we are now witnessing a new era marked by a structural rupture in the way organizations are perceived. Market globalization, cemented by rapid technological evolution and associated with economic, cultural, political, and social transformations, characterizes a reality in which uncertainty is the only certainty for organizations and managers. Knowledge management has been interpreted by managers and academics as a viable alternative for creating and conserving sustainable competitive advantages. However, there are several barriers to the implementation and development of knowledge management programs in organizations, organizational culture being one of the most preponderant. In this article, we analyze and compare the Knowledge Creation and Conversion Model proposed by Nonaka and Takeuchi (1995) and Quinn and Rohrbaugh's Competing Values Framework (1983), since both have convergent conceptual lines that can assist managers in different sectors in guiding their organizations toward productivity, quality, and market competitiveness.

  9. Explicit Modeling of Ancestry Improves Polygenic Risk Scores and BLUP Prediction.

    Science.gov (United States)

    Chen, Chia-Yen; Han, Jiali; Hunter, David J; Kraft, Peter; Price, Alkes L

    2015-09-01

    Polygenic prediction using genome-wide SNPs can provide high prediction accuracy for complex traits. Here, we investigate the question of how to account for genetic ancestry when conducting polygenic prediction. We show that the accuracy of polygenic prediction in structured populations may be partly due to genetic ancestry. However, we hypothesized that explicitly modeling ancestry could improve polygenic prediction accuracy. We analyzed three GWAS of hair color (HC), tanning ability (TA), and basal cell carcinoma (BCC) in European Americans (sample size from 7,440 to 9,822) and considered two widely used polygenic prediction approaches: polygenic risk scores (PRSs) and best linear unbiased prediction (BLUP). We compared polygenic prediction without correction for ancestry to polygenic prediction with ancestry as a separate component in the model. In 10-fold cross-validation using the PRS approach, the R(2) for HC increased by 66% (0.0456-0.0755; P ancestry, which prevents ancestry effects from entering into each SNP effect and being overweighted. Surprisingly, explicitly modeling ancestry produces a similar improvement when using the BLUP approach, which fits all SNPs simultaneously in a single variance component and causes ancestry to be underweighted. We validate our findings via simulations, which show that the differences in prediction accuracy will increase in magnitude as sample sizes increase. In summary, our results show that explicitly modeling ancestry can be important in both PRS and BLUP prediction. © 2015 WILEY PERIODICALS, INC.
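The central idea above can be sketched with synthetic data: include an ancestry component (for instance, a genotype principal component) as a separate covariate alongside the polygenic risk score, rather than letting ancestry leak into the per-SNP effects. All effect sizes here are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 2000
ancestry = rng.normal(size=n)                  # e.g. first genotype PC
prs = 0.5 * ancestry + rng.normal(size=n)      # PRS partly tags ancestry
pheno = 0.3 * prs + 0.4 * ancestry + rng.normal(size=n)

def r2(predictors, y):
    """In-sample R² of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_prs = r2([prs], pheno)                      # PRS alone
r2_prs_anc = r2([prs, ancestry], pheno)        # PRS + explicit ancestry term
print(f"R2 (PRS only) = {r2_prs:.3f}, R2 (PRS + ancestry) = {r2_prs_anc:.3f}")
```

In a real analysis the comparison would be done in cross-validation, as in the study, rather than in-sample.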

  10. Explicit modeling of ancestry improves polygenic risk scores and BLUP prediction

    Science.gov (United States)

    Chen, Chia-Yen; Han, Jiali; Hunter, David J.; Kraft, Peter; Price, Alkes L.

    2016-01-01

    Polygenic prediction using genome-wide SNPs can provide high prediction accuracy for complex traits. Here, we investigate the question of how to account for genetic ancestry when conducting polygenic prediction. We show that the accuracy of polygenic prediction in structured populations may be partly due to genetic ancestry. However, we hypothesized that explicitly modeling ancestry could improve polygenic prediction accuracy. We analyzed three GWAS of hair color, tanning ability and basal cell carcinoma (BCC) in European Americans (sample size from 7,440 to 9,822) and considered two widely used polygenic prediction approaches: polygenic risk scores (PRS) and Best Linear Unbiased Prediction (BLUP). We compared polygenic prediction without correction for ancestry to polygenic prediction with ancestry as a separate component in the model. In 10-fold cross-validation using the PRS approach, the R2 for hair color increased by 66% (0.0456 to 0.0755; p < …) when explicitly modeling ancestry, which prevents ancestry effects from entering into each SNP effect and being over-weighted. Surprisingly, explicitly modeling ancestry produces a similar improvement when using the BLUP approach, which fits all SNPs simultaneously in a single variance component and causes ancestry to be underweighted. We validate our findings via simulations, which show that the differences in prediction accuracy will increase in magnitude as sample sizes increase. In summary, our results show that explicitly modeling ancestry can be important in both PRS and BLUP prediction. PMID:25995153

  11. Regression models for predicting anthropometric measurements of ...

    African Journals Online (AJOL)

    measure anthropometric dimensions to predict difficult-to-measure dimensions required for ergonomic design of school furniture. A total of 143 students aged between 16 and 18 years from eight public secondary schools in Ogbomoso, Nigeria ...

  12. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    direction (σx) had a maximum value of 375 MPa (tensile) and minimum value of ... These results show that the residual stresses obtained by prediction from the finite element method are in fair agreement with the experimental results.

  13. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful......). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...... visualization to improve our understanding of the different attained performances, effectively compiling all the conducted experiments in a meaningful way. We complete our study with an entropy-based analysis that highlights the uncertainty handling properties provided by the GP, crucial for prediction tasks...

  14. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  15. From Predictive Models to Instructional Policies

    Science.gov (United States)

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…

  16. Two Echelon Supply Chain Integrated Inventory Model for Similar Products: A Case Study

    Science.gov (United States)

    Parjane, Manoj Baburao; Dabade, Balaji Marutirao; Gulve, Milind Bhaskar

    2017-06-01

    The purpose of this paper is to develop a mathematical model for minimizing total cost across echelons in a multi-product supply chain environment. The scenario under consideration is a two-echelon supply chain system with one manufacturer, one retailer and M products. The retailer faces independent Poisson demand for each product. The retailer and the manufacturer are closely coupled, in the sense that information about any depletion in the inventory of a product at the retailer's end is immediately available to the manufacturer. Further, stock-outs are backordered at the retailer's end. Thus the costs incurred at the retailer's end are the holding costs and the backorder costs. The manufacturer has only one processor, which is time-shared among the M products. Production changeover from one product to another entails a fixed setup cost and a fixed setup time. Each unit of a product has a production time. Considering the cost components, and assuming transportation time and cost to be negligible, the objective of the study is to minimize the expected total cost for both the manufacturer and the retailer. In the process, two quantities are to be defined. Firstly, every time a product is taken up for production, how much of it (the production batch size, q) should be produced? A large value of q favors the manufacturer, while a small value suits the retailer. Secondly, for a given batch size q, at what level S of the retailer's inventory (the production queuing point) should a product be taken up for production by the manufacturer? A higher value of S incurs more holding cost, whereas a lower value of S increases the chance of backorder. A tradeoff between the holding and backorder costs must be taken into consideration when choosing an optimal value of S. It may be noted that, due to multiple products and a single processor, a product taken up for production may not get the processor immediately, and may have to wait in a queue. The S…
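The holding-versus-backorder tradeoff in choosing the queuing point S can be sketched with a toy Monte Carlo. The demand rate, lead time and cost coefficients below are invented for illustration and are not taken from the case study; the sketch also ignores the multi-product queueing aspect and looks at a single product.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_cost(S, lam=5.0, lead=2.0, h=1.0, b=9.0, n=20000):
    """Approximate holding + backorder cost when a batch arrives, for trigger
    level S, Poisson demand rate lam per unit time, and replenishment lead time."""
    demand = rng.poisson(lam * lead, n)        # demand during the lead time
    on_hand = np.maximum(S - demand, 0)        # leftover stock -> holding cost
    short = np.maximum(demand - S, 0)          # unmet demand -> backorder cost
    return h * on_hand.mean() + b * short.mean()

costs = {S: expected_cost(S) for S in range(0, 25)}
best = min(costs, key=costs.get)
print(best, round(costs[best], 2))
```

A larger S inflates the holding term, a smaller S inflates the backorder term, so the simulated cost curve is U-shaped and the minimizer sits near the corresponding Poisson quantile, mirroring the tradeoff the abstract describes.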

  17. Reducing Uncertainty in Chemistry Climate Model Predictions of Stratospheric Ozone

    Science.gov (United States)

    Douglass, A. R.; Strahan, S. E.; Oman, L. D.; Stolarski, R. S.

    2014-01-01

    Chemistry climate models (CCMs) are used to predict the future evolution of stratospheric ozone as ozone-depleting substances decrease and greenhouse gases increase, cooling the stratosphere. CCM predictions exhibit many common features, but also a broad range of values for quantities such as the year of ozone return to 1980 levels and the global ozone level at the end of the 21st century. Multiple linear regression is applied to each of 14 CCMs to separate the ozone response to chlorine change from that due to climate change. We show that the sensitivity of lower atmosphere ozone to chlorine change, ΔO3/ΔCly, is a near-linear function of the partitioning of total inorganic chlorine (Cly) into its reservoirs; both Cly and its partitioning are controlled by lower atmospheric transport. CCMs with realistic transport agree with observations for chlorine reservoirs and produce similar ozone responses to chlorine change. After 2035, differences in response to chlorine contribute little to the spread in CCM results as the anthropogenic contribution to Cly becomes unimportant. Differences among upper stratospheric ozone increases due to temperature decreases are explained by differences in ozone sensitivity to temperature change, ΔO3/ΔT, due to different contributions from various ozone loss processes, each with its own temperature dependence. In the lower atmosphere, tropical ozone decreases caused by a predicted speed-up in the Brewer-Dobson circulation may or may not be balanced by middle and high latitude increases, contributing most to the spread in late 21st century predictions.
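The regression step described here — attributing a modeled ozone time series to a chlorine term and a temperature term — can be sketched on synthetic data. The regressor shapes, sensitivities and noise level below are invented for illustration, not taken from any CCM.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1960, 2101)

# Hypothetical regressors: a chlorine loading curve that rises then decays,
# and a steady stratospheric cooling trend; both centered to zero mean.
cly = np.exp(-((years - 1995) / 30.0) ** 2)
temp = -0.01 * (years - years[0])
X = np.column_stack([cly - cly.mean(), temp - temp.mean()])

# Synthetic "model ozone": known sensitivities plus noise.
a_true, b_true = -25.0, -3.0
ozone = a_true * X[:, 0] + b_true * X[:, 1] + rng.normal(0, 1.0, len(years))

# Multiple linear regression recovers the two sensitivities separately.
coef, *_ = np.linalg.lstsq(X, ozone - ozone.mean(), rcond=None)
print(coef)
```

Because the chlorine and temperature regressors have different time shapes, least squares can disentangle their contributions even though both act on the same ozone record.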

  18. Circumplex models for the similarity relationships between higher-order factors of personality and personality disorders: an empirical analysis.

    Science.gov (United States)

    Pukrop, R; Sass, H; Steinmeyer, E M

    2000-01-01

    Similarity relationships between personality factors and personality disorders (PDs) are usually described within the conceptual framework of the "big five" model. Recently, two-dimensional circumplex models have been suggested as alternatives, such as the interpersonal circle, the multifacet circumplex, and the circumplex of premorbid personality types. The present study is an empirical investigation of the similarity relationships between the big five, the 11 DSM-III-R personality disorders and four subaffective disorders. This was performed in a sample of 165 psychiatric inpatients. We tested the extent to which the relationships could be adequately represented in two dimensions and which circumplex model can be supported by the empirical configuration. Results obtained by principal-components analysis (PCA) strongly confirm the circumplex of premorbid personality, and to some extent the multifacet circumplex. However, the interpersonal circle cannot be confirmed.

  19. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is at the core of concrete pavement maintenance and design. Highway agencies are often faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that a further direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.
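The Markov chain approach mentioned in this record works by propagating a distribution over discrete condition states through a transition matrix, one inspection cycle at a time. The matrix below is purely illustrative, not calibrated to any pavement data.

```python
import numpy as np

# Condition states 1 (best) .. 5 (worst); a section can stay put or drop one
# state per year. State 5 is absorbing. Probabilities are invented.
P = np.array([
    [0.80, 0.20, 0.00, 0.00, 0.00],
    [0.00, 0.75, 0.25, 0.00, 0.00],
    [0.00, 0.00, 0.70, 0.30, 0.00],
    [0.00, 0.00, 0.00, 0.65, 0.35],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # all sections start in state 1
for year in range(10):
    state = state @ P                          # propagate one inspection cycle

expected_condition = state @ np.arange(1, 6)   # mean condition after 10 years
print(state.round(3), round(float(expected_condition), 2))
```

This is why the abstract notes the MC model suits sparse data: only the transition matrix must be estimated from visual inspection ratings, with no physical covariates required.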

  20. Similarity reduction of a three-dimensional model of the far turbulent wake behind a towed body

    Science.gov (United States)

    Schmidt, Alexey

    2011-12-01

    A semi-empirical three-dimensional model of turbulence in the approximation of the far turbulent wake behind a towed body in a passively stratified medium is considered. The sought quantities of the model are the velocity defect, turbulent kinetic energy, kinetic energy dissipation rate, averaged density defect and density fluctuation variance. The full group of transformations admitted by this model is found. The governing equations are reduced to ordinary differential equations by similarity reduction and the method of B-determining equations (BDE method). The system of ordinary differential equations was solved numerically. The obtained solutions agree with experimental data.

  1. Comparing predictive models of glioblastoma multiforme built using multi-institutional and local data sources.

    Science.gov (United States)

    Singleton, Kyle W; Hsu, William; Bui, Alex A T

    2012-01-01

    The growing amount of electronic data collected from patient care and clinical trials is motivating the creation of national repositories where multiple institutions share data about their patient cohorts. Such efforts aim to provide sufficient sample sizes for data mining and predictive modeling, ultimately improving treatment recommendations and patient outcome prediction. While these repositories offer the potential to improve our understanding of a disease, potential issues need to be addressed to ensure that multi-site data and resultant predictive models are useful to non-contributing institutions. In this paper we examine the challenges of utilizing National Cancer Institute datasets for modeling glioblastoma multiforme. We created several types of prognostic models and compared their results against models generated using data solely from our institution. While overall model performance between the data sources was similar, different variables were selected during model generation, suggesting that mapping data resources between models is not a straightforward issue.

  2. A model to predict the beginning of the pollen season

    DEFF Research Database (Denmark)

    Toldam-Andersen, Torben Bo

    1991-01-01

    In order to predict the beginning of the pollen season, a model comprising the Utah phenoclimatography Chill Unit (CU) and ASYMCUR-Growing Degree Hour (GDH) submodels was used to predict first bloom in Alnus, Ulmus and Betula. The model relates environmental temperatures to rest completion...... and bud development. As the phenologic parameter, 14 years of pollen counts were used. The observed dates for the beginning of the pollen seasons were defined from the pollen counts and compared with the model prediction. The CU and GDH submodels were used as: 1. A fixed day model, using only the GDH model...... for fruit trees are generally applicable, and give a reasonable description of the growth processes of other trees. This type of model can therefore be of value in predicting the start of the pollen season. The predicted dates were generally within 3-5 days of the observed. Finally the possibility of frost...

  3. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  4. A predictive model to differentiate dengue from other febrile illness.

    Science.gov (United States)

    Fernández, Eduardo; Smieja, Marek; Walter, Stephen D; Loeb, Mark

    2016-11-22

    Dengue is a major public health problem in tropical and subtropical countries and has a presentation similar to other febrile illnesses. Since laboratory confirmation is frequently delayed, the majority of dengue cases are diagnosed based on symptoms. The objective of this study was to identify clinical, hematological and demographic parameters that could be used as predictors of dengue fever among patients with febrile illness. We conducted a retrospective cohort study of 548 patients presenting with febrile syndrome to the largest public hospitals in Honduras. Patients' clinical, laboratory, and demographic data, as well as dengue laboratory detection by either serology or viral isolation, were used to build a predictive statistical model to identify dengue cases. Of 548 patients, 390 were confirmed with dengue infection while 158 had negative results. Univariable analysis revealed seven variables associated with dengue: male sex, petechiae, skin rash, myalgia, retro-ocular pain, positive tourniquet test, and gingival bleeding. In multivariable logistic regression analysis, retro-ocular pain, petechiae and gingival bleeding were associated with increased risk, while epistaxis and paleness of skin were associated with reduced risk of dengue. Using a value of 0.6 (i.e., 60% probability for a case to be positive based on the equation values), our model had a sensitivity of 86.2%, a specificity of 27.2%, and an overall accuracy of 69.2%; allowing for the diagnosis of dengue to be ruled out and for other febrile conditions to be investigated. Among Honduran patients presenting with febrile illness, our analysis identified key symptoms associated with dengue fever; however, the overall accuracy of our model was still low and specificity remains a concern. Our model requires validation in other populations with a similar pattern of dengue transmission.
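The classification step in this record — scoring patients with a logistic model and calling a case positive above a 60% probability cut-off, then reporting sensitivity and specificity — can be sketched on simulated data. The symptom coefficients and prevalence below are invented for illustration; they are not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Illustrative binary symptom indicators (coefficients below are made up).
retro_ocular = rng.integers(0, 2, n)
petechiae = rng.integers(0, 2, n)
gingival = rng.integers(0, 2, n)

logit = -0.4 + 1.2 * retro_ocular + 1.0 * petechiae + 0.8 * gingival
p_dengue = 1 / (1 + np.exp(-logit))            # modeled probability of dengue
dengue = rng.random(n) < p_dengue              # simulated true labels

pred = p_dengue >= 0.6                         # 60% probability cut-off, as in the study
tp = np.sum(pred & dengue)
tn = np.sum(~pred & ~dengue)
sens = tp / dengue.sum()                       # fraction of true cases flagged
spec = tn / (~dengue).sum()                    # fraction of non-cases cleared
print(round(float(sens), 3), round(float(spec), 3))
```

Moving the cut-off trades sensitivity against specificity, which is exactly the tension the abstract reports (high sensitivity, low specificity at 0.6).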

  5. Evaluation of the US Army fallout prediction model

    International Nuclear Information System (INIS)

    Pernick, A.; Levanon, I.

    1987-01-01

    The US Army fallout prediction method was evaluated against an advanced fallout prediction model, SIMFIC (Simplified Fallout Interpretive Code). The danger zone areas of the US Army method were found to be significantly greater (up to a factor of 8) than the areas of corresponding radiation hazard as predicted by SIMFIC. Nonetheless, because the US Army method predicts danger zone lengths that are commonly shorter than the corresponding hot line distances of SIMFIC, the US Army method is not reliably conservative.

  6. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks Model (first and second versions), the Stewart Model (first and second versions) and the Hall–Butcher Model. Three sets of ...

  7. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks Model (first and second versions), the Stewart Model (first and second versions) and the Hall–Butcher Model. Three sets of cowpea yield–water use and weather data were collected.

  8. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII) are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...

  9. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of

  10. Ocean wave prediction using numerical and neural network models

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    This paper presents an overview of the development of the numerical wave prediction models and recently used neural networks for ocean wave hindcasting and forecasting. The numerical wave models express the physical concepts of the phenomena...

  11. A Prediction Model of the Capillary Pressure J-Function.

    Directory of Open Access Journals (Sweden)

    W S Xu

    Full Text Available The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model is presented here based on a capillary pressure model, and the J-function prediction model is a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and more representative results.
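A power-function J-function of the kind this record advocates, J(Sw) = a·Sw^(−b), becomes linear in log-log space, so its parameters can be recovered by ordinary least squares. The data below are synthetic and the values of a and b are invented; the sketch only illustrates the fitting idea, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic capillary-pressure data following J(Sw) = a * Sw**(-b) with
# multiplicative noise; a_true and b_true are made-up illustration values.
a_true, b_true = 0.25, 1.4
sw = np.linspace(0.2, 0.9, 30)
j = a_true * sw ** (-b_true) * np.exp(rng.normal(0, 0.02, sw.size))

# Fit in log-log space: log J = log a - b * log Sw.
A = np.column_stack([np.ones(sw.size), np.log(sw)])
coef, *_ = np.linalg.lstsq(A, np.log(j), rcond=None)
a_hat, b_hat = np.exp(coef[0]), -coef[1]
print(round(float(a_hat), 3), round(float(b_hat), 3))
```

An exponential or polynomial form would not linearize this way, which is one practical argument for the power-function parameterization.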

  12. HCV kinetic and modeling analyses indicate similar time to cure among sofosbuvir combination regimens with daclatasvir, simeprevir or ledipasvir.

    Science.gov (United States)

    Dahari, Harel; Canini, Laetitia; Graw, Frederik; Uprichard, Susan L; Araújo, Evaldo S A; Penaranda, Guillaume; Coquet, Emilie; Chiche, Laurent; Riso, Aurelie; Renou, Christophe; Bourliere, Marc; Cotler, Scott J; Halfon, Philippe

    2016-06-01

    Recent clinical trials of direct-acting antiviral agents (DAAs) against hepatitis C virus (HCV) achieved >90% sustained virological response (SVR) rates, suggesting that cure often took place before the end of treatment (EOT). We sought to evaluate retrospectively whether early response kinetics can provide the basis to individualize therapy to achieve optimal results while reducing duration and cost. Fifty-eight chronic HCV patients were treated with 12-week sofosbuvir+simeprevir (n=19), sofosbuvir+daclatasvir (n=19), or sofosbuvir+ledipasvir in three French referral centers. HCV was measured at baseline, day 2, every other week, EOT and 12 weeks post-EOT. Mathematical modeling was used to predict the time to cure, i.e., <1 virus copy in the entire extracellular body fluid. All but one patient, who relapsed, achieved SVR. Mean age was 60±11 years, 53% were male, 86% HCV genotype-1, 9% HIV coinfected, 43% advanced fibrosis (F3), and 57% had cirrhosis. At weeks 2, 4 and 6, 48%, 88% and 100% of patients had HCV <15 IU/ml, with 27%, 74% and 91% of observations having target not detected, respectively. Modeling predicted that 23 (43%), 16 (30%), 7 (13%), 5 (9%) and 3 (5%) subjects would reach cure within 6, 8, 10, 12 and 13 weeks of therapy, respectively. The modeling suggested that the patient who relapsed would have benefitted from an additional week of sofosbuvir+ledipasvir. Adjusting duration of treatment according to the modeling predicts reduced medication costs of 43-45% and 17-30% in subjects who had HCV <15 IU/ml at weeks 2 and 4, respectively. The use of early viral kinetic analysis has the potential to individualize duration of DAA therapy with a projected average cost saving of 16-20% per 100 treated persons. Copyright © 2016 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
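The "time to cure" criterion in this record — fewer than one virus copy in the entire extracellular body fluid — can be illustrated with a deliberately simplified single-slope decline model. The baseline viral load, decline rate and fluid volume below are illustrative assumptions, not the paper's fitted patient parameters (the paper used a richer viral kinetic model).

```python
import numpy as np

# Single-slope simplification: V(t) = V0 * exp(-lam * t), with V in IU/ml
# treated as a proxy for copies/ml. All three numbers are invented.
v0 = 1e6            # baseline viral load, IU/ml
lam = 0.35          # on-treatment decline rate, 1/day
fluid_ml = 15000.0  # assumed extracellular fluid volume, ml (~15 L)

total0 = v0 * fluid_ml                 # total virions at treatment start
t_cure_days = np.log(total0) / lam     # solve total0 * exp(-lam * t) = 1
print(round(float(t_cure_days / 7), 1), "weeks")
```

The logic shows why cure can precede EOT: once the decline rate is estimated from early measurements, extrapolating to the sub-one-copy threshold yields an individualized stopping time.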

  13. The Impact of Protein Structure and Sequence Similarity on the Accuracy of Machine-Learning Scoring Functions for Binding Affinity Prediction.

    Science.gov (United States)

    Li, Hongjian; Peng, Jiangjun; Leung, Yee; Leung, Kwong-Sak; Wong, Man-Hon; Lu, Gang; Ballester, Pedro J

    2018-03-14

    It has recently been claimed that the outstanding performance of machine-learning scoring functions (SFs) is exclusively due to the presence of training complexes with highly similar proteins to those in the test set. Here, we revisit this question using 24 similarity-based training sets, a widely used test set, and four SFs. Three of these SFs employ machine learning instead of the classical linear regression approach of the fourth SF (X-Score, which has the best test set performance out of 16 classical SFs). We have found that random forest (RF)-based RF-Score-v3 outperforms X-Score even when 68% of the most similar proteins are removed from the training set. In addition, unlike X-Score, RF-Score-v3 is able to keep learning with an increasing training set size, becoming substantially more predictive than X-Score when the full 1105 complexes are used for training. These results show that machine-learning SFs owe a substantial part of their performance to training on complexes with dissimilar proteins to those in the test set, contrary to what was previously concluded using the same data. Given that a growing amount of structural and interaction data will be available from academic and industrial sources, this performance gap between machine-learning SFs and classical SFs is expected to enlarge in the future.

  14. Accuracy of some simple models for predicting particulate interception and retention in agricultural systems

    International Nuclear Information System (INIS)

    Pinder, J.E. III; McLeod, K.W.; Adriano, D.C.

    1989-01-01

    The accuracy of three radionuclide transfer models for predicting the interception and retention of airborne particles by agricultural crops was tested using Pu-bearing aerosols released to the atmosphere from nuclear fuel facilities on the U.S. Department of Energy's Savannah River Plant, near Aiken, SC. The models evaluated were: (1) NRC, the model defined in U.S. Nuclear Regulatory Guide 1.109; (2) FOOD, a model similar to the NRC model that also predicts concentrations in grains; and (3) AGNS, a model developed from the NRC model for the southeastern United States. Plutonium concentrations in vegetation and grain were predicted from measured deposition rates and compared to concentrations observed in the field. Crops included wheat, soybeans, corn and cabbage. Although predictions of the three models differed by less than a factor of 4, they showed different abilities to predict concentrations observed in the field. The NRC and FOOD models consistently underpredicted the observed Pu concentrations for vegetation. The AGNS model was a more accurate predictor of Pu concentrations for vegetation. Both the FOOD and AGNS models accurately predicted the Pu concentrations for grains.

  15. Statistical model based gender prediction for targeted NGS clinical panels

    Directory of Open Access Journals (Sweden)

    Palani Kannan Kandavel

    2017-12-01

    The reference test dataset is used to test the model. The sensitivity of gender prediction has been increased relative to the current “genotype composition in ChrX”-based approach. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset. A higher prediction score towards the respective gender indicates higher quality of the sequenced data.

  16. comparative analysis of two mathematical models for prediction

    African Journals Online (AJOL)

    Abstract. Mathematical modeling for prediction of the compressive strength of sandcrete blocks was performed using statistical analysis of the sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of ...

  17. Comparison of predictive models for the early diagnosis of diabetes

    NARCIS (Netherlands)

    M. Jahani (Meysam); M. Mahdavi (Mahdi)

    2016-01-01

    textabstractObjectives: This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. Methods: We used memetic algorithms to update weights and to improve

  18. Testing and analysis of internal hardwood log defect prediction models

    Science.gov (United States)

    R. Edward. Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  19. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters (1992–2012). There are six ...
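The HMM machinery behind records like this one can be sketched with the standard forward algorithm, which scores an observation sequence under hidden weather states. The two-state setup, transition/emission matrices and binned observations below are all invented toys, not the calibrated snowfall model.

```python
import numpy as np

# Toy two-state HMM ("snow regime" vs "dry regime"); all numbers invented.
pi = np.array([0.3, 0.7])                 # initial state probabilities
A = np.array([[0.6, 0.4],                 # state transition matrix
              [0.2, 0.8]])
B = np.array([[0.1, 0.4, 0.5],            # emission probs over 3 observation bins
              [0.7, 0.2, 0.1]])

obs = [2, 2, 1, 0, 0]                     # e.g. a binned meteorological variable

# Forward algorithm: alpha[i] accumulates P(obs so far, current state = i).
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
likelihood = alpha.sum()                  # P(obs sequence | model)
print(likelihood)
```

In a forecasting setting, the same recursion run over candidate future states is what lets an HMM assign probabilities to snowfall categories a couple of days ahead.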

  20. Demonstrating the improvement of predictive maturity of a computational model

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M [Los Alamos National Laboratory; Unal, Cetin [Los Alamos National Laboratory; Atamturktur, Huriye S [CLEMSON UNIV.

    2010-01-01

    We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model to predict multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as a basis to define a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.
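The two ingredients named here, discrepancy (systematic bias between measurements and predictions) and coverage (how much of the regime of applicability the calibration experiments populate), can each be computed in a few lines. Everything below is an invented stand-in: the data, the bias, the 1-D "regime" interval and the binning scheme are illustrative, and this is not the paper's PMI formula.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented stand-ins for measured responses and model predictions with a
# systematic +3 bias (think Hopkinson-bar-like stresses, units arbitrary).
measured = rng.normal(100.0, 5.0, 50)
predicted = measured + 3.0 + rng.normal(0, 1.0, 50)

discrepancy = np.mean(predicted - measured)   # systematic disagreement (bias)

# Coverage: fraction of a 1-D regime of applicability (e.g. a strain-rate
# interval) populated by the calibration experiment settings.
regime = (80.0, 120.0)
settings = rng.uniform(85.0, 105.0, 50)       # where experiments were run
bins = np.linspace(*regime, 11)               # 10 equal bins over the regime
coverage = np.count_nonzero(np.histogram(settings, bins)[0] > 0) / 10
print(round(float(discrepancy), 2), coverage)
```

The hypothesis in the abstract then reads naturally in these terms: better physics should shrink the discrepancy, while additional experiments raise the coverage, and a maturity index can combine the two.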

  1. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of

  2. Refining the committee approach and uncertainty prediction in hydrological modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of

  3. Wind turbine control and model predictive control for uncertain systems

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz

    as disturbance models for controller design. The theoretical study deals with Model Predictive Control (MPC). MPC is an optimal control method which is characterized by the use of a receding prediction horizon. MPC has risen in popularity due to its inherent ability to systematically account for time...

  4. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for the prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters (1992–2012). There are six ...

  5. Model predictive control of a 3-DOF helicopter system using ...

    African Journals Online (AJOL)

    ... by simulation, and its performance is compared with that achieved by linear model predictive control (LMPC). Keywords: nonlinear systems, helicopter dynamics, MIMO systems, model predictive control, successive linearization. International Journal of Engineering, Science and Technology, Vol. 2, No. 10, 2010, pp. 9-19 ...

  6. Models for predicting fuel consumption in sagebrush-dominated ecosystems

    Science.gov (United States)

    Clinton S. Wright

    2013-01-01

    Fuel consumption predictions are necessary to accurately estimate or model fire effects, including pollutant emissions during wildland fires. Fuel and environmental measurements on a series of operational prescribed fires were used to develop empirical models for predicting fuel consumption in big sagebrush (Artemisia tridentata Nutt.) ecosystems....

  7. Comparative Analysis of Two Mathematical Models for Prediction of ...

    African Journals Online (AJOL)

    Mathematical modeling for the prediction of the compressive strength of sandcrete blocks was performed using statistical analysis of sandcrete block data obtained from experimental work done in this study. The models used are based on Scheffe's and Osadebe's optimization theories to predict the compressive strength of sandcrete ...

  8. A mathematical model for predicting earthquake occurrence ...

    African Journals Online (AJOL)

    We consider the continental crust under damage. We use the observed results of microseism in many seismic stations of the world which was established to study the time series of the activities of the continental crust with a view to predicting possible time of occurrence of earthquake. We consider microseism time series ...

  9. Model for predicting the injury severity score.

    Science.gov (United States)

    Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi

    2015-07-01

    To determine the formula that predicts the injury severity score from parameters that are obtained in the emergency department at arrival. We reviewed the medical records of trauma patients who were transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products, were examined in those patients on arrival. To determine the formula that predicts the injury severity score, multiple linear regression analysis was carried out. The injury severity score was set as the dependent variable, and the other parameters were set as candidate objective variables. IBM SPSS Statistics 20 was used for the statistical analysis. Statistical significance was set at P < 0.05. The Durbin-Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was developed with ordinary parameters such as fibrin degradation products and mean blood pressure. This formula is useful because we can predict the injury severity score easily in the emergency department.

  10. Econometric models for predicting confusion crop ratios

    Science.gov (United States)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly for winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for the CD/CRD data introduced measurement error into the CD/CRD models.

  11. Markov chain-based promoter structure modeling for tissue-specific expression pattern prediction.

    Science.gov (United States)

    Vandenbon, Alexis; Miyamoto, Yuki; Takimoto, Noriko; Kusakabe, Takehiro; Nakai, Kenta

    2008-02-29

    Transcriptional regulation is the first level of regulation of gene expression and is therefore a major topic in computational biology. Genes with similar expression patterns can be assumed to be co-regulated at the transcriptional level by promoter sequences with a similar structure. Current approaches for modeling shared regulatory features tend to focus mainly on clustering of cis-regulatory sites. Here we introduce a Markov chain-based promoter structure model that uses both shared motifs and shared features from an input set of promoter sequences to predict candidate genes with similar expression. The model uses positional preference, order, and orientation of motifs. The trained model is used to score a genomic set of promoter sequences: high-scoring promoters are assumed to have a structure similar to the input sequences and are thus expected to drive similar expression patterns. We applied our model on two datasets in Caenorhabditis elegans and in Ciona intestinalis. Both computational and experimental verifications indicate that this model is capable of predicting candidate promoters driving expression patterns similar to the input regulatory sequences. This model can be useful for finding promising candidate genes for wet-lab experiments and for increasing our understanding of transcriptional regulation.
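
    The core scoring idea described above, train a transition model on an input promoter set and rank genomic sequences by their likelihood ratio against a background, can be sketched as follows. This is a minimal first-order illustration under assumed toy data, not the authors' implementation, which additionally models motif positional preference, order, and orientation.

```python
import math
from collections import defaultdict

def train_markov(seqs, alphabet="ACGT", pseudo=1.0):
    """Estimate first-order transition log-probabilities with pseudocounts."""
    counts = defaultdict(lambda: defaultdict(float))
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1.0
    model = {}
    for a in alphabet:
        total = sum(counts[a][b] + pseudo for b in alphabet)
        model[a] = {b: math.log((counts[a][b] + pseudo) / total) for b in alphabet}
    return model

def score(seq, foreground, background):
    """Log-likelihood ratio of a sequence under the trained chain vs a background
    chain: high scores suggest a structure similar to the input promoter set."""
    return sum(foreground[a][b] - background[a][b] for a, b in zip(seq, seq[1:]))
```

    In this sketch, a genome-wide scan would train the foreground chain on the co-expressed promoters, the background chain on all promoters, and keep the top-scoring candidates.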

  12. A Global Model for Regional Phase Amplitude Prediction

    Science.gov (United States)

    Phillips, W. S.; Fisk, M. D.; Stead, R. J.; Begnaud, M. L.; Yang, X.; Ballard, S.; Rautian, T. G.

    2013-12-01

    We use two-dimensional (2-D) models of regional phase attenuation, and absolute site effects, to predict amplitudes for use in high frequency discrimination and yield estimation schemes. We have shown that 2-D corrections reduce scatter in P/S ratios, and thus improve discrimination power. This is especially important for intermediate frequencies (2-6 Hz), which travel further than the higher frequencies that are typically used for discrimination. Previous work has focused on national priorities; however, for use by the international community, attenuation and site models must cover as much of the globe as possible. New amplitude quality control (QC) methods facilitate this effort. The most important step is to cluster events spatially, take ratios to remove path and site effects, and require the relative amplitudes to match predictions from an earthquake source model with variable moment and corner frequency. Data can then be stacked to form summary amplitudes for each cluster. We perform similar QC and stacking operations for multiple channels at each station, and for closely spaced stations. Data are inverted using a simultaneous multi-band, multi-phase approach that employs absolute spectral constraints on well-studied earthquakes. Global parameterization is obtained using publicly available GeoTess software that allows for variable grid spacing. Attenuation results show remarkable, high-resolution correlation with regional geology and heat flow. Our data set includes regional explosion amplitudes from many sources, including LLNL and Leo Brady data for North America, and Borovoye Archive and ChISS data for Asia. We see dramatic improvement in high frequency P/S discrimination, worldwide, after correcting for 2-D path and site effects.

  13. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
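
    The two scalar performance measures used in this comparison can be computed in a few lines. The sketch below is a generic implementation of the concordance index and Brier score for binary outcomes, not the study's own code; the example data are hypothetical.

```python
def brier_score(y_true, y_prob):
    """Mean squared difference between binary outcomes (0/1) and predicted
    probabilities; lower is better."""
    return sum((y - p) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

def concordance_index(y_true, y_prob):
    """Fraction of (event, non-event) pairs in which the event received the
    higher predicted probability; ties count as half. Higher is better."""
    events = [p for y, p in zip(y_true, y_prob) if y == 1]
    nonevents = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum(1.0 if pe > pn else 0.5 if pe == pn else 0.0
               for pe in events for pn in nonevents)
    return wins / (len(events) * len(nonevents))
```

    Comparing a full-covariate model with a propensity-score model then reduces to computing both measures on the same held-out predictions.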

  14. PEEX Modelling Platform for Seamless Environmental Prediction

    Science.gov (United States)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

    The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modeling platform and observation systems in the Pan-Eurasian region and presents the future baselines for the coherent and coordinated research infrastructures in the PEEX domain. The PEEX Modelling Platform is characterized by a complex seamless integrated Earth System Modeling (ESM) approach, in combination with specific models of different processes and elements of the system, acting on different temporal and spatial scales. The ensemble approach is taken to the integration of modeling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modeling, and modeling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modeling. The analyses of the anticipated large volumes of data produced by available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.

  15. Predictive models of poly(ethylene-terephthalate) film degradation under multi-factor accelerated weathering exposures.

    Science.gov (United States)

    Gok, Abdulkerim; Ngendahimana, David K; Fagerholm, Cara L; French, Roger H; Sun, Jiayang; Bruckman, Laura S

    2017-01-01

    Accelerated weathering exposures were performed on poly(ethylene-terephthalate) (PET) films. Longitudinal multi-level predictive models as a function of PET grades and exposure types were developed for the change in yellowness index (YI) and haze (%). Exposures with similar change in YI were modeled using a linear fixed-effects modeling approach. Due to the complex nature of haze formation, measurement uncertainty, and the differences in the samples' responses, the change in haze (%) depended on individual samples' responses and a linear mixed-effects modeling approach was used. When compared to fixed-effects models, the addition of random effects in the haze formation models significantly increased the variance explained. For both modeling approaches, diagnostic plots confirmed independence and homogeneity with normally distributed residual errors. Predictive R2 values for true prediction error and predictive power of the models demonstrated that the models were not subject to over-fitting. These models enable prediction under pre-defined exposure conditions for a given exposure time (or photo-dosage in the case of UV light exposure). PET degradation under cyclic exposures combining UV light and condensing humidity is caused by photolytic and hydrolytic mechanisms causing yellowing and haze formation. Quantitative knowledge of these degradation pathways enables cross-correlation of these lab-based exposures with real-world conditions for service life prediction.

  16. Models Predicting Success of Infertility Treatment: A Systematic Review

    Science.gov (United States)

    Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi

    2016-01-01

    Background: Infertile couples are faced with problems that affect their marital life. Infertility treatment is expensive and time consuming and occasionally is simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because prediction of treatment success is a new need for infertile couples, this paper reviewed previous studies to capture a general picture of the applicability of the models. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched based on WHO definitions and MeSH keywords. Papers about prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. Papers covered years after 1986, and studies were designed both retrospectively and prospectively. IVF prediction models accounted for the largest share of the papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be clinically applied if it can be statistically evaluated and has good validation for treatment success. To achieve better results, physicians and couples need an estimate of the treatment success rate based on history, examination, and clinical tests. Models must be checked for theoretical approach and appropriate validation. The benefits of applying prediction models are decreased cost and time, avoidance of painful treatment for patients, assessment of treatment approaches for physicians, and support for decision making by health managers. The selection of an appropriate approach for designing and using these models is therefore essential. PMID:27141461

  17. Towards a generalized energy prediction model for machine tools.

    Science.gov (United States)

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H; Dornfeld, David A; Helu, Moneer; Rachuri, Sudarsan

    2017-04-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process.
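
    A toy sketch of the GP regression idea mentioned above: the mean prediction at a new input is a kernel-weighted combination of training targets. This one-dimensional, stdlib-only illustration with an RBF kernel is an assumption-laden simplification; the paper's model spans multiple process parameters and also reports uncertainty intervals.

```python
import math

def rbf(x1, x2, length=1.0, var=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return var * math.exp(-(x1 - x2) ** 2 / (2 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_new, noise=1e-6):
    """GP posterior mean at x_new: k(x_new, X) @ (K + noise*I)^-1 @ y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(rbf(x_new, a) * w for a, w in zip(xs, alpha))
```

    With a small noise term the prediction interpolates the training data; in a real setting the kernel hyperparameters would be fit by maximizing the marginal likelihood.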

  18. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. The analysis and application of Poisson mixture regression models are addressed here under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the heart disease rate component-wise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks component-wise using a Poisson mixture regression model. PMID:27999611

  19. Comparison of Predictive Models for the Early Diagnosis of Diabetes.

    Science.gov (United States)

    Jahani, Meysam; Mahdavi, Mahdi

    2016-04-01

    This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. We used memetic algorithms to update weights and to improve the prediction accuracy of the models. In the first step, optimum values for neural network parameters such as the momentum rate, transfer function, and error function were obtained through trial and error and based on the results of previous studies. In the second step, the optimum parameters were applied to memetic algorithms in order to improve the accuracy of prediction. This preliminary analysis showed that the accuracy of the neural networks is 88%. In the third step, the accuracy of the neural network models was improved using a memetic algorithm, and the resulting model was compared with a logistic regression model using a confusion matrix and receiver operating characteristic (ROC) curve. The memetic algorithm improved the accuracy from 88.0% to 93.2%. We also found that the memetic algorithm had a higher accuracy than the model from the genetic algorithm and a regression model. Among the models, the regression model has the least accuracy. For the memetic algorithm model, the sensitivity, specificity, positive predictive value, negative predictive value, and area under the ROC curve are 96.2%, 95.3%, 93.8%, 92.4%, and 0.958, respectively. The results of this study provide a basis to design a decision support system for risk management and planning of care for individuals at risk of diabetes.
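
    The four diagnostic measures reported above all follow directly from confusion-matrix counts. A minimal sketch, with hypothetical counts used only for illustration:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic measures from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

    For example, with 90 true positives, 10 false positives, 80 true negatives and 20 false negatives, the PPV is 90/100 = 0.9 and the sensitivity is 90/110.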

  20. Function Modelling Of The Market And Assessing The Degree Of Similarity Between Real Properties - Dependent Or Independent Procedures In The Process Of Office Property Valuation

    Directory of Open Access Journals (Sweden)

    Barańska Anna

    2015-09-01

    Full Text Available Referring to the two-stage algorithm for real estate valuation developed and presented in previous publications (e.g. Barańska 2011), this article addresses the problem of the relationship between the two stages of the algorithm. An essential part of the first stage is the multi-dimensional function modelling of the real estate market. As a result of selecting the model best fitted to the market data, in which the dependent variable is always the price of a real property, a set of market attributes is obtained, which in this model are considered to be price-determining. In the second stage, from the collection of real estate which served as a database in the process of estimating model parameters, the selected objects are those which are most similar to the one subject to valuation and form the basis for predicting the final value of the property being valued. Assessing the degree of similarity between real properties can be carried out based on the full spectrum of real estate attributes that potentially affect their value and which it is possible to gather information about, or only on the basis of those attributes which were considered to be price-determining in function modelling. It can also be performed by various methods. This article has examined the effect of various approaches on the final value of the property obtained using the two-stage prediction. In order to fulfill the study aim as precisely as possible, the results of each calculation step of the algorithm have been investigated in detail. Each of them points to the independence of the two procedures.

  1. Applications of modeling in polymer-property prediction

    Science.gov (United States)

    Case, F. H.

    1996-08-01

    A number of molecular modeling techniques have been applied for the prediction of polymer properties and behavior. Five examples illustrate the range of methodologies used. A simple atomistic simulation of small polymer fragments is used to estimate drug compatibility with a polymer matrix. The analysis of molecular dynamics results from a more complex model of a swollen hydrogel system is used to study gas diffusion in contact lenses. Statistical mechanics is used to predict conformation-dependent properties; an example is the prediction of liquid-crystal formation. The effect of the molecular weight distribution on phase separation in polyalkanes is predicted using thermodynamic models. In some cases, the properties of interest cannot be directly predicted using simulation methods or polymer theory. Correlation methods may be used to bridge the gap between molecular structure and macroscopic properties. The final example shows how connectivity-indices-based quantitative structure-property relationships were used to predict properties for candidate polyimides in an electronics application.

  2. Artificial Neural Network Model for Predicting Compressive

    OpenAIRE

    Salim T. Yousif; Salwa M. Abdullah

    2013-01-01

    Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum...

  3. The PickPocket method for predicting binding specificities for receptors based on receptor pocket similarities: application to MHC-peptide binding

    DEFF Research Database (Denmark)

    Zhang, H.; Lund, Ole; Nielsen, M.

    2009-01-01

    Motivation: Receptor-ligand interactions play an important role in controlling many biological systems. One prominent example is the binding of peptides to the major histocompatibility complex (MHC) molecules controlling the onset of cellular immune responses. Thousands of MHC allelic versions exist, making determination of the binding specificity for each variant experimentally infeasible. Here, we present a method that can extrapolate from variants with known binding specificity to those where no experimental data are available. Results: For each position in the peptide ligand, we extracted ... of the specificities of MHC molecules in this library weighted by the similarity of their pocket-residues to the query. This PickPocket method is demonstrated to accurately predict MHC-peptide binding for a broad range of MHC alleles, including human and non-human species. In contrast to neural network-based pan-specific ...

  4. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and selection of a model for the current time provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model prediction framework for the prediction of solar radiation is proposed. The framework starts with the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying patterns, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. A procedure for pattern identification is then developed to identify the pattern that best fits the current period. Based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction results of the proposed framework are then compared to other techniques. It is shown that the proposed framework provides superior performance compared to the others.
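
    The segment-cluster-predict pipeline can be illustrated with a deliberately tiny sketch: slice the series into fixed-length windows, cluster them with two nearest-centroid groups, and fit a per-cluster predictor (here just the mean next value). All of this is a toy under assumed data, far simpler than the paper's framework, but it shows the structure of the approach.

```python
def dist(a, b):
    """Squared Euclidean distance between two equal-length subsequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def fit_clustered_predictor(series, w=3, iters=10):
    """Segment a series into length-w windows, cluster them into two groups
    (k=2 nearest-centroid), and fit a per-cluster mean-next-value predictor."""
    data = [(series[i:i + w], series[i + w]) for i in range(len(series) - w)]
    X = [x for x, _ in data]
    y = [t for _, t in data]
    centroids = [list(X[0]), list(X[-1])]  # naive seeding from the two ends
    labels = [0] * len(X)
    for _ in range(iters):
        labels = [min((0, 1), key=lambda c: dist(x, centroids[c])) for x in X]
        for c in (0, 1):
            members = [x for x, l in zip(X, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    preds = {}
    for c in (0, 1):
        targets = [t for t, l in zip(y, labels) if l == c]
        preds[c] = sum(targets) / len(targets) if targets else sum(y) / len(y)
    return centroids, preds

def predict_next(x, centroids, preds):
    """Route a new window to its nearest cluster and apply that cluster's model."""
    c = min((0, 1), key=lambda i: dist(x, centroids[i]))
    return preds[c]
```

    In the paper's setting the per-cluster predictor would be a trained model rather than a mean, and the pattern-identification step decides which cluster the current period belongs to.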

  5. The perfectionism model of binge eating: testing unique contributions, mediating mechanisms, and cross-cultural similarities using a daily diary methodology.

    Science.gov (United States)

    Sherry, Simon B; Sabourin, Brigitte C; Hall, Peter A; Hewitt, Paul L; Flett, Gordon L; Gralnick, Tara M

    2014-12-01

    The perfectionism model of binge eating (PMOBE) is an integrative model explaining the link between perfectionism and binge eating. This model proposes socially prescribed perfectionism confers risk for binge eating by generating exposure to 4 putative binge triggers: interpersonal discrepancies, low interpersonal esteem, depressive affect, and dietary restraint. The present study addresses important gaps in knowledge by testing if these 4 binge triggers uniquely predict changes in binge eating on a daily basis and if daily variations in each binge trigger mediate the link between socially prescribed perfectionism and daily binge eating. Analyses also tested if proposed mediational models generalized across Asian and European Canadians. The PMOBE was tested in 566 undergraduate women using a 7-day daily diary methodology. Depressive affect predicted binge eating, whereas anxious affect did not. Each binge trigger uniquely contributed to binge eating on a daily basis. All binge triggers except for dietary restraint mediated the relationship between socially prescribed perfectionism and change in daily binge eating. Results suggested cross-cultural similarities, with the PMOBE applying to both Asian and European Canadian women. The present study advances understanding of the personality traits and the contextual conditions accompanying binge eating and provides an important step toward improving treatments for people suffering from eating binges and associated negative consequences.

  6. Posterior Predictive Model Checking for Multidimensionality in Item Response Theory

    Science.gov (United States)

    Levy, Roy; Mislevy, Robert J.; Sinharay, Sandip

    2009-01-01

    If data exhibit multidimensionality, key conditional independence assumptions of unidimensional models do not hold. The current work pursues posterior predictive model checking, a flexible family of model-checking procedures, as a tool for criticizing models due to unaccounted for dimensions in the context of item response theory. Factors…

  7. Model predictive control of a crude oil distillation column

    Directory of Open Access Journals (Sweden)

    Morten Hovd

    1999-04-01

    Full Text Available The project of designing and implementing model based predictive control on the vacuum distillation column at the Nynäshamn Refinery of Nynäs AB is described in this paper. The paper describes in detail the modeling for the model based control, covers the controller implementation, and documents the benefits gained from the model based controller.

  8. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    Science.gov (United States)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to reliance on prediction from one model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to get reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction has better performance than global-based BMA (BMA_G) prediction, which is superior to the ensemble mean prediction. Additionally, high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
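
    The deterministic BMA combination described above reduces to weighting each ensemble member by how well it fits the training data and averaging the member predictions. The sketch below assumes Gaussian errors with each model's own in-sample variance; this is a strong simplification of full BMA (which estimates weights and variances jointly, typically via EM), intended only to show the mechanics.

```python
import math

def bma_weights(residuals_by_model):
    """Approximate posterior model weights from per-model training residuals,
    assuming Gaussian errors (a simplified stand-in for full BMA)."""
    n = len(residuals_by_model[0])
    log_lik = [-0.5 * n * math.log(sum(e * e for e in r) / n)
               for r in residuals_by_model]
    m = max(log_lik)
    raw = [math.exp(l - m) for l in log_lik]  # subtract max for numerical stability
    total = sum(raw)
    return [x / total for x in raw]

def bma_predict(predictions, weights):
    """Deterministic BMA prediction: weighted average of member predictions."""
    return sum(w * p for w, p in zip(weights, predictions))
```

    A member with small training residuals dominates the weighted average, while poorly fitting members are down-weighted rather than discarded.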

  9. CADASTER QSPR Models for Predictions of Melting and Boiling Points of Perfluorinated Chemicals.

    Science.gov (United States)

    Bhhatarai, Barun; Teetz, Wolfram; Liu, Tao; Öberg, Tomas; Jeliazkova, Nina; Kochev, Nikolay; Pukalov, Ognyan; Tetko, Igor V; Kovarich, Simona; Papa, Ester; Gramatica, Paola

    2011-03-14

    Quantitative structure-property relationship (QSPR) studies of per- and polyfluorinated chemicals (PFCs) for melting point (MP) and boiling point (BP) are presented. The training and prediction chemicals used for developing and validating the models were selected from the Syracuse PhysProp database and the literature. The available experimental data sets were split in two different ways: (a) random selection on response value, and (b) structural similarity verified by self-organizing map (SOM), in order to propose reliable predictive models, developed only on the training sets and externally verified on the prediction sets. Individual linear and non-linear models developed by different CADASTER partners using 0D-2D Dragon descriptors, E-state descriptors, and fragment-based descriptors, as well as a consensus model, are presented together with their predictions. In addition, the predictive performance of the developed models was verified on a blind external validation set (EV-set) prepared using the PERFORCE database, comprising 15 MP and 25 BP data points, respectively. This database contains only long-chain perfluoroalkylated chemicals, which are particularly monitored by regulatory agencies such as US-EPA and EU-REACH. QSPR models with internal and external validation on two different external prediction/validation sets, and a study of the applicability domain highlighting the robustness and high accuracy of the models, are discussed. Finally, MPs for an additional 303 PFCs and BPs for 271 PFCs, for which experimental measurements are unavailable, were predicted. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Toward Big Data Analytics: Review of Predictive Models in Management of Diabetes and Its Complications.

    Science.gov (United States)

    Cichosz, Simon Lebech; Johansen, Mette Dencker; Hejlesen, Ole

    2015-10-14

    Diabetes is one of the top priorities in medical science and health care management, and an abundance of data and information is available on these patients. Whether data stem from statistical models or complex pattern recognition models, they may be fused into predictive models that combine patient information and prognostic outcome results. Such knowledge could be used in clinical decision support, disease surveillance, and public health management to improve patient care. Our aim was to review the literature and give an introduction to predictive models in screening for and the management of prevalent short- and long-term complications in diabetes. Predictive models have been developed for management of diabetes and its complications, and the number of publications on such models has been growing over the past decade. Often, multiple logistic regression or a similar linear regression method is used for prediction model development, possibly owing to its transparent functionality. Ultimately, for prediction models to prove useful, they must demonstrate impact; that is, their use must generate better patient outcomes. Although extensive effort has been put into building these predictive models, there is a remarkable scarcity of impact studies. © 2015 Diabetes Technology Society.
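The multiple logistic regression models the review mentions ultimately reduce to computing a predicted probability from a linear combination of patient features. A minimal sketch of that scoring step (the intercept, coefficients, and predictor values below are made up for illustration, not from any published diabetes model):

```python
import math

def predicted_risk(intercept, coefs, x):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical coefficients for two predictors (e.g. HbA1c, diabetes duration)
p = predicted_risk(-5.0, [0.5, 0.1], [8.0, 10.0])  # z = 0.0, so p = 0.5
```

The "transparent functionality" the review credits is visible here: each coefficient's contribution to the linear predictor z can be read off directly.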

  11. Aquatic pathways model to predict the fate of phenolic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Aaberg, R.L.; Peloquin, R.A.; Strenge, D.L.; Mellinger, P.J.

    1983-04-01

    Organic materials released from energy-related activities could affect human health and the environment. To better assess possible impacts, we developed a model to predict the fate of spills or discharges of pollutants into flowing or static bodies of fresh water. A computer code, Aquatic Pathways Model (APM), was written to implement the model. The computer programs use compartmental analysis to simulate aquatic ecosystems. The APM estimates the concentrations of chemicals in fish tissue, water and sediment, and is therefore useful for assessing exposure to humans through aquatic pathways. The APM will consider any aquatic pathway for which the user has transport data. Additionally, APM will estimate transport rates from physical and chemical properties of chemicals between several key compartments. The major pathways considered are biodegradation, fish and sediment uptake, photolysis, and evaporation. The model has been implemented with parameters for distribution of phenols, an important class of compounds found in the water-soluble fractions of coal liquids. Current modeling efforts show that, in comparison with many pesticides and polyaromatic hydrocarbons (PAH), the lighter phenolics (the cresols) are not persistent in the environment. The properties of heavier molecular weight phenolics (indanols, naphthols) are not well enough understood at this time to make similar judgements. For the twelve phenolics studied, biodegradation appears to be the major pathway for elimination from aquatic environments. A pond system simulation (using APM) of a spill of solvent refined coal (SRC-II) materials indicates that phenol, cresols, and other single cyclic phenolics are degraded to 16 to 25 percent of their original concentrations within 30 hours. Adsorption of these compounds into sediments and accumulation by fish was minor.
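The dominant biodegradation pathway described above behaves, to first order, like exponential decay between compartments. A minimal sketch of that relationship, using a 20% residual fraction taken from the reported 16-25% range (the rate constant is back-calculated, not a value from the APM study):

```python
import math

def degrade(c0, k, t):
    """First-order biodegradation: C(t) = C0 * exp(-k * t)."""
    return c0 * math.exp(-k * t)

def rate_from_residual(fraction_left, t):
    """Back out the first-order rate constant k from the fraction remaining at time t."""
    return -math.log(fraction_left) / t

# phenol degraded to ~20% of its original concentration within 30 hours
k = rate_from_residual(0.20, 30.0)   # 1/h, roughly 0.054
c = degrade(1.0, k, 30.0)            # recovers the 20% residual fraction
```

A full compartmental model like APM couples several such first-order terms (biodegradation, photolysis, evaporation, sediment and fish uptake) into a system of linear ODEs.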

  12. Predictive models for acute kidney injury following cardiac surgery.

    Science.gov (United States)

    Demirjian, Sevag; Schold, Jesse D; Navia, Jose; Mastracci, Tara M; Paganini, Emil P; Yared, Jean-Pierre; Bashour, Charles A

    2012-03-01

    Accurate prediction of cardiac surgery-associated acute kidney injury (AKI) would improve clinical decision making and facilitate timely diagnosis and treatment. The aim of the study was to develop predictive models for cardiac surgery-associated AKI using presurgical and combined pre- and intrasurgical variables. Prospective observational cohort. 25,898 patients who underwent cardiac surgery at Cleveland Clinic in 2000-2008. Presurgical and combined pre- and intrasurgical variables were used to develop predictive models. Dialysis therapy and a composite of doubling of serum creatinine level or dialysis therapy within 2 weeks (or discharge if sooner) after cardiac surgery. Incidences of dialysis therapy and the composite of doubling of serum creatinine level or dialysis therapy were 1.7% and 4.3%, respectively. Kidney function parameters were strong independent predictors in all 4 models. Surgical complexity reflected by type and history of previous cardiac surgery were robust predictors in models based on presurgical variables. However, the inclusion of intrasurgical variables accounted for all explained variance by procedure-related information. Models predictive of dialysis therapy showed good calibration and superb discrimination; a combined (pre- and intrasurgical) model performed better than the presurgical model alone (C statistics, 0.910 and 0.875, respectively). Models predictive of the composite end point also had excellent discrimination with both presurgical and combined (pre- and intrasurgical) variables (C statistics, 0.797 and 0.825, respectively). However, the presurgical model predictive of the composite end point showed suboptimal calibration (P predictive models in other cohorts is required before wide-scale application. We developed and internally validated 4 new models that accurately predict cardiac surgery-associated AKI. These models are based on readily available clinical information and can be used for patient counseling, clinical

  13. Compensating for literature annotation bias when predicting novel drug-disease relationships through Medical Subject Heading Over-representation Profile (MeSHOP) similarity.

    Science.gov (United States)

    Cheung, Warren A; Ouellette, B F Francis; Wasserman, Wyeth W

    2013-01-01

    Using annotations to the articles in MEDLINE®/PubMed®, over six thousand chemical compounds with pharmacological actions have been tracked since 1996. Medical Subject Heading Over-representation Profiles (MeSHOPs) quantitatively leverage the literature associated with biological entities such as diseases or drugs, providing the opportunity to reposition known compounds towards novel disease applications. A MeSHOP is constructed by counting the number of times each medical subject term is assigned to an entity-related research publication in the MEDLINE database and calculating the significance of the count by comparing against the count of the term in a background set of publications. Based on the expectation that drugs suitable for treatment of a disease (or disease symptom) will have similar annotation properties to the disease, we successfully predict drug-disease associations by comparing MeSHOPs of diseases and drugs. The MeSHOP comparison approach delivers an 11% improvement over bibliometric baselines. However, novel drug-disease associations are observed to be biased towards drugs and diseases with more publications. To account for the annotation biases, a correction procedure is introduced and evaluated. By explicitly accounting for the annotation bias, unexpectedly similar drug-disease pairs are highlighted as candidates for drug repositioning research. MeSHOPs are shown to provide a literature-supported perspective for discovery of new links between drugs and diseases based on pre-existing knowledge.
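The significance calculation behind a MeSHOP, comparing a term's count in entity-linked articles against its background count, can be sketched as a hypergeometric tail probability (the abstract does not specify the exact test, and all counts below are hypothetical):

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k): probability of seeing k or more articles carrying a MeSH term
    among n entity-linked articles, when K of the N background articles carry it."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# hypothetical counts: 100 background articles, 10 carry the term,
# 10 articles are linked to the drug, 5 of those carry the term
p = hypergeom_sf(5, 100, 10, 10)  # far more than the ~1 expected by chance
```

Terms with small p become the over-represented entries of the entity's profile; comparing two profiles (drug vs. disease) then reduces to comparing these significance vectors.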

  14. Modeling number of claims and prediction of total claim amount

    Science.gov (United States)

    Acar, Aslıhan Şentürk; Karabey, Uǧur

    2017-07-01

    In this study we focus on the annual number of claims in a private health insurance data set belonging to a local insurance company in Turkey. In addition to the Poisson and negative binomial models, zero-inflated Poisson and zero-inflated negative binomial models are used to model the number of claims in order to account for excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of total claim amount, the predictive performances of the candidate models are compared using the root mean square error (RMSE) and mean absolute error (MAE) criteria.
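The RMSE and MAE comparison criteria named above are straightforward to compute once each candidate model has produced predicted total claim amounts (the actual and predicted values below are hypothetical):

```python
import math

def rmse(actual, predicted):
    """Root mean square error: penalizes large prediction errors quadratically."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error: average magnitude of the prediction errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# hypothetical total claim amounts vs. one candidate model's predictions
actual = [0.0, 120.0, 40.0, 0.0]
pred = [10.0, 100.0, 50.0, 0.0]
model_rmse = rmse(actual, pred)  # sqrt(150), about 12.25
model_mae = mae(actual, pred)    # 10.0
```

Because RMSE squares the errors, a model that occasionally misses large claims badly can rank worse on RMSE than on MAE, which is why the study reports both.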

  15. Assessment of performance of survival prediction models for cancer prognosis

    Directory of Open Access Journals (Sweden)

    Chen Hung-Chia

    2012-07-01

    Full Text Available Abstract Background Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient’s class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments. Methods We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results A public breast cancer dataset was used to compare several performance metrics using five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers highly depended on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in…
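The positive and negative predictive values that the survival-time threshold approach provides are simple ratios over the confusion counts (the counts below are hypothetical):

```python
def predictive_values(tp, fp, tn, fn):
    """PPV and NPV for the survival-time threshold (binary classifier) approach.

    tp/fp: patients predicted short-survival, correctly/incorrectly
    tn/fn: patients predicted long-survival, correctly/incorrectly
    """
    ppv = tp / (tp + fp)  # fraction of predicted positives that are true positives
    npv = tn / (tn + fn)  # fraction of predicted negatives that are true negatives
    return ppv, npv

# hypothetical counts from classifying patients against a 5-year survival threshold
ppv, npv = predictive_values(tp=30, fp=10, tn=50, fn=10)
```

The abstract's point (3) is visible in this framing: moving the survival threshold reassigns patients between the four cells, so ppv and npv can change sharply.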

  16. Comparative Evaluation of the Predictive Performances of Three Different Structural Population Pharmacokinetic Models To Predict Future Voriconazole Concentrations.

    Science.gov (United States)

    Farkas, Andras; Daroczi, Gergely; Villasurda, Phillip; Dolton, Michael; Nakagaki, Midori; Roberts, Jason A

    2016-11-01

    Bayesian methods for voriconazole therapeutic drug monitoring (TDM) have been reported previously, but there are only sparse reports comparing the accuracy and precision of predictions of published models. Furthermore, the comparative accuracy of linear, mixed linear and nonlinear, or entirely nonlinear models may be of high clinical relevance. In this study, models were coded into individually designed optimum dosing strategies (ID-ODS) with voriconazole concentration data analyzed using inverse Bayesian modeling. The data used were from two independent data sets, patients with proven or suspected invasive fungal infections (n = 57) and hematopoietic stem cell transplant recipients (n = 10). Observed voriconazole concentrations were predicted whereby for each concentration value, the data available to that point were used to predict that value. The mean prediction error (ME) and mean squared prediction error (MSE) and their 95% confidence intervals (95% CI) were calculated to measure absolute bias and precision, while ΔME and ΔMSE and their 95% CI were used to measure relative bias and precision, respectively. A total of 519 voriconazole concentrations were analyzed using three models. MEs (95% CI) were 0.09 (-0.02, 0.22), 0.23 (0.04, 0.42), and 0.35 (0.16 to 0.54) while the MSEs (95% CI) were 2.1 (1.03, 3.17), 4.98 (0.90, 9.06), and 4.97 (-0.54 to 10.48) for the linear, mixed, and nonlinear models, respectively. In conclusion, while simulations with the linear model were found to be slightly more accurate and similarly precise, the small difference in accuracy is likely negligible from the clinical point of view, making all three approaches appropriate for use in a voriconazole TDM program. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
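The bias and precision metrics used in the comparison, mean prediction error (ME) and mean squared prediction error (MSE), can be sketched directly (the concentrations below are hypothetical, not from the study's 519 observations):

```python
def bias_precision(observed, predicted):
    """Mean prediction error (bias) and mean squared prediction error (precision)."""
    errs = [p - o for o, p in zip(observed, predicted)]
    me = sum(errs) / len(errs)          # signed: positive = model overpredicts
    mse = sum(e * e for e in errs) / len(errs)
    return me, mse

obs = [1.2, 3.5, 2.0, 4.1]    # hypothetical observed voriconazole troughs (mg/L)
pred = [1.4, 3.2, 2.3, 4.0]   # Bayesian forecasts made before each observation
me, mse = bias_precision(obs, pred)
```

In the study each prediction used only the concentration data available up to that point, so these metrics measure genuine forecasting rather than in-sample fit.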

  17. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions, identify key reasons why model output may differ and discuss the implications that model uncertainty has for policy-guiding applications. Location The Western Cape of South Africa. Methods We applied nine of the most widely used modelling techniques to model potential distributions under current… algorithm when extrapolating beyond the range of data used to build the model. The effects of these factors should be carefully considered when using this modelling approach to predict species ranges. Main conclusions We highlight an important source of uncertainty in assessments of the impacts of climate…

  18. Prediction Model for Gastric Cancer Incidence in Korean Population.

    Science.gov (United States)

    Eom, Bang Wool; Joo, Jungnam; Kim, Sohee; Shin, Aesun; Yang, Hye-Ryung; Park, Junghyun; Choi, Il Ju; Kim, Young-Woo; Kim, Jeongseon; Nam, Byung-Ho

    2015-01-01

    Predicting high risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender-specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was also validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and the calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the developing and validation cohorts (C-statistics: 0.764 for men, 0.706 for women). In this study, a prediction model for gastric cancer incidence was developed that displayed a good performance.
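Harrell's C-statistic, used above to evaluate discrimination, is the fraction of usable patient pairs in which the patient who develops cancer earlier also carries the higher predicted risk. A minimal sketch for right-censored data (toy values, and simplified relative to production implementations):

```python
def harrells_c(times, events, risks):
    """Harrell's C for censored survival data.

    times: follow-up times; events: 1 = event observed, 0 = censored;
    risks: model risk scores (higher score = higher predicted risk).
    """
    concordant = ties = usable = 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue  # tied times skipped in this simplified sketch
            short, long_ = (i, j) if times[i] < times[j] else (j, i)
            if events[short] == 0:
                continue  # shorter follow-up censored: pair ordering unknown
            usable += 1
            if risks[short] > risks[long_]:
                concordant += 1
            elif risks[short] == risks[long_]:
                ties += 1
    return (concordant + 0.5 * ties) / usable

c = harrells_c([2, 5, 8, 10], [1, 1, 0, 1], [0.9, 0.6, 0.4, 0.2])
```

A C of 0.5 means no discrimination and 1.0 perfect ranking, which puts the study's 0.764 (men) and 0.706 (women) in context.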

  19. AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Kottalanka Srikanth

    2017-07-01

    Full Text Available There has been increasing demand for improved service provisioning in hospital resource management. Hospitals work under strict budget constraints while assuring quality care. To achieve quality care within budget constraints, an efficient prediction model is required. Recently, various time-series-based prediction models have been proposed to manage hospital resources such as ambulance monitoring, emergency care, and so on. These models are not efficient, as they do not consider the nature of the scenario, such as climate conditions. To address this, artificial intelligence is adopted. The issue with existing prediction approaches is that training suffers from the local optima error, which induces overhead and affects prediction accuracy. To overcome the local minima error, this work presents a patient inflow prediction model that adopts a resilient backpropagation neural network. Experiments are conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcome shows that the proposed model reduces RMSE and MAPE compared with the existing backpropagation-based artificial neural network. Overall, the proposed prediction model improves the accuracy of prediction, which aids in improving the quality of health care management.
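Resilient backpropagation (Rprop) differs from plain backpropagation by adapting a per-weight step size from the sign of the gradient only, which helps training escape the flat regions that cause the local-minima stalls mentioned above. A minimal sketch of one update step (this follows the iRprop- variant, chosen here for simplicity; the abstract does not specify which variant was used):

```python
def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One Rprop update per weight, using only gradient signs."""
    new_steps, deltas = [], []
    for g, pg, s in zip(grad, prev_grad, step):
        if g * pg > 0:            # same sign as last step: accelerate
            s = min(s * eta_plus, step_max)
        elif g * pg < 0:          # sign flipped: we overshot, back off
            s = max(s * eta_minus, step_min)
            g = 0.0               # iRprop-: skip the weight update this round
        new_steps.append(s)
        deltas.append(-s if g > 0 else (s if g < 0 else 0.0))
    return deltas, new_steps

# two hypothetical weights: one gradient kept its sign, one flipped
deltas, steps = rprop_step([0.3, -0.2], [0.1, 0.4], [0.05, 0.05])
```

Because the update magnitude is decoupled from the gradient magnitude, tiny gradients on plateaus no longer translate into tiny steps.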

  20. Risk Prediction Model for Severe Postoperative Complication in Bariatric Surgery.

    Science.gov (United States)

    Stenberg, Erik; Cao, Yang; Szabo, Eva; Näslund, Erik; Näslund, Ingmar; Ottosson, Johan

    2018-01-12

    Factors associated with risk for adverse outcome are important considerations in the preoperative assessment of patients for bariatric surgery. As yet, prediction models based on preoperative risk factors have not been able to predict adverse outcome sufficiently. This study aimed to identify preoperative risk factors and to construct a risk prediction model based on these. Patients who underwent a bariatric surgical procedure in Sweden between 2010 and 2014 were identified from the Scandinavian Obesity Surgery Registry (SOReg). Associations between preoperative potential risk factors and severe postoperative complications were analysed using a logistic regression model. A multivariate model for risk prediction was created and validated in the SOReg for patients who underwent bariatric surgery in Sweden in 2015. Revision surgery (standardized OR 1.19, 95% confidence interval (CI) 1.14-0.24) was among the factors entered into the prediction model. Despite high specificity, the sensitivity of the model was low. Revision surgery, high age, low BMI, large waist circumference, and dyspepsia/GERD were associated with an increased risk for severe postoperative complication. The prediction model based on these factors, however, had a sensitivity that was too low to predict risk in the individual patient case.

  1. Prediction Model for Gastric Cancer Incidence in Korean Population.

    Directory of Open Access Journals (Sweden)

    Bang Wool Eom

    Full Text Available Predicting high risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender-specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was also validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and the calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the developing and validation cohorts (C-statistics: 0.764 for men, 0.706 for women). In this study, a prediction model for gastric cancer incidence was developed that displayed a good performance.

  2. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among various stages of breast cancer. Although machine learning models built in past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. To investigate whether there are differences in performance of machine learning models trained and evaluated across different stages for predicting breast cancer survivability. Using three different machine learning methods we built models to predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples of other stages during training did not help, in fact, it made it worse in some cases. The most important features for predicting survivability were also found to be different for different stages. By evaluating the models separately on different stages we found that the performance widely varied across them. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. A simple physical model predicts small exon length variations.

    Directory of Open Access Journals (Sweden)

    2006-04-01

    Full Text Available Among the most common splice variations are small exon length variations caused by the use of alternative donor or acceptor splice sites that are in very close proximity on the pre-mRNA. Among these, three-nucleotide variations at so-called NAGNAG tandem acceptor sites have recently attracted considerable attention, and it has been suggested that these variations are regulated and serve to fine-tune protein forms by the addition or removal of a single amino acid. In this paper we first show that in-frame exon length variations are generally overrepresented and that this overrepresentation can be quantitatively explained by the effect of nonsense-mediated decay. Our analysis allows us to estimate that about 50% of frame-shifted coding transcripts are targeted by nonsense-mediated decay. Second, we show that a simple physical model that assumes that the splicing machinery stochastically binds to nearby splice sites in proportion to the affinities of the sites correctly predicts the relative abundances of different small length variations at both boundaries. Finally, using the same simple physical model, we show that for NAGNAG sites, the difference in affinities of the neighboring sites for the splicing machinery accurately predicts whether splicing will occur only at the first site, only at the second site, or whether three-nucleotide splice variants are likely to occur. Our analysis thus suggests that small exon length variations are the result of stochastic binding of the spliceosome at neighboring splice sites, and that they occur when nearby alternative splice sites have similar affinity for the splicing machinery.
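The physical model described above reduces to a single rule: each nearby acceptor site is selected with probability proportional to its binding affinity. A minimal sketch (the affinity values are hypothetical):

```python
def splice_fractions(affinities):
    """Stochastic binding model: the spliceosome selects each nearby
    splice site with probability proportional to its affinity."""
    total = sum(affinities)
    return [a / total for a in affinities]

# hypothetical affinities for the two NAG acceptors of a NAGNAG site
frac = splice_fractions([3.0, 1.0])  # -> [0.75, 0.25]
```

Under this rule, sites with very unequal affinities yield essentially a single splice form, while similar affinities yield the observed three-nucleotide variants, matching the paper's conclusion.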

  4. Evaluation of wave runup predictions from numerical and parametric models

    Science.gov (United States)

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
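The parameterized runup model compared above is in the style of Stockdon et al. (2006), which predicts 2% exceedance runup from offshore wave height, wave period, and beach slope. A sketch with the commonly quoted coefficients; treat them as illustrative and verify against the original paper before any real use:

```python
import math

def runup_r2(h0, t0, beta, g=9.81):
    """2% exceedance runup from deep-water wave height h0 (m), peak period
    t0 (s), and foreshore beach slope beta: R2 = 1.1 * (setup + swash / 2)."""
    l0 = g * t0 ** 2 / (2.0 * math.pi)          # deep-water wavelength
    setup = 0.35 * beta * math.sqrt(h0 * l0)    # wave setup component
    # swash combines incident-band and infragravity-band contributions
    swash = math.sqrt(h0 * l0 * (0.563 * beta ** 2 + 0.004))
    return 1.1 * (setup + swash / 2.0)

r2 = runup_r2(h0=2.0, t0=10.0, beta=0.08)  # metres, moderate storm example
```

The separate incident and infragravity terms inside the swash expression correspond to the two swash bands whose prediction accuracy the study evaluates independently.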

  5. Interactions of Team Mental Models and Monitoring Behaviors Predict Team Performance in Simulated Anesthesia Inductions

    Science.gov (United States)

    Burtscher, Michael J.; Kolbe, Michaela; Wacker, Johannes; Manser, Tanja

    2011-01-01

    In the present study, we investigated how two team mental model properties (similarity vs. accuracy) and two forms of monitoring behavior (team vs. systems) interacted to predict team performance in anesthesia. In particular, we were interested in whether the relationship between monitoring behavior and team performance was moderated by team…

  6. Hidden Markov model to predict the amino acid profile

    Science.gov (United States)

    Handamari, Endang Wahyu

    2017-12-01

    Sequence alignment is the basic method in sequence analysis: the process of arranging two or more primary sequences so that their similarity is apparent. One use of this method is to predict the structure or function of an unknown protein from the structure or function of a known protein that has a similar sequence in a database. Proteins are macromolecules that make up more than half of the cell. A protein is a chain built from combinations of 20 amino acids, and each type of protein has a unique number and sequence of amino acids. One method that can be applied to sequence alignment is the genetic algorithm; another is the Hidden Markov Model (HMM). The HMM is a development of the Markov chain that can be applied in cases where the underlying states cannot be directly observed. For sequence alignment, the observed states (O) are the sequence positions in three categories: deletion, insertion, and match. The hidden states are the amino acid residues, which determine the protein family corresponding to observation O.
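Decoding the hidden states of an HMM from an observation sequence is typically done with the Viterbi algorithm. A minimal sketch with a toy two-state alignment HMM (all state names and probabilities below are illustrative, not from the paper):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence."""
    # v[t][s] = (best probability of any path ending in state s at step t, predecessor)
    v = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        row = {}
        for s in states:
            best_prev = max(states, key=lambda p: v[t - 1][p][0] * trans_p[p][s])
            row[s] = (v[t - 1][best_prev][0] * trans_p[best_prev][s] * emit_p[s][obs[t]],
                      best_prev)
        v.append(row)
    # backtrack from the best final state
    last = max(states, key=lambda s: v[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(v[t][path[-1]][1])
    return list(reversed(path))

# toy alignment HMM: hidden states match (M) and insert (I)
states = ["M", "I"]
path = viterbi(
    ["match", "match", "ins"],
    states,
    {"M": 0.8, "I": 0.2},
    {"M": {"M": 0.7, "I": 0.3}, "I": {"M": 0.4, "I": 0.6}},
    {"M": {"match": 0.9, "ins": 0.1}, "I": {"match": 0.2, "ins": 0.8}},
)
```

Real profile HMMs for protein families use the same machinery with match, insert, and delete states per alignment column, and log probabilities to avoid underflow on long sequences.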

  7. Femtocells Sharing Management using mobility prediction model

    OpenAIRE

    Barth, Dominique; Choutri, Amira; Kloul, Leila; Marcé, Olivier

    2013-01-01

    The bandwidth sharing paradigm constitutes an incentive solution to the serious capacity management problem faced by operators, as femtocell owners are able to offer QoS-guaranteed network access to mobile users within their femtocell coverage. In this paper, we consider a technico-economic bandwidth sharing model based on a reinforcement learning algorithm. Because such a model does not allow the convergence of the learning algorithm, due to the small size of the femtocells, the mobile users' velo...

  8. Validating predictions from climate envelope models

    Science.gov (United States)

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
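The sensitivity and specificity metrics used to evaluate the envelope models are simple proportions over the t2 survey outcomes (the presence/absence vectors below are hypothetical):

```python
def sensitivity_specificity(obs, pred):
    """Sensitivity: fraction of observed presences the model predicts present;
    specificity: fraction of observed absences the model predicts absent."""
    tp = sum(1 for o, p in zip(obs, pred) if o and p)
    fn = sum(1 for o, p in zip(obs, pred) if o and not p)
    tn = sum(1 for o, p in zip(obs, pred) if not o and not p)
    fp = sum(1 for o, p in zip(obs, pred) if not o and p)
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical presence/absence at 8 survey sites vs. model predictions
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0, 0, 1, 0],
                                     [1, 1, 0, 0, 0, 1, 1, 0])
```

The trade-off reported in the study follows from these definitions: the hybrid approach, by assuming expansion, predicts presence more often, raising sensitivity at the cost of specificity.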

  9. Validating predictions from climate envelope models.

    Directory of Open Access Journals (Sweden)

    James I Watling

    Full Text Available Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species' distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967-1971 (t1) and evaluated using occurrence data from 1998-2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
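    The sensitivity and specificity used to evaluate these models reduce to simple counts over a presence/absence confusion matrix. A minimal sketch, with invented survey data rather than the study's bird records:

```python
def sensitivity_specificity(observed, predicted):
    """Sensitivity = true-presence rate, specificity = true-absence rate,
    computed from paired 0/1 occurrence records."""
    tp = sum(1 for o, p in zip(observed, predicted) if o == 1 and p == 1)
    fn = sum(1 for o, p in zip(observed, predicted) if o == 1 and p == 0)
    tn = sum(1 for o, p in zip(observed, predicted) if o == 0 and p == 0)
    fp = sum(1 for o, p in zip(observed, predicted) if o == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical t2 survey: 1 = species present, 0 = absent
obs  = [1, 1, 1, 0, 0, 0, 1, 0]
pred = [1, 1, 0, 0, 1, 0, 1, 0]
sens, spec = sensitivity_specificity(obs, pred)  # 0.75, 0.75
```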

  10. Visual Similarity of Words Alone Can Modulate Hemispheric Lateralization in Visual Word Recognition: Evidence From Modeling Chinese Character Recognition.

    Science.gov (United States)

    Hsiao, Janet H; Cheung, Kit

    2016-03-01

    In Chinese orthography, the most common character structure consists of a semantic radical on the left and a phonetic radical on the right (SP characters); the minority, opposite arrangement also exists (PS characters). Recent studies showed that SP character processing is more left hemisphere (LH) lateralized than PS character processing. Nevertheless, it remains unclear whether this is due to phonetic radical position or character type frequency. Through computational modeling with artificial lexicons, in which we implement a theory of hemispheric asymmetry in perception but do not assume phonological processing being LH lateralized, we show that the difference in character type frequency alone is sufficient to exhibit the effect that the dominant type has a stronger LH lateralization than the minority type. This effect is due to higher visual similarity among characters in the dominant type than the minority type, demonstrating that visual similarity among words can modulate hemispheric lateralization. Copyright © 2015 Cognitive Science Society, Inc.

  11. General relativistic self-similar waves that induce an anomalous acceleration into the standard model of cosmology

    CERN Document Server

    Smoller, Joel

    2012-01-01

    We prove that the Einstein equations in Standard Schwarzschild Coordinates close to form a system of three ordinary differential equations for a family of spherically symmetric, self-similar expansion waves, and the critical ($k=0$) Friedmann universe associated with the pure radiation phase of the Standard Model of Cosmology (FRW), is embedded as a single point in this family. Removing a scaling law and imposing regularity at the center, we prove that the family reduces to an implicitly defined one parameter family of distinct spacetimes determined by the value of a new {\\it acceleration parameter} $a$, such that $a=1$ corresponds to FRW. We prove that all self-similar spacetimes in the family are distinct from the non-critical $k\ne 0$ Friedmann spacetimes.

  12. North Atlantic climate model bias influence on multiyear predictability

    Science.gov (United States)

    Wu, Y.; Park, T.; Park, W.; Latif, M.

    2018-01-01

    The influences of North Atlantic biases on multiyear predictability of unforced surface air temperature (SAT) variability are examined in the Kiel Climate Model (KCM). By employing a freshwater flux correction over the North Atlantic to the model, which strongly alleviates both North Atlantic sea surface salinity (SSS) and sea surface temperature (SST) biases, the freshwater flux-corrected integration depicts significantly enhanced multiyear SAT predictability in the North Atlantic sector in comparison to the uncorrected one. The enhanced SAT predictability in the corrected integration is due to a stronger and more variable Atlantic Meridional Overturning Circulation (AMOC) and its enhanced influence on North Atlantic SST. Results obtained from preindustrial control integrations of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) support the findings obtained from the KCM: models with large North Atlantic biases tend to have a weak AMOC influence on SAT and exhibit a smaller SAT predictability over the North Atlantic sector.

  13. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    Science.gov (United States)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat of severe floods in many countries worldwide. Advance prediction of their occurrence and spatial distribution is therefore essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models: the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centre for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from the 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, albeit with a bias in spatial distribution and intensity. Statistical parameters such as the mean error (ME) or bias, root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts tend toward under-prediction. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution of the displacement and pattern errors to the total RMSE is found to dominate. The volume error increases from the 24 hr forecast to the 48 hr forecast in all three models.
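    The verification statistics named here (mean error or bias, RMSE, and the correlation coefficient) have standard definitions. A sketch over illustrative rainfall values, not actual TIGGE forecasts or observations:

```python
import math

def verification_stats(obs, fcst):
    """Mean error (bias), RMSE and Pearson correlation between
    observed and forecast rainfall over a region."""
    n = len(obs)
    me = sum(f - o for o, f in zip(obs, fcst)) / n
    rmse = math.sqrt(sum((f - o) ** 2 for o, f in zip(obs, fcst)) / n)
    mo, mf = sum(obs) / n, sum(fcst) / n
    cov = sum((o - mo) * (f - mf) for o, f in zip(obs, fcst))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sf = math.sqrt(sum((f - mf) ** 2 for f in fcst))
    return me, rmse, cov / (so * sf)

# Illustrative daily rainfall (mm) at four grid points
obs  = [120.0, 95.0, 60.0, 30.0]
fcst = [100.0, 90.0, 70.0, 20.0]
me, rmse, cc = verification_stats(obs, fcst)  # bias -6.25 mm, RMSE 12.5 mm
```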

  14. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  15. Micro-mechanical studies on graphite strength prediction models

    Science.gov (United States)

    Kanse, Deepak; Khan, I. A.; Bhasin, V.; Vaze, K. K.

    2013-06-01

    The influence of the type of loading and of size effects on the failure strength of graphite was studied using the Weibull model. It was observed that this model over-predicts the size effect in tension. However, incorporation of a grain size effect in the Weibull model allows a more realistic simulation of size effects. Numerical prediction of the strength of a four-point bend specimen was made using the Weibull parameters obtained from tensile test data. Effective volume calculations were carried out and the predicted strength was subsequently compared with experimental data. It was found that the Weibull model can predict mean flexural strength with reasonable accuracy even when the grain size effect was not incorporated. In addition, the effects of microstructural parameters on failure strength were analyzed using the Rose and Tucker model. Uni-axial tensile, three-point bend and four-point bend strengths were predicted using this model and compared with the experimental data. It was found that this model predicts flexural strength within 10%. For uni-axial tensile strength, the difference was 22%, which can be attributed to the smaller number of tests on tensile specimens. In order to develop the failure surface of graphite under a multi-axial state of stress, an open-ended hollow graphite tube was subjected to internal pressure and axial load, and the Batdorf model was employed to calculate the failure probability of the tube. A bi-axial failure surface was generated in the first and fourth quadrants for 50% failure probability by varying both internal pressure and axial load.
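    The size effect discussed above follows from the weakest-link Weibull form Pf = 1 - exp[-(V/V0)(sigma/sigma0)^m]; at equal failure probability, the strengths of two effective volumes scale as sigma2 = sigma1 * (V1/V2)^(1/m). A sketch with hypothetical graphite parameters, not the paper's fitted values:

```python
import math

def failure_probability(stress, volume, sigma0, m, v0=1.0):
    """Two-parameter Weibull failure probability for a uniformly
    stressed effective volume: Pf = 1 - exp(-(V/V0)*(stress/sigma0)^m)."""
    return 1.0 - math.exp(-(volume / v0) * (stress / sigma0) ** m)

def strength_scaling(sigma1, v1, v2, m):
    """Strength of effective volume v2 predicted from that of v1
    at equal failure probability: sigma2 = sigma1 * (v1/v2)^(1/m)."""
    return sigma1 * (v1 / v2) ** (1.0 / m)

# Hypothetical parameters (illustrative only)
m = 10.0          # Weibull modulus
sigma_t = 25.0    # tensile strength (MPa) at effective volume 1000 mm^3
# Smaller effective volume in bending -> higher predicted strength
sigma_bend = strength_scaling(sigma_t, 1000.0, 100.0, m)
```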

  16. New Approaches for Channel Prediction Based on Sinusoidal Modeling

    Directory of Open Access Journals (Sweden)

    Ekman Torbjörn

    2007-01-01

    Full Text Available Long-range channel prediction is considered to be one of the most important enabling technologies for future wireless communication systems. The prediction of Rayleigh fading channels is studied in the frame of sinusoidal modeling in this paper. A stochastic sinusoidal model to represent a Rayleigh fading channel is proposed. Three different predictors based on the statistical sinusoidal model are proposed. These methods outperform the standard linear predictor (LP) in Monte Carlo simulations, but underperform on real measurement data, probably due to nonstationary model parameters. To mitigate these modeling errors, a joint moving average and sinusoidal (JMAS) prediction model and the associated joint least-squares (LS) predictor are proposed. It combines the sinusoidal model with an LP to handle unmodeled dynamics in the signal. The joint LS predictor outperforms all the other sinusoidal LMMSE predictors in suburban environments, but still performs slightly worse than the standard LP in urban environments.
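    The baseline in this comparison, the standard linear predictor, fits autoregressive coefficients to past channel samples by least squares. A minimal order-2 sketch on a noise-free sinusoid (toy data standing in for a fading tap, not measured channels):

```python
import math

def fit_lp2(x):
    """Fit an order-2 linear predictor x[k] ~ a1*x[k-1] + a2*x[k-2]
    by least squares (2x2 normal equations solved in closed form)."""
    idx = range(2, len(x))
    r11 = sum(x[k-1] * x[k-1] for k in idx)
    r12 = sum(x[k-1] * x[k-2] for k in idx)
    r22 = sum(x[k-2] * x[k-2] for k in idx)
    b1 = sum(x[k] * x[k-1] for k in idx)
    b2 = sum(x[k] * x[k-2] for k in idx)
    det = r11 * r22 - r12 * r12
    return (b1 * r22 - b2 * r12) / det, (b2 * r11 - b1 * r12) / det

# Toy data: one sinusoidal component at angular frequency 0.3 rad/sample
x = [math.cos(0.3 * k) for k in range(50)]
a1, a2 = fit_lp2(x)
# For a pure sinusoid the exact predictor is a1 = 2*cos(0.3), a2 = -1
pred = a1 * x[-1] + a2 * x[-2]   # one-step-ahead prediction of x[50]
```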

  17. Bayesian Age-Period-Cohort Modeling and Prediction - BAMP

    Directory of Open Access Journals (Sweden)

    Volker J. Schmid

    2007-10-01

    Full Text Available The software package BAMP provides a method of analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed, with a binomial model in the first stage. As smoothing priors for the age, period and cohort parameters, random walks of first and second order, with or without an additional unstructured component, are available. Unstructured heterogeneity can also be included in the model. In order to evaluate the model fit, the posterior deviance, DIC and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.
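    Prediction in this framework comes from extrapolating the random walk prior: a second-order random walk continues the local linear trend plus noise. A forward-simulation sketch only (BAMP itself samples the posterior by MCMC); the rates and noise scale below are invented:

```python
import random

def project_rw2(values, horizon, sigma, seed=1):
    """Project a second-order random walk beyond observed effects:
    x[t] = 2*x[t-1] - x[t-2] + Normal(0, sigma)."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(horizon):
        x.append(2 * x[-1] - x[-2] + rng.gauss(0.0, sigma))
    return x[len(values):]

# Hypothetical log mortality rates for the three most recent periods
future = project_rw2([-5.0, -5.1, -5.25], horizon=3, sigma=0.02)
```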

  18. Modeling for prediction of restrained shrinkage effect in concrete repair

    International Nuclear Information System (INIS)

    Yuan Yingshu; Li Guo; Cai Yue

    2003-01-01

    A general model of autogenous shrinkage caused by chemical reaction (chemical shrinkage) is developed by means of Arrhenius' law and a degree of chemical reaction. Models of tensile creep and relaxation modulus are built based on a viscoelastic three-element model. Tests of free shrinkage and tensile creep were carried out to determine some coefficients in the models. Two-dimensional FEM analysis based on the models and other constitutive relations can predict the development of tensile strength and cracking. Three groups of patch-repaired beams were designed for analysis and testing. The prediction from the analysis shows agreement with the test results. The cracking mechanism after repair is discussed.
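    The chemical-shrinkage model couples an Arrhenius rate constant to a degree of reaction. A minimal sketch with first-order kinetics and invented parameter values (the paper's coefficients were fitted from its own shrinkage and creep tests):

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def reaction_rate(T, A=1.0e6, Ea=40000.0):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T)), per hour here."""
    return A * math.exp(-Ea / (R_GAS * T))

def autogenous_shrinkage(eps_ult, hours, T=293.15, dt=1.0):
    """Evolve the degree of reaction alpha with first-order kinetics
    d(alpha)/dt = k*(1 - alpha) and return the shrinkage history
    eps = eps_ult * alpha (illustrative kinetics, not the calibrated model)."""
    k = reaction_rate(T)
    alpha, history = 0.0, []
    for _ in range(int(hours / dt)):
        alpha += dt * k * (1.0 - alpha)
        history.append(eps_ult * alpha)
    return history

# Shrinkage (strain) over one week at 20 degrees C, ultimate 300 microstrain
curve = autogenous_shrinkage(eps_ult=300e-6, hours=168)
```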

  19. Predicting Footbridge Response using Stochastic Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2013-01-01

    Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adapt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing so, decisions need to be made in terms of statistical distributions of walking parameters and in terms of the parameters describing the statistical distributions. The paper explores how sensitive computations of bridge response are to some of the decisions to be made in this respect. This is useful…
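    The stochastic approach described here amounts to sampling walking parameters from assumed distributions and propagating them to a load effect. A Monte Carlo sketch; the distribution means and spreads below are illustrative assumptions, not the paper's calibrated choices:

```python
import random

def load_amplitude(rng):
    """Draw one pedestrian and return the first-harmonic modal force
    amplitude F = m * g * DLF (illustrative distributions)."""
    mass = rng.gauss(75.0, 15.0)   # pedestrian mass (kg)
    dlf = rng.gauss(0.40, 0.10)    # dynamic load factor, first harmonic
    return mass * 9.81 * dlf       # force amplitude (N)

rng = random.Random(0)
amps = [load_amplitude(rng) for _ in range(10000)]
mean_amp = sum(amps) / len(amps)   # near 75 kg * 9.81 * 0.40
p95 = sorted(amps)[9499]           # 95th-percentile load amplitude
```

    Sensitivity to the distributional choices can then be explored by re-running the simulation with alternative means and spreads.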

  20. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
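    The bootstrap procedure described above can be sketched generically: resample the fitting data with replacement, refit the model for each replicate, and histogram the resulting predictions. Here the "model fit" is just a mean, standing in for the paper's salivary-function model, and the data are invented:

```python
import random

def fit_mean(sample):
    """Stand-in 'model fit': the sample mean. In the paper's setting
    this would be a dose-volume outcome model fit."""
    return sum(sample) / len(sample)

def bootstrap_predictions(data, n_boot=2000, seed=42):
    """Resample the fitting data with replacement, refit for each
    replicate, and collect the resulting model predictions."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]
        preds.append(fit_mean(resample))
    return preds

# Hypothetical post-treatment salivary-function ratios
data = [0.35, 0.50, 0.42, 0.61, 0.28, 0.55, 0.47, 0.39]
preds = bootstrap_predictions(data)
s = sorted(preds)
lo, hi = s[50], s[1949]   # approximate 95% interval on the prediction
```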

  1. Modelling Niche Differentiation of Co-Existing, Elusive and Morphologically Similar Species: A Case Study of Four Macaque Species in Nakai-Nam Theun National Protected Area, Laos

    Directory of Open Access Journals (Sweden)

    Camille N. Z. Coudrat

    2013-01-01

    Full Text Available Species misidentification often occurs when dealing with co-existing and morphologically similar species such as macaques, making the study of their ecology challenging. To overcome this issue, we use reliable occurrence data from camera-trap images and transect survey data to model their respective ecological niches and potential distributions locally in Nakai-Nam Theun National Protected Area (NNT NPA, central-eastern Laos. We investigate niche differentiation of morphologically similar species using four sympatric macaque species in NNT NPA as our model species: rhesus Macaca mulatta (Taxonomic Serial Number, TSN 180099), Northern pig-tailed M. leonina (TSN not listed), Assamese M. assamensis (TSN 573018) and stump-tailed M. arctoides (TSN 573017). We examine the implications for their conservation. We obtained occurrence data of macaque species from systematic 2006–2011 camera-trapping surveys and 2011–2012 transect surveys and modelled their niches and potential distributions with MaxEnt software using 25 environmental and topographic variables. The respective suitable habitat predicted for each species reveals niche segregation between the four species, with a gradual geographical distribution following an environmental gradient within the study area. Camera-trapping positioned at many locations can increase elusive-species records with a relatively reduced and more systematic sampling effort and provide reliable species occurrence data. These can be used for environmental niche modelling to study niche segregation of morphologically similar species in areas where their distribution remains uncertain. Examining unresolved species' niches and potential distributions can have crucial implications for future research and species' management and conservation even in the most remote regions and for the least-known species.

  2. Validation of a tuber blight (Phytophthora infestans) prediction model

    Science.gov (United States)

    Potato tuber blight caused by Phytophthora infestans accounts for significant losses in storage. There is limited published quantitative data on predicting tuber blight. We validated a tuber blight prediction model developed in New York with cultivars Allegany, NY 101, and Katahdin using independent...

  3. Geospatial application of the Water Erosion Prediction Project (WEPP) Model

    Science.gov (United States)

    D. C. Flanagan; J. R. Frankenberger; T. A. Cochrane; C. S. Renschler; W. J. Elliot

    2011-01-01

    The Water Erosion Prediction Project (WEPP) model is a process-based technology for prediction of soil erosion by water at hillslope profile, field, and small watershed scales. In particular, WEPP utilizes observed or generated daily climate inputs to drive the surface hydrology processes (infiltration, runoff, ET) component, which subsequently impacts the rest of the...

  4. Reduced order modelling and predictive control of multivariable ...

    Indian Academy of Sciences (India)

    Anuj Abraham

    2018-03-16

    The performance of the constrained generalized predictive control scheme is found to be superior to that of the conventional PID controller in terms of overshoot, settling time and performance indices, mainly ISE, IAE and MSE. Keywords. Predictive control; distillation column; reduced order model; dominant pole; ...

  5. Predictive Multiple Model Switching Control with the Self-Organizing Map

    Science.gov (United States)

    Motter, Mark A.

    2000-01-01

    A predictive, multiple model control strategy is developed by extension of self-organizing map (SOM) local dynamic modeling of nonlinear autonomous systems to a control framework. Multiple SOMs collectively model the global response of a nonautonomous system to a finite set of representative prototype controls. Each SOM provides a codebook representation of the dynamics corresponding to a prototype control. Different dynamic regimes are organized into topological neighborhoods where the adjacent entries in the codebook represent the global minimization of a similarity metric. The SOM is additionally employed to identify the local dynamical regime, and consequently implements a switching scheme that selects the best available model for the applied control. SOM based linear models are used to predict the response to a larger family of control sequences which are clustered on the representative prototypes. The control sequence which corresponds to the prediction that best satisfies the requirements on the system output is applied as the external driving signal.

  6. Mixed models for predictive modeling in actuarial science

    NARCIS (Netherlands)

    Antonio, K.; Zhang, Y.

    2012-01-01

    We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques

  7. Consensus models to predict endocrine disruption for all ...

    Science.gov (United States)

    Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) was intended to be a demonstration of the use of predictive computational models on HTS data including ToxCast and Tox21 assays to prioritize a large chemical universe of 32464 unique structures for one specific molecular target – the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity, and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict the common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1677 compounds provided by U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist and antagonist activity. All predictions were evaluated on ToxCast data and on an exte
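    A consensus over many models can be as simple as a vote over their individual calls. CERAPP's actual consensus weighted models by their evaluated accuracy; the unweighted majority vote below is only a sketch, with invented model calls:

```python
def consensus_call(votes, threshold=0.5):
    """Majority-vote consensus over per-model binary predictions
    (1 = predicted estrogen-receptor active, 0 = inactive)."""
    return 1 if sum(votes) / len(votes) >= threshold else 0

# Hypothetical calls from five QSAR models for three chemicals
chemicals = {
    "chem_A": [1, 1, 1, 0, 1],
    "chem_B": [0, 0, 1, 0, 0],
    "chem_C": [1, 0, 1, 1, 0],
}
calls = {name: consensus_call(v) for name, v in chemicals.items()}
```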

  8. Dietary information improves cardiovascular disease risk prediction models.

    Science.gov (United States)

    Baik, I; Cho, N H; Kim, S H; Shin, C

    2013-01-01

    Data are limited on cardiovascular disease (CVD) risk prediction models that include dietary predictors. Using known risk factors and dietary information, we constructed and evaluated CVD risk prediction models. Data for modeling were from population-based prospective cohort studies comprised of 9026 men and women aged 40-69 years. At baseline, all were free of known CVD and cancer, and were followed up for CVD incidence during an 8-year period. We used Cox proportional hazard regression analysis to construct a traditional risk factor model, an office-based model, and two diet-containing models and evaluated these models by calculating the Akaike information criterion (AIC), C-statistics, integrated discrimination improvement (IDI), net reclassification improvement (NRI) and a calibration statistic. We constructed diet-containing models with significant dietary predictors such as poultry, legumes, carbonated soft drinks or green tea consumption. Adding dietary predictors to the traditional model yielded a decrease in AIC (delta AIC=15), a 53% increase in relative IDI, and improved reclassification (category-free NRI=0.14); likewise, the diet-containing models showed improved reclassification (category-free NRI=0.08, P<0.01) compared with the office-based model. The calibration plots for risk prediction demonstrated that the inclusion of dietary predictors contributes to better agreement in persons at high risk for CVD. C-statistics for the four models were acceptable and comparable. We suggest that dietary information may be useful in constructing CVD risk prediction models.
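    Of the evaluation measures used, IDI has a particularly direct form: the gain in mean predicted risk among events minus the gain among non-events. A sketch on an invented eight-person cohort, not the study's data:

```python
def idi(p_old, p_new, outcome):
    """Integrated discrimination improvement between two risk models:
    (gain in mean predicted risk among events) minus
    (gain in mean predicted risk among non-events)."""
    mean = lambda v: sum(v) / len(v)
    ev_old = [p for p, y in zip(p_old, outcome) if y == 1]
    ev_new = [p for p, y in zip(p_new, outcome) if y == 1]
    ne_old = [p for p, y in zip(p_old, outcome) if y == 0]
    ne_new = [p for p, y in zip(p_new, outcome) if y == 0]
    return (mean(ev_new) - mean(ev_old)) - (mean(ne_new) - mean(ne_old))

# Toy cohort: predicted CVD risks without/with dietary predictors
p_old   = [0.10, 0.20, 0.40, 0.15, 0.30, 0.25, 0.05, 0.50]
p_new   = [0.08, 0.18, 0.55, 0.12, 0.45, 0.20, 0.04, 0.60]
outcome = [0,    0,    1,    0,    1,    0,    0,    1   ]
delta = idi(p_old, p_new, outcome)   # positive => better discrimination
```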

  9. Unsupervised ship trajectory modeling and prediction using compression and clustering

    NARCIS (Netherlands)

    de Vries, G.; van Someren, M.; van Erp, M.; Stehouwer, H.; van Zaanen, M.

    2009-01-01

    In this paper we show how to build a model of ship trajectories in a certain maritime region and use this model to predict future ship movements. The presented method is unsupervised and based on existing compression (line-simplification) and clustering techniques. We evaluate the model with a

  10. Prediction of annual rainfall pattern using Hidden Markov Model ...

    African Journals Online (AJOL)

    A hidden Markov model to predict annual rainfall pattern has been presented in this paper. The model is developed to provide necessary information for farmers, agronomists, water resource management scientists and policy makers to enable them to plan for the uncertainty of annual rainfall. The model classified annual ...
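    At the core of any such HMM is the forward algorithm, which scores an observation sequence under assumed transition and emission probabilities. A two-state sketch with invented parameters (the paper's states and probabilities are not reproduced here):

```python
def forward(obs, pi, A, B):
    """Forward algorithm: likelihood of an observation sequence under
    a discrete HMM with initial distribution pi, transition matrix A
    and emission matrix B."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [B[s][o] * sum(alpha[t] * A[t][s] for t in range(n))
                 for s in range(n)]
    return sum(alpha)

# Hypothetical model: hidden states 0 = dry regime, 1 = wet regime;
# observations 0 = below-normal, 1 = above-normal annual rainfall
pi = [0.6, 0.4]
A = [[0.7, 0.3],
     [0.4, 0.6]]
B = [[0.8, 0.2],
     [0.3, 0.7]]
likelihood = forward([0, 0, 1], pi, A, B)
```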

  11. The Selection of Turbulence Models for Prediction of Room Airflow

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    This paper discusses the use of different turbulence models and their advantages in given situations. As an example, it is shown that a simple zero-equation model can be used for the prediction of special situations such as flow with a low level of turbulence. A zero-equation model with compensation...

  12. Model Predictive Control of Wind Turbines using Uncertain LIDAR Measurements

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Soltani, Mohsen; Poulsen, Niels Kjølstad

    2013-01-01

    The problem of Model predictive control (MPC) of wind turbines using uncertain LIDAR (LIght Detection And Ranging) measurements is considered. A nonlinear dynamical model of the wind turbine is obtained. We linearize the obtained nonlinear model for different operating points, which are determined...

  13. Monthly to seasonal low flow prediction: statistical versus dynamical models

    Science.gov (United States)

    Ionita-Scholz, Monica; Klein, Bastian; Meissner, Dennis; Rademacher, Silke

    2016-04-01

    While the societal and economic impacts of floods are well documented and assessable, the impacts of low flows are less studied and sometimes overlooked. For example, over the western part of Europe, where inland waterway transportation is intense, the economic losses due to low flows are often comparable to those due to floods. In general, the low flow aspect tends to be underestimated by the scientific community. One of the best examples in this respect is the fact that, at the European level, most countries have an (early) flood alert system, but in many cases no real information regarding the development, evolution and impacts of droughts. Low flows, occurring during dry periods, may cause several types of problems for society and the economy: e.g., lack of water for drinking, irrigation, industrial use and power production; deterioration of water quality; and impacts on inland waterway transport, agriculture, tourism, the issuing and renewing of waste disposal permits, and the assessment of the impact of prolonged drought on aquatic ecosystems. As such, the ever-increasing demand on water resources calls for better management, understanding and prediction of water deficit situations and for more reliable and extended studies regarding the evolution of low flow situations. In order to find an optimized monthly to seasonal forecast procedure for the German waterways, the Federal Institute of Hydrology (BfG) is currently exploring multiple approaches. On the one hand, based on the operational short- to medium-range forecasting chain, existing hydrological models are forced with two different hydro-meteorological inputs: (i) resampled historical meteorology generated by the Ensemble Streamflow Prediction approach and (ii) ensemble (re-) forecasts of ECMWF's global coupled ocean-atmosphere general circulation model, which have to be downscaled and bias corrected before feeding the hydrological models.
As a second approach BfG evaluates in cooperation with

  14. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
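    Pareto optimality over a model collection can be sketched with two criteria to maximize, say prediction accuracy and applicability-domain coverage; the criteria names and values below are illustrative, not the paper's:

```python
def pareto_front(models):
    """Return the Pareto-optimal (accuracy, coverage) pairs, both
    criteria to be maximized. A model is dominated if another model
    is at least as good on both criteria and strictly better on one."""
    front = []
    for i, (a1, c1) in enumerate(models):
        dominated = any(
            a2 >= a1 and c2 >= c1 and (a2 > a1 or c2 > c1)
            for j, (a2, c2) in enumerate(models) if j != i
        )
        if not dominated:
            front.append((a1, c1))
    return front

# Hypothetical model collection for a new compound
models = [(0.90, 0.40), (0.80, 0.80), (0.70, 0.90), (0.60, 0.50), (0.85, 0.60)]
best = pareto_front(models)   # (0.60, 0.50) is dominated and dropped
```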

  15. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    constructed from geological and hydrological data. However, geophysical data are increasingly used to inform hydrogeologic models because they are collected at lower cost and much higher density than geological and hydrological data. Despite increased use of geophysics, it is still unclear whether the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually collecting geophysical data. At a minimum, an analysis should be conducted assuming settings that are favorable for the chosen geophysical method. If the analysis suggests that data collected by the geophysical method is unlikely to improve model prediction performance under these favorable settings…

  16. Mathematical evaluation of similarity factor using various weighing approaches on aceclofenac marketed formulations by model-independent method.

    Science.gov (United States)

    Soni, T G; Desai, J U; Nagda, C D; Gandhi, T R; Chotai, N P

    2008-01-01

    The US Food and Drug Administration's (FDA's) guidance for industry on dissolution testing of immediate-release solid oral dosage forms describes that drug dissolution may be the rate-limiting step for drug absorption in the case of low-solubility/high-permeability drugs (BCS class II drugs). The US FDA guidance describes the model-independent mathematical approach proposed by Moore and Flanner for calculating a similarity factor (f2) of dissolution across a suitable time interval. In the present study, the similarity factor was calculated on dissolution data of two marketed aceclofenac tablets (a BCS class II drug) using various weighing approaches proposed by Gohel et al. The proposed approaches were compared with a conventional approach (W = 1). On the basis of variability considerations, preference is given in the order approach 3 > approach 2 > approach 1, as approach 3 considers batch-to-batch as well as within-sample variability and shows the best similarity profile. Approach 2 considers batch-to-batch variability with higher specificity than approach 1.
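
The Moore and Flanner similarity factor referenced here has a closed form, f2 = 50·log10(100 / sqrt(1 + (1/n)·Σ w_t(R_t − T_t)²)); a sketch with the conventional weighting (W = 1) follows, using invented dissolution profiles rather than the study's data:

```python
import math

def f2_similarity(ref, test, weights=None):
    """Moore & Flanner similarity factor f2. Weights default to W = 1,
    the conventional approach the study compares against."""
    if weights is None:
        weights = [1.0] * len(ref)
    n = len(ref)
    msd = sum(w * (r - t) ** 2 for w, r, t in zip(weights, ref, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

# Illustrative dissolution profiles (% released at each time point),
# not data from the study.
ref  = [20, 40, 60, 80, 95]
test = [18, 41, 57, 82, 93]
print(f2_similarity(ref, test))
```

Identical profiles give f2 = 100, and the usual regulatory similarity criterion is f2 ≥ 50; the weighting approaches of Gohel et al. would change only how the `weights` list is built.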

  17. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which mitigates the overfitting of training data and is well suited to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm, to achieve sustainable predictability in changing markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information, of 44 electronics and IT companies. We then predict the performance of these companies as an empirical verification of the predictive performance of the proposed model.
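
A minimal sketch of the tuning idea, a genetic algorithm searching SVR hyper-parameters against a validation split, using scikit-learn and synthetic data; the GA settings, parameter ranges, and data are assumptions, not the authors' configuration:

```python
# Toy GA over SVR hyper-parameters (C, epsilon); fitness is negative
# validation MSE. Synthetic data stands in for financial/technical indicators.
import random

import numpy as np
from sklearn.svm import SVR

rng = random.Random(0)
np.random.seed(0)

X = np.random.uniform(-2, 2, size=(120, 1))
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(120)
X_tr, y_tr, X_va, y_va = X[:80], y[:80], X[80:], y[80:]

def fitness(genome):
    C, eps = genome
    model = SVR(C=C, epsilon=eps).fit(X_tr, y_tr)
    return -np.mean((model.predict(X_va) - y_va) ** 2)  # higher is better

def mutate(genome):
    C, eps = genome  # multiplicative Gaussian-like jitter, clipped to bounds
    return (min(max(C * rng.uniform(0.5, 2.0), 0.01), 100.0),
            min(max(eps * rng.uniform(0.5, 2.0), 1e-3), 1.0))

pop = [(rng.uniform(0.1, 10.0), rng.uniform(0.01, 0.5)) for _ in range(8)]
for _ in range(5):                          # generations
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[:4]                    # truncation selection
    pop = parents + [mutate(rng.choice(parents)) for _ in range(4)]

best = max(pop, key=fitness)
best_mse = -fitness(best)
print(best, best_mse)
```

A production version would add crossover and cross-validation in the fitness function, but the structure (evaluate, select, mutate, repeat) is the same.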

  18. Mortality Risk Prediction in Scleroderma-Related Interstitial Lung Disease: The SADL Model.

    Science.gov (United States)

    Morisset, Julie; Vittinghoff, Eric; Elicker, Brett M; Hu, Xiaowen; Le, Stephanie; Ryu, Jay H; Jones, Kirk D; Haemel, Anna; Golden, Jeffrey A; Boin, Francesco; Ley, Brett; Wolters, Paul J; King, Talmadge E; Collard, Harold R; Lee, Joyce S

    2017-11-01

    Interstitial lung disease (ILD) is an important cause of morbidity and mortality in patients with scleroderma (Scl). Risk prediction and prognostication in patients with Scl-ILD are challenging because of heterogeneity in the disease course. We aimed to develop a clinical mortality risk prediction model for Scl-ILD. Patients with Scl-ILD were identified from two ongoing longitudinal cohorts: 135 patients at the University of California, San Francisco (derivation cohort) and 90 patients at the Mayo Clinic (validation cohort). Using these two separate cohorts, a mortality risk prediction model was developed and validated by testing every potential candidate Cox model, each including three or four variables of a possible 19 clinical predictors, for time to death. Model discrimination was assessed using the C-index. Three variables were included in the final risk prediction model (SADL): ever smoking history, age, and diffusing capacity of the lung for carbon monoxide (% predicted). This continuous model had similar performance in the derivation (C-index, 0.88) and validation (C-index, 0.84) cohorts. We created a point scoring system using the combined cohort (C-index, 0.82) and used it to identify a classification with low, moderate, and high mortality risk at 3 years. The SADL model uses simple, readily accessible clinical variables to predict all-cause mortality in Scl-ILD. Copyright © 2017 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  19. Preoperative prediction model of outcome after cholecystectomy for symptomatic gallstones

    DEFF Research Database (Denmark)

    Borly, L; Anderson, I B; Bardram, Linda

    1999-01-01

    and sonography evaluated gallbladder motility, gallstones, and gallbladder volume. Preoperative variables in patients with or without postcholecystectomy pain were compared statistically, and significant variables were combined in a logistic regression model to predict the postoperative outcome. RESULTS: Eighty...... and by the absence of 'agonizing' pain and of symptoms coinciding with pain (P model 15 of 18 predicted patients had postoperative pain (PVpos = 0.83). Of 62 patients predicted as having no pain postoperatively, 56 were pain-free (PVneg = 0.90). Overall accuracy...... was 89%. CONCLUSION: From this prospective study a model based on preoperative symptoms was developed to predict postcholecystectomy pain. Since intrastudy reclassification may give too optimistic results, the model should be validated in future studies....

  20. Prediction of Chemical Function: Model Development and Application

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (...

  1. Linear regression crash prediction models : issues and proposed solutions.

    Science.gov (United States)

    2010-05-01

    The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions; namely error structure normality ...

  2. FPGA implementation of predictive degradation model for engine oil lifetime

    Science.gov (United States)

    Idros, M. F. M.; Razak, A. H. A.; Junid, S. A. M. Al; Suliman, S. I.; Halim, A. K.

    2018-03-01

    This paper presents the implementation of a linear regression model for degradation prediction at the register-transfer level (RTL) using Quartus II. A stationary model was identified in the degradation trend of the engine oil in a vehicle using a time-series method. For the RTL implementation, the degradation model is written in Verilog HDL, and the input data are sampled at fixed times. A clock divider was designed to support the timing sequence of the input data. For every five data points, a regression analysis is applied to determine the slope variation and compute the prediction. Only negative slopes are taken into consideration for prediction, which reduces the number of logic gates. The least-squares method is applied to obtain the best linear model based on the mean values of the time-series data. The coded algorithm was implemented on an FPGA for validation purposes. The result shows the predicted time to change the engine oil.
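
The windowed least-squares step described above can be sketched in a few lines: fit a line to each window of five samples and, when the slope is negative, extrapolate to a change threshold. The readings and the threshold below are illustrative:

```python
# Least-squares line over a window of samples; a negative (degrading) slope
# is extrapolated to estimate when the oil-quality signal crosses a
# change threshold, mirroring the paper's per-window regression step.

def ls_slope_intercept(ts, ys):
    """Ordinary least-squares slope and intercept for points (ts, ys)."""
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    sxx = sum((t - tbar) ** 2 for t in ts)
    sxy = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    slope = sxy / sxx
    return slope, ybar - slope * tbar

def predict_crossing(ts, ys, threshold):
    """Extrapolate the fitted line to the threshold; only a negative slope
    yields a prediction, as in the paper."""
    slope, intercept = ls_slope_intercept(ts, ys)
    if slope >= 0:
        return None
    return (threshold - intercept) / slope

# Five equally spaced quality readings trending down (illustrative).
t = [0, 1, 2, 3, 4]
q = [100, 98, 96, 94, 92]
print(predict_crossing(t, q, threshold=80))
```

With the readings above the fitted slope is exactly -2 per step, so the line reaches the threshold of 80 at t = 10; the RTL version computes the same quantities in fixed-point hardware.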

  3. Predictive Modeling: A New Paradigm for Managing Endometrial Cancer.

    Science.gov (United States)

    Bendifallah, Sofiane; Daraï, Emile; Ballester, Marcos

    2016-03-01

    With the abundance of new options in diagnostic and treatment modalities, a shift in the medical decision process for endometrial cancer (EC) has been observed. The emergence of individualized medicine and the increasing complexity of available medical data have led to the development of several prediction models. In EC, such clinical models (algorithms, nomograms, and risk scoring systems) have been reported, especially for stratifying and subgrouping patients, with various unanswered questions regarding, for example, the optimal surgical staging for lymph node metastasis and the assessment of recurrence and survival outcomes. In this review, we highlight existing prognostic and predictive models in EC, with a specific focus on their clinical applicability. We also discuss the methodologic aspects of the development of such predictive models and the steps required to integrate these tools into clinical decision making. In the future, the emerging field of molecular and biochemical marker research may substantially improve predictive and treatment approaches.

  4. On the Predictiveness of Single-Field Inflationary Models

    CERN Document Server

    Burgess, C.P.; Trott, Michael

    2014-01-01

    We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for $A_s$, $r$ and $n_s$ are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in prin...

  5. Predictive modeling in catalysis - from dream to reality

    NARCIS (Netherlands)

    Maldonado, A.G.; Rothenberg, G.

    2009-01-01

    In silico catalyst optimization is the ultimate application of computers in catalysis. This article provides an overview of the basic concepts of predictive modeling and describes how this technique can be used in catalyst and reaction design.

  6. Fuzzy model predictive control algorithm applied in nuclear power plant

    International Nuclear Information System (INIS)

    Zuheir, Ahmad

    2006-01-01

    The aim of this paper is to design a predictive controller based on a fuzzy model. The Takagi-Sugeno fuzzy model with an adaptive B-spline neuro-fuzzy implementation is used and incorporated as a predictor in a predictive controller. An optimization approach with a simplified gradient technique is used to calculate predictions of the future control actions. In this approach, adaptation of the fuzzy model using dynamic process information is carried out to build the predictive controller. The easy description of the fuzzy model and the easy computation of the gradient vector during the optimization procedure are the main advantages of the computation algorithm. The algorithm is applied to the control of a U-tube steam generator (UTSG) used for electricity generation. (author)

  7. Compensatory versus noncompensatory models for predicting consumer preferences

    Directory of Open Access Journals (Sweden)

    Anja Dieckmann

    2009-04-01

    Full Text Available Standard preference models in consumer research assume that people weigh and add all attributes of the available options to derive a decision, while there is growing evidence for the use of simplifying heuristics. Recently, a greedoid algorithm has been developed (Yee, Dahan, Hauser and Orlin, 2007; Kohli and Jedidi, 2007) to model lexicographic heuristics from preference data. We compare the predictive accuracies of the greedoid approach and standard conjoint analysis in an online study with a rating and a ranking task. The lexicographic model derived from the greedoid algorithm was better at predicting ranking than rating data, but overall it achieved lower predictive accuracy on hold-out data than the compensatory model estimated by conjoint analysis. However, a considerable minority of participants was better predicted by lexicographic strategies. We conclude that the new algorithm will not replace standard tools for analyzing preferences, but it can boost the study of situational and individual differences in preferential choice processes.
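
The two model classes being compared can be contrasted on a toy example: a compensatory weighted-additive rule versus a noncompensatory lexicographic rule over the same options. Attribute names, levels, weights, and the importance order are invented for illustration:

```python
# Two hypothetical product profiles with attribute levels (higher = better).
options = {
    "A": {"price": 4, "brand": 1, "battery": 2},
    "B": {"price": 2, "brand": 3, "battery": 1},
}

def additive_value(opt, weights):
    """Compensatory rule: weigh and add all attribute levels."""
    return sum(weights[a] * v for a, v in opt.items())

def lexicographic_best(opts, attribute_order):
    """Noncompensatory rule: decide on the most important attribute,
    breaking ties with the next one, and so on."""
    return max(opts, key=lambda n: tuple(opts[n][a] for a in attribute_order))

weights = {"price": 1.0, "brand": 1.0, "battery": 1.0}
scores = {n: additive_value(o, weights) for n, o in options.items()}
compensatory_choice = max(scores, key=scores.get)
lexi_choice = lexicographic_best(options, ["brand", "price", "battery"])
print(compensatory_choice, lexi_choice)
```

The compensatory rule picks A (7 vs 6 total points) while a brand-first lexicographic chooser picks B regardless of the other attributes, which is exactly the divergence the study measures; the greedoid algorithm's job is to recover the attribute order from observed preferences.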

  8. Predictive Modeling of Partitioned Systems: Implementation and Applications

    OpenAIRE

    Latten, Christine

    2014-01-01

    A general mathematical methodology for predictive modeling of coupled multi-physics systems is implemented and has been applied without change to an illustrative heat conduction example and reactor physics benchmarks.

  9. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures....

  10. Potential carcinogenicity predicted by computational toxicity evaluation of thiophosphate pesticides using QSTR/QSCarciAR model.

    Science.gov (United States)

    Petrescu, Alina-Maria; Ilia, Gheorghe

    2017-07-01

    This study presents in silico prediction of toxic activities and carcinogenicity, represented by the potential carcinogenicity DSSTox/DBS, based on vector regression with a new Kernel activity, and correlating the predicted toxicity values through a QSAR model, namely: QSTR/QSCarciAR (quantitative structure toxicity relationship/quantitative structure carcinogenicity-activity relationship) described by 2D, 3D descriptors and biological descriptors. The results showed a connection between carcinogenicity (compared to the structure of a compound) and toxicity, as a basis for future studies on this subject, but each prediction is based on structurally similar compounds and the reactivation of the substructures of these compounds.

  11. Model Predictive Control for Ethanol Steam Reformers

    OpenAIRE

    Li, Mingming

    2014-01-01

    This thesis firstly proposes a new approach of modelling an ethanol steam reformer (ESR) for producing pure hydrogen. Hydrogen has obvious benefits as an alternative for feeding the proton exchange membrane fuel cells (PEMFCs) to produce electricity. However, an important drawback is that the hydrogen distribution and storage have high cost. So the ESR is regarded as a way to overcome these difficulties. Ethanol is currently considered as a promising energy source under the res...

  12. Haskell financial data modeling and predictive analytics

    CERN Document Server

    Ryzhov, Pavel

    2013-01-01

    This book is a hands-on guide that teaches readers how to use Haskell's tools and libraries to analyze data from real-world sources in an easy-to-understand manner. This book is great for developers who are new to financial data modeling using Haskell. A basic knowledge of functional programming is not required but will be useful. An interest in high-frequency finance is essential.

  13. Wireless model predictive control: Application to water-level system

    Directory of Open Access Journals (Sweden)

    Ramdane Hedjar

    2016-04-01

    Full Text Available This article deals with wireless model predictive control of a water-level control system. The objective of the model predictive control algorithm is to constrain the control signal within saturation limits and maintain the water level around the desired level. Linear modeling of a nonlinear plant leads to parameter uncertainties and unmodeled dynamics in the linearized mathematical model. These uncertainties induce a steady-state error in the water-level response. To eliminate this steady-state error and increase the robustness of the control algorithm, an integral action is included in the closed loop. To control the water-level system remotely, communication between the controller and the process is performed over a radio channel. To validate the proposed scheme, simulation and real-time implementation of the algorithm were conducted, and the results show the effectiveness of wireless model predictive control with integral action.
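
The integral-action idea in this record can be illustrated with a toy discrete-time simulation: a one-step-ahead predictive controller inverts a deliberately mismatched linear model, the input is clamped to saturation limits, and an added integral term removes the resulting steady-state error. All plant numbers, limits, and gains below are assumptions, not the article's system:

```python
# First-order "tank" whose true input gain differs from the controller's
# model, standing in for linearization error. The controller minimizes the
# one-step-ahead tracking error (here solvable by inverting the model),
# clamps the input, and optionally adds an integral correction.

def simulate(ki, steps=300):
    a_true, b_true = 0.9, 0.12   # plant used for simulation
    a_mod, b_mod = 0.9, 0.10     # mismatched controller model
    r, u_min, u_max = 1.0, 0.0, 2.0
    h, e_int = 0.9, 0.0
    for _ in range(steps):
        # minimize (r - predicted level)^2 => invert the model, add integral
        u = (r - a_mod * h) / b_mod + ki * e_int
        u = min(max(u, u_min), u_max)        # saturation constraint
        h = a_true * h + b_true * u          # true water-level update
        e_int += r - h
    return h

print(simulate(ki=0.0))  # plain predictive control: steady-state offset
print(simulate(ki=0.5))  # with integral action: offset removed
```

Without the integral term the mismatch leaves the level settled away from the setpoint; with it, the accumulated error drives the offset to zero, which is the behavior the article exploits.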

  14. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  15. Approximating prediction uncertainty for random forest regression models

    Science.gov (United States)

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    Machine learning approaches such as random forest are increasingly used for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
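
One common way to approximate the missing uncertainty estimate, offered here as an assumption rather than necessarily the authors' exact method, is to use the spread of the individual trees' predictions around the forest mean:

```python
# Per-tree spread as a crude uncertainty proxy for a random forest
# regression, on synthetic data standing in for remote-sensing predictors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 6, size=(300, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.randn(300)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

X_new = np.array([[1.0], [3.0], [5.0]])
per_tree = np.stack([t.predict(X_new) for t in forest.estimators_])
mean_pred = per_tree.mean(axis=0)   # the usual forest prediction
spread = per_tree.std(axis=0)       # disagreement among trees
print(mean_pred, spread)
```

The forest prediction is exactly the per-tree mean, so the standard deviation comes for free; note this captures model disagreement, not full predictive uncertainty, which is why dedicated approximations are studied.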

  16. Prediction of cloud droplet number in a general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States)

    1996-04-01

    We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). The RAMS predicts mass concentrations of cloud water, cloud ice, rain, and snow, and the number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.

  17. A Highly Similar Mathematical Model for Cerebral Blood Flow Velocity in Geriatric Patients with Suspected Cerebrovascular Disease

    Science.gov (United States)

    Liu, Bo; Li, Qi; Wang, Jisheng; Xiang, Hu; Ge, Hong; Wang, Hui; Xie, Peng

    2015-10-01

    Cerebral blood flow velocity (CBFV) is an important parameter in the study of cerebral hemodynamics. However, a simple and highly similar mathematical model has not yet been established for analyzing CBFV. To address this issue, through TCD examination of 100 geriatric patients with suspected cerebrovascular disease (46 males and 54 females), we established a representative eighth-order Fourier function Vx(t) that simulates the CBFV. The measured TCD waveforms were compared to those derived from Vx(t), and a Kolmogorov-Smirnov test was employed to determine validity. The results showed that the TCD waves could be reconstructed for patients with different CBFVs by using their variable heart rates and the formulated maximum/minimum of Vx(t). Comparisons between derived and measured TCD waveforms suggest that the two waveforms are very similar. The results confirm that CBFV can be well modeled by an eighth-order Fourier function. This function Vx(t) can be used extensively for prospective studies of cerebral hemodynamics in geriatric patients with suspected cerebrovascular disease.
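
The modeling step, fitting an eighth-order Fourier series to a sampled periodic waveform by linear least squares, can be sketched as below; the synthetic pulse-like signal, period, and sample count are assumptions standing in for measured TCD data:

```python
# Fit V(t) = a0 + sum_k [a_k cos(k*w*t) + b_k sin(k*w*t)], k = 1..8,
# to one period of a sampled waveform via a linear least-squares solve.
import numpy as np

order = 8
period = 1.0                      # one cardiac cycle (s), illustrative
w = 2 * np.pi / period
t = np.linspace(0, period, 200, endpoint=False)
# Synthetic pulsatile "velocity" waveform (cm/s scale), not patient data.
v = 60 + 25 * np.maximum(np.sin(w * t), 0) ** 3 + 5 * np.sin(2 * w * t)

cols = [np.ones_like(t)]
for k in range(1, order + 1):
    cols += [np.cos(k * w * t), np.sin(k * w * t)]
A = np.column_stack(cols)          # design matrix: 1 + 2*order columns
coef, *_ = np.linalg.lstsq(A, v, rcond=None)
v_fit = A @ coef

rms_err = np.sqrt(np.mean((v - v_fit) ** 2))
print(rms_err)
```

Because the basis is linear in the coefficients, the fit is a single `lstsq` call; reconstruction for a different heart rate amounts to rescaling `w`, which matches the paper's use of patient heart rate in Vx(t).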

  18. Working Towards a Risk Prediction Model for Neural Tube Defects

    Science.gov (United States)

    Agopian, A.J.; Lupo, Philip J.; Tinker, Sarah C.; Canfield, Mark A.; Mitchell, Laura E.

    2015-01-01

    BACKGROUND Several risk factors have been consistently associated with neural tube defects (NTDs). However, the predictive ability of these risk factors in combination has not been evaluated. METHODS To assess the predictive ability of established risk factors for NTDs, we built predictive models using data from the National Birth Defects Prevention Study, which is a large, population-based study of nonsyndromic birth defects. Cases with spina bifida or anencephaly, or both (n = 1239), and controls (n = 8494) were randomly divided into separate training (75% of cases and controls) and validation (remaining 25%) samples. Multivariable logistic regression models were constructed with the training samples. The predictive ability of these models was evaluated in the validation samples by assessing the area under the receiver operating characteristic curves. An ordinal predictive risk index was also constructed and evaluated. In addition, the ability of classification and regression tree (CART) analysis to identify subgroups of women at increased risk for NTDs in offspring was evaluated. RESULTS The predictive ability of the multivariable models was poor (area under the receiver operating characteristic curve: 0.55 for spina bifida only, 0.59 for anencephaly only, and 0.56 for anencephaly and spina bifida combined). The predictive abilities of the ordinal risk indexes and CART models were also low. CONCLUSION Current established risk factors for NTDs are insufficient for population-level prediction of a woman's risk for having affected offspring. Identification of genetic risk factors and novel nongenetic risk factors will be critical to establishing models, with good predictive ability, for NTDs. PMID:22253139
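
The headline numbers in this record are areas under receiver operating characteristic curves; the rank-statistic (Mann-Whitney) form of that quantity is easy to compute directly. The scores below are made up for illustration, not study data:

```python
# AUC as the probability that a randomly chosen case receives a higher
# predicted risk than a randomly chosen control, counting ties as 1/2.

def roc_auc(case_scores, control_scores):
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

cases    = [0.9, 0.6, 0.55, 0.4]   # illustrative predicted risks, affected
controls = [0.8, 0.5, 0.45, 0.3]   # illustrative predicted risks, controls
print(roc_auc(cases, controls))
```

An AUC of 0.5 is chance-level discrimination, which is why the study's values of 0.55 to 0.59 are read as poor predictive ability.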

  19. Predictive QSAR Models for the Toxicity of Disinfection Byproducts.

    Science.gov (United States)

    Qin, Litang; Zhang, Xin; Chen, Yuhan; Mo, Lingyun; Zeng, Honghu; Liang, Yanpeng

    2017-10-09

    Several hundred disinfection byproducts (DBPs) in drinking water have been identified, and are known to have potentially adverse health effects. There are toxicological data gaps for most DBPs, and the predictive method may provide an effective way to address this. The development of an in-silico model of toxicology endpoints of DBPs is rarely studied. The main aim of the present study is to develop predictive quantitative structure-activity relationship (QSAR) models for the reactive toxicities of 50 DBPs in the five bioassays of X-Microtox, GSH+, GSH-, DNA+ and DNA-. All-subset regression was used to select the optimal descriptors, and multiple linear-regression models were built. The developed QSAR models for five endpoints satisfied the internal and external validation criteria: coefficient of determination (R²) > 0.7, explained variance in leave-one-out prediction (Q²LOO) and in leave-many-out prediction (Q²LMO) > 0.6, variance explained in external prediction (Q²F1, Q²F2, and Q²F3) > 0.7, and concordance correlation coefficient (CCC) > 0.85. The application domains and the meaning of the selective descriptors for the QSAR models were discussed. The obtained QSAR models can be used in predicting the toxicities of the 50 DBPs.
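
One of the validation statistics named here, Q² for leave-one-out prediction, can be computed for a multiple linear-regression model by refitting with each sample held out, as sketched below on synthetic descriptors (not the 50 DBPs):

```python
# Q^2_LOO = 1 - PRESS / TSS for an ordinary least-squares model, where
# PRESS sums squared errors of predictions made with each sample left out.
import numpy as np

rng = np.random.RandomState(1)
X = rng.randn(30, 3)                       # 3 "descriptors", 30 "compounds"
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + 0.1 * rng.randn(30)

def fit_predict(X_tr, y_tr, X_te):
    """Fit OLS with intercept on the training rows, predict the test rows."""
    A = np.column_stack([np.ones(len(X_tr)), X_tr])
    beta, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.column_stack([np.ones(len(X_te)), X_te]) @ beta

press = 0.0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    pred_i = fit_predict(X[mask], y[mask], X[i:i + 1])[0]
    press += (y[i] - pred_i) ** 2

q2_loo = 1.0 - press / np.sum((y - y.mean()) ** 2)
print(q2_loo)
```

The leave-many-out and external variants (Q²LMO, Q²F1-F3) follow the same 1 - error/variance pattern with different held-out sets, which is why the criteria in the abstract are stated as thresholds on a common scale.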

  20. Predictive QSAR Models for the Toxicity of Disinfection Byproducts

    Directory of Open Access Journals (Sweden)

    Litang Qin

    2017-10-01

    Full Text Available Several hundred disinfection byproducts (DBPs) in drinking water have been identified, and are known to have potentially adverse health effects. There are toxicological data gaps for most DBPs, and the predictive method may provide an effective way to address this. The development of an in-silico model of toxicology endpoints of DBPs is rarely studied. The main aim of the present study is to develop predictive quantitative structure–activity relationship (QSAR) models for the reactive toxicities of 50 DBPs in the five bioassays of X-Microtox, GSH+, GSH−, DNA+ and DNA−. All-subset regression was used to select the optimal descriptors, and multiple linear-regression models were built. The developed QSAR models for five endpoints satisfied the internal and external validation criteria: coefficient of determination (R²) > 0.7, explained variance in leave-one-out prediction (Q²LOO) and in leave-many-out prediction (Q²LMO) > 0.6, variance explained in external prediction (Q²F1, Q²F2, and Q²F3) > 0.7, and concordance correlation coefficient (CCC) > 0.85. The application domains and the meaning of the selective descriptors for the QSAR models were discussed. The obtained QSAR models can be used in predicting the toxicities of the 50 DBPs.